
Welcome to the Q&A, an occasional bonus issue that will help you get to know the leaders, builders, and thinkers working to give us back control over our online lives. Let me know what you think and who else you’d like to hear from at [email protected].
Ganesh Shankar
Director of Product Management, User Protection
Google Privacy, Safety & Security
Ganesh has over 15 years of experience in product management, specializing in complex problems at the intersection of technology and society, including healthcare, crisis response, and online safety. Having lived and worked in Australia, the UK, Germany, and Canada while supporting globally distributed teams, he is particularly interested in exploring the societal implications of AI and its potential for creating a safer world.
1. How do you describe what you do to people who are not in the technology industry?
I’m on a mission to scale safety. Every piece of technology can be used for good or ill, and the purpose of my organization is to put the right safeguards in place to ensure that we maximize the positive benefits of new technologies while minimizing their potential for harm.
To take an example from outside the technology industry: we act as both fire wardens and firefighters.
We spend a lot of time with subject matter experts in policy to create standards and common technologies that product teams can adopt to prevent known types of misuse. When we detect a novel risk, we work with product teams to mitigate the problem, and then update our standards and common technologies to protect everyone at scale.
2. What is your origin story? What made you care about this work and enter your field?
I’ve always been fascinated by the diversity of people and cultures in the world and what brings us together or pulls us apart. As someone who has immigrated to start a new life in a new country several times, I’m keenly aware of what it’s like to be on the outside of communities and what it takes to integrate while preserving what makes you, you.
I’ve always wanted to work on developing technology that helps solve real-world problems, and I gravitate to these areas.
A few years ago I worked on a project that used technology to foster healthy and sustainable communities. I learned a lot about culture and communities and became a behavioral science nerd because in these kinds of projects, where technology intersects with society, the technology part tends to be a lot easier than predicting human behavior.
I didn’t realize it at the time, but I was already working on a safety problem: if you want to build a healthy community, you need policies, processes, and social norms, as well as the technology to manage them.
I fell in love with this problem space, which is inherently about finding equilibrium and making things better, while accepting that you can never solve things completely because the global and technological landscapes are always shifting. It’s an evergreen mission, in a way, and that’s humbling and exciting.
3. What book(s), paper(s), or project(s) influenced how you think about the relationship between platforms and the communities who use them?
“The Art of Community” by Charles Vogl, which was recommended to me when I started working on technology to support communities. While the book is not specific to online communities, I love the way it breaks down what defines a community and what healthy communities need to succeed.
This book and my work in communities led me to think of a community as a kind of collective personality, with every member who joins or leaves changing that personality a little. You can and should expect communities to evolve over time; we can’t freeze them in place even if we wanted to. So successful communities are ones that invest in themselves and are able to adapt to change.
4. What are you working on right now that you are most excited about?
I’m excited about the use of emerging AI capabilities to create a safer world. So much of the work of safety teams involves understanding context and telling the difference between innocuous and harmful behavior. Large web platforms of the past decade excelled at scale but struggled with nuance. The ability of frontier models to reason about and understand multimodal content provides a step change in capabilities, which teams focused on safety can use to further our work.
5. If you could get everyone in this space to pay attention to one under-appreciated problem or opportunity, what would it be?
The platforms that power the modern web give developers incredible reach. You can easily create a new product that reaches tens or hundreds of millions of people in weeks. This cuts both ways: it allows us to scale ideas and businesses faster than ever, but if those same ideas have inherent safety risks, the risks scale too.
Those of us developing new products need to keep possible failure scenarios in mind, not just test the “happy path” to success. And platform providers need to make safety inherent in the design of the platform itself.
To take a real-world analogy: when you build a new car, you crash-test it as part of the process, because it is easier to make the design safer before scaling manufacturing. This safety-by-design approach is what we should strive for in online platforms as well.

