5 December 2025

Google on Using AI to Reduce Harmful Content – DRIF25 Insights

At the Digital Rights and Inclusion Forum 2025 (DRIF25), organised by Paradigm Initiative and held from 29 April to 1 May at the Mulungushi International Conference Centre (MICC) in Lusaka, Charles Bradley, Manager of Trust Strategy at Google, shared key insights on how Google is applying artificial intelligence to make search safer and more relevant, especially for vulnerable users.

Bradley’s session explored the capabilities of large language models (LLMs) and how they are helping Google better understand user intent at scale. One of the standout achievements highlighted was a 30% reduction in the appearance of unexpected or shocking content. This has been achieved through AI systems trained to detect sensitive queries and respond with appropriate, supportive resources. For example, when a user types something like “I wanna end it,” Google’s systems can identify this as a potential mental health crisis and immediately surface suicide prevention helplines and support networks.
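To make the idea concrete, here is a minimal sketch of how a search pipeline might route sensitive queries to supportive resources before normal ranking runs. The pattern list, function names, and resource panel below are illustrative assumptions only; Bradley described Google's actual approach as LLM-based intent understanding at scale, not simple keyword matching.

```python
import re

# Hypothetical patterns for illustration; a production system would rely on
# a trained model that covers far broader phrasing, slang, and languages.
CRISIS_PATTERNS = [
    r"\bend it( all)?\b",
    r"\bkill myself\b",
    r"\bsuicid(e|al)\b",
]

# Placeholder resource panel, standing in for real helplines and support networks.
CRISIS_RESOURCES = {
    "title": "Help is available",
    "helpline": "Call or text a local suicide prevention line",
}

def detect_crisis_query(query: str) -> bool:
    """Return True if the query matches a known crisis pattern."""
    q = query.lower()
    return any(re.search(p, q) for p in CRISIS_PATTERNS)

def respond(query: str) -> dict:
    """Route crisis queries to supportive resources before normal ranking."""
    if detect_crisis_query(query):
        return {"type": "crisis_panel", "content": CRISIS_RESOURCES}
    return {"type": "standard_results", "content": None}

print(respond("I wanna end it"))
# -> {'type': 'crisis_panel', 'content': {...}}
```

The key design point the session stressed is ordering: the safety check runs first, so a user in crisis sees support resources immediately rather than whatever the ranker would otherwise surface.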

The presentation also highlighted how LLMs improve the quality of search results by prioritizing well-reviewed, authoritative content. This is especially critical in video results, where Google now amplifies lived experiences and expert perspectives to ensure that users are met with accurate, empathetic, and helpful information.
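As a rough illustration of this kind of quality-weighted ranking, the sketch below blends a relevance score with an authority signal when ordering results. The field names, the 0.4 weight, and the linear scoring formula are assumptions for illustration, not Google's ranking system.

```python
from dataclasses import dataclass

@dataclass
class Result:
    url: str
    relevance: float  # 0..1, query-document match strength
    authority: float  # 0..1, e.g. review quality / expertise signals

def rerank(results: list[Result], authority_weight: float = 0.4) -> list[Result]:
    """Order results by a weighted blend of relevance and authority.

    The 0.4 weight is an arbitrary illustrative choice; real systems
    tune such signals through large-scale evaluation.
    """
    def score(r: Result) -> float:
        return (1 - authority_weight) * r.relevance + authority_weight * r.authority
    return sorted(results, key=score, reverse=True)

results = [
    Result("https://example.com/forum-post", relevance=0.9, authority=0.2),
    Result("https://example.com/expert-review", relevance=0.8, authority=0.9),
]
for r in rerank(results):
    print(r.url)
# The expert-reviewed page outranks the marginally more relevant forum post.
```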

Bradley emphasized Google’s commitment to ethical AI, focusing on transparency, reducing algorithmic bias, and ensuring accountability in automated content moderation. The session called for greater collaboration across sectors to protect online users, while still ensuring access to critical information.

By focusing on responsible AI deployment, Google aims to strike a balance between safety and access, ensuring that users, especially those in vulnerable situations, are empowered and protected throughout their search journeys.

Sandi

Tech Blogger & Marketer.

