Source: Google
Next year, a number of high-profile elections, including the US Presidential election, will take place around the world. We have been supporting and protecting elections across our products for years, and we remain deeply committed to this work.
In 2024, we will continue our efforts to safeguard our platforms, help people make informed decisions, surface high-quality information to voters, and equip campaigns with best-in-class security. We'll do this work with an increased focus on the role artificial intelligence (AI) might play. As with any emerging technology, AI presents new opportunities as well as challenges. In particular, our AI models will enhance our abuse-fighting efforts, including our ability to enforce our policies at scale. But we are also preparing for how AI could change the misinformation landscape. Here's how we are approaching these challenges:
Safeguarding our platforms from abuse
Over the past several years, we've supported numerous elections globally, and with each election cycle we continue to apply what we've learned to improve our protections against harmful content and create trustworthy experiences.
To safeguard our platforms, we have longstanding policies that inform how we approach areas like manipulated media, hate and harassment, incitement to violence, and demonstrably false claims that could undermine democratic processes. For over a decade, we've leveraged machine learning classifiers and AI to identify and remove content that violates these policies. And now, with recent advancements in our Large Language Models (LLMs), we're experimenting with building faster and more adaptable enforcement systems. Early results indicate that this will enable us to remain nimble and take action even more quickly when new threats emerge.
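As a rough illustration of how an LLM-assisted enforcement pipeline can work in general, the sketch below scores a piece of content against a named policy and routes violations either to automated action or to human review depending on confidence. The scoring function is a trivial stand-in for an LLM call, and every name, phrase and threshold here is an assumption for illustration, not Google's system.

```python
# Illustrative sketch only: a policy-enforcement pipeline in which an LLM-style
# classifier scores content against a written policy before any action is taken.
# The scoring function below is a trivial stand-in, not Google's system.
from dataclasses import dataclass


@dataclass
class PolicyVerdict:
    label: str          # "ok" or "review"
    confidence: float   # 0.0 - 1.0
    policy: str         # which policy the verdict refers to


def score_against_policy(text: str, policy: str) -> PolicyVerdict:
    """Stand-in for an LLM call that rates text against a written policy.

    A production system would send the policy text and the content to a
    large language model and parse a structured verdict from its response.
    """
    flagged_phrases = {"polling places are closed", "your vote will not count"}
    hit = any(phrase in text.lower() for phrase in flagged_phrases)
    return PolicyVerdict(
        label="review" if hit else "ok",
        confidence=0.9 if hit else 0.6,
        policy=policy,
    )


def enforce(text: str) -> str:
    verdict = score_against_policy(text, policy="demonstrably-false-election-claims")
    if verdict.label == "ok":
        return "allow"
    # High-confidence violations can be actioned automatically; borderline
    # cases are routed to human reviewers instead.
    return "remove" if verdict.confidence >= 0.8 else "queue_for_human_review"


if __name__ == "__main__":
    print(enforce("Breaking: polling places are closed statewide today."))  # remove
    print(enforce("Here is tonight's weather forecast."))                   # allow
```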
We’re also focused on taking a principled and responsible approach to introducing generative AI products – including Search Generative Experience (SGE) and Bard – where we’ve prioritized testing for safety risks ranging from cybersecurity vulnerabilities to misinformation and fairness. Beginning early next year, in preparation for the 2024 elections and out of an abundance of caution on such an important topic, we’ll restrict the types of election-related queries for which Bard and SGE will return responses.
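One common pattern for restricting a class of queries is to gate the generation step behind a topic filter. The sketch below is a minimal, assumed example using keyword patterns and a canned fallback; Google has not described how Bard or SGE implement the restriction, so the patterns, function names and fallback text are all hypothetical.

```python
# Illustrative sketch only: declining election-related queries with a keyword
# gate placed ahead of response generation. Patterns and messages are assumed.
import re

ELECTION_PATTERNS = [
    r"\belection\b",
    r"\bballot\b",
    r"\bcandidate\b",
    r"\bvot(e|ing|er)\b",
    r"\bpolling (place|station)\b",
]


def is_election_query(query: str) -> bool:
    q = query.lower()
    return any(re.search(pattern, q) for pattern in ELECTION_PATTERNS)


def generate_response(query: str) -> str:
    # Placeholder for a real model call.
    return f"(model response to: {query})"


def answer(query: str) -> str:
    if is_election_query(query):
        # Restricted topic: point people to authoritative sources instead of
        # generating an answer.
        return "For election information, please consult official election sources."
    return generate_response(query)


if __name__ == "__main__":
    print(answer("Where is my polling place?"))
    print(answer("How do I bake sourdough bread?"))
```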
Helping people identify AI-generated content
To help people identify media that may seem realistic but is actually AI-generated, we’ve introduced several new tools and protections, including:
- Ads disclosures: We were the first tech company to require election advertisers to prominently disclose when their ads include realistic synthetic content that’s been digitally altered or generated, including by AI tools.
- Content labels: Over the coming months, YouTube will require creators to disclose when they’ve created realistic altered or synthetic content, and will display a label that tells viewers when the content they’re watching is synthetic.
- Additional context:
- Digital watermarking: SynthID, a tool in beta from Google DeepMind, directly embeds a digital watermark into AI-generated images and audio.
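To make the watermarking idea in the last item concrete, here is a deliberately simplified toy example: it hides a fixed bit pattern in the least significant bits of an image array and checks for it later. SynthID is a learned, robustness-focused watermark and does not work this way; the sketch only illustrates the general embed-and-detect concept, and every name and value in it is invented.

```python
# Toy watermark sketch: hide a fixed bit pattern in pixel LSBs and detect it.
# This is NOT how SynthID works; it only illustrates embed-and-detect.
import numpy as np

WATERMARK_BITS = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)  # assumed ID


def embed(image: np.ndarray, bits: np.ndarray = WATERMARK_BITS) -> np.ndarray:
    """Write the bit pattern into the least significant bits of the first pixels."""
    out = image.copy()
    flat = out.reshape(-1)
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return out


def detect(image: np.ndarray, bits: np.ndarray = WATERMARK_BITS) -> bool:
    """Check whether the expected bit pattern is present."""
    flat = image.reshape(-1)
    return bool(np.array_equal(flat[: bits.size] & 1, bits))


if __name__ == "__main__":
    img = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
    print(detect(embed(img)))  # True
    print(detect(img))         # False with very high probability
```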
Surfacing high-quality information to voters
During elections, people search for information on candidates, voter registration deadlines, the location of their polling place, and more. Here are some of the ways we make it easy for people to find what they need.
- Search: We’ll continue to work with partners like Democracy Works to surface authoritative information from state and local election offices at the top of Search results when people search for topics like how and where to vote. And as with previous U.S. elections, we’re working with The Associated Press to present authoritative election results on Google.
- News: In 2022, we launched additional News features to help readers discover authoritative local and regional election news from states across the country.
- YouTube: YouTube will work to ensure the right measures are in place to connect people to high-quality election news and information.
- Maps: We’ll clearly highlight polling locations to make them easy to find and navigate to. To prevent bad actors from spamming election-related places on Maps, we’ll apply enhanced protections for contributed content on places like government office buildings.
- Ads: We’ve long required advertisers who wish to run election ads (federal and state) to go through an identity verification process and to include an in-ad disclosure that clearly shows who paid for the ad. These ads also appear in our Political Advertising Transparency Report.
Partnering to equip campaigns with best-in-class security
Elections come with increased cybersecurity risks. Our Advanced Protection Program, our strongest set of cyber protections, is available to elected officials, candidates, campaign workers, journalists, election workers and other high-risk individuals. We’re excited to be expanding our longstanding partnership with Defending Digital Campaigns (DDC) to provide campaigns with the security tools they need to stay safe online, including tools to rapidly configure Google Workspace’s security features. In 2023, through partners like DDC, we also distributed 100,000 free Titan Security Keys to high-risk users, and next year we’ve committed to providing an additional 100,000 of our new Titan Security Keys. Additionally, to date, our Campaign Security Project has helped train more than 9,000 campaign and election officials across the political spectrum in digital security best practices.
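As one small, assumed example of the kind of Workspace security check that campaign tooling might automate, the sketch below uses the Admin SDK Directory API to list users who have not yet enrolled in 2-Step Verification. It presumes a service account with domain-wide delegation and the admin.directory.user.readonly scope; the key file path and admin address are placeholders, and this is not the DDC tooling described above.

```python
# Illustrative sketch: list Workspace users not enrolled in 2-Step Verification.
# Assumes a service account with domain-wide delegation; paths are placeholders.
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/admin.directory.user.readonly"]


def users_without_2sv(key_file: str, admin_email: str) -> list[str]:
    creds = service_account.Credentials.from_service_account_file(
        key_file, scopes=SCOPES
    ).with_subject(admin_email)  # impersonate a domain administrator
    directory = build("admin", "directory_v1", credentials=creds)

    missing, page_token = [], None
    while True:
        resp = (
            directory.users()
            .list(customer="my_customer", maxResults=200, pageToken=page_token)
            .execute()
        )
        for user in resp.get("users", []):
            if not user.get("isEnrolledIn2Sv", False):
                missing.append(user["primaryEmail"])
        page_token = resp.get("nextPageToken")
        if not page_token:
            return missing


if __name__ == "__main__":
    print(users_without_2sv("service-account-key.json", "admin@example.com"))
```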
Our Threat Analysis Group (TAG) and Mandiant Intelligence help identify, monitor and tackle emerging threats, ranging from coordinated influence operations to cyber espionage campaigns against high-risk entities. For example, on any given day, TAG tracks more than 270 targeted or government-backed attacker groups from more than 50 countries. We publish our findings regularly to keep the public and private sectors vigilant and well-informed. Mandiant also helps organizations build holistic election security programs and harden their defenses with comprehensive tools, ranging from proactive compromise assessment services to threat intelligence tracking of information operations.