Tech leaders unite to safeguard 2024 elections against deceptive AI usage


At the Munich Security Conference (MSC) today, major tech firms committed to combating deceptive AI content ahead of this year’s global elections, where more than 40 countries with a combined electorate exceeding four billion will participate.

The “Tech Accord to Combat Deceptive Use of AI in 2024 Elections” outlines a series of actions to counter misleading AI-generated content aimed at manipulating voters. Participating companies have agreed to join forces in developing technologies to detect and counter the spread of such content online, as well as to launch educational initiatives and enhance transparency efforts. 

The accord also emphasises principles such as the need to trace the source of deceptive election-related content and raise public awareness about the issue. This effort represents a crucial move to protect online communities from harmful AI-generated content and builds upon the ongoing efforts of individual companies.

The deceptive AI content addressed by the accord includes audio, video, and images that inaccurately portray or manipulate the appearance, voice, or behavior of political candidates, election officials, and other key figures involved in democratic elections. It also encompasses false information intended to mislead voters about voting procedures, locations, and timing.

“Elections are the beating heart of democracies. The Tech Accord to Combat Deceptive Use of AI in 2024 elections is a crucial step in advancing election integrity, increasing societal resilience, and creating trustworthy tech practices,” said Ambassador Dr. Christoph Heusgen, Munich Security Conference Chairman. “MSC is proud to offer a platform for technology companies to take steps toward reining in threats emanating from AI while employing it for democratic good at the same time.” 

Among the signatories to the accord are prominent technology companies such as Adobe, Amazon, Google, IBM, Meta, Microsoft, and TikTok, along with others including Anthropic, ElevenLabs, Inflection AI, McAfee, Snap Inc., Trend Micro, and Truepic, demonstrating a collective commitment to safeguarding the integrity of democratic processes against the misuse of AI technology.

These commitments apply where they are relevant to the services each company provides.

“Transparency builds trust,” said Dana Rao, General Counsel and Chief Trust Officer at Adobe. “That’s why we’re excited to see this effort to build the infrastructure we need to provide context for the content consumers are seeing online. With elections happening around the world this year, we need to invest in media literacy campaigns to ensure people know they can’t trust everything they see and hear online, and that there are tools out there to help them understand what’s true.”   

“This is a pivotal election year for more than 4 billion voters globally and security and trust are essential to the success of elections and campaigns around the world,” said David Zapolsky, Senior Vice President of Global Public Policy and General Counsel at Amazon. “Amazon is committed to upholding democracy and the Munich Accord complements our existing efforts to build and deploy new AI technologies that are reliable, secure, and safe. We believe this accord is an important part of our collective work to advance safeguards against deceptive activity and protect the integrity of elections.” 

“Google has been supporting election integrity for years, and today’s accord reflects an industry-side commitment against AI-generated election misinformation that erodes trust,” said Kent Walker, President, Global Affairs at Google. “We can’t let digital abuse threaten AI’s generational opportunity to improve our economies, create new jobs, and drive progress in health and science.” 

“Disinformation campaigns are not new, but in this exceptional year of elections – with more than 4 billion people heading to the polls worldwide – concrete, cooperative measures are needed to protect people and societies from the amplified risks of AI-generated deceptive content,” said Christina Montgomery, Vice President and Chief Privacy & Trust Officer, IBM. “That’s why IBM today reaffirmed our commitment to ensuring safe, trustworthy, and ethical AI.” 

“With so many major elections taking place this year, it’s vital we do what we can to prevent people being deceived by AI-generated content,” said Nick Clegg, President, Global Affairs at Meta. “This work is bigger than any one company and will require a huge effort across industry, government and civil society. Hopefully, this accord can serve as a meaningful step from industry in meeting that challenge.”  

“As society embraces the benefits of AI, we have a responsibility to help ensure these tools don’t become weaponised in elections. AI didn’t create election deception, but we must ensure it doesn’t help deception flourish,” said Brad Smith, Vice Chair and President of Microsoft.

“We’re committed to protecting the integrity of elections by enforcing policies that prevent abuse and improving transparency around AI-generated content,” said Anna Makanju, Vice President of Global Affairs at OpenAI. “We look forward to working with industry partners, civil society leaders and governments around the world to help safeguard elections from deceptive AI use.”  

“We believe that integrity in elections and information is critical to supporting democratic processes and institutions,” said Jennifer Stout, VP of Global Public Policy at Snap Inc. “We are delighted to sign this Accord which builds on Snap’s longstanding commitment to advancing transparency and combating the spread of harmful deceptive content, whether it is user or AI-generated.” 

“It’s crucial for industry to work together to safeguard communities against misleading and deceptive AI in this historic election year.” said Theo Bertram, VP, Global Public Policy (Europe), TikTok. “This builds on our continued investment in protecting election integrity and advancing responsible and transparent AI-generated content practices through robust rules, new technologies, and media literacy partnerships with experts.” 

Linda Yaccarino, CEO of X said, “In democratic processes around the world, every citizen and company has a responsibility to safeguard free and fair elections. That’s why we must understand the risks AI content could pose to the process. X is dedicated to playing its part, collaborating with peers to combat AI threats while also protecting free speech and maximising transparency.” 

 

Pearl Dy
Pearl is a marketing and content specialist based in Australia. She is passionate about business and development communications.