Twenty tech companies working on artificial intelligence said Friday they had signed a “pledge” to try to prevent their AI software from being used to interfere in elections, including in the United States.
The signatories range from tech giants such as Microsoft and Google to a small startup that allows people to make fake voices — the kind of generative-AI product that could be abused in an election to create convincing deepfakes of a candidate.
The accord is, in effect, a recognition that the companies’ own products pose significant risks in a year in which 4 billion people around the world are expected to vote in elections.
“Deceptive AI Election Content can deceive the public in ways that jeopardize the integrity of electoral processes,” the document reads.
The accord is also a recognition that lawmakers around the world have been slow to respond to the rapid advance of generative AI, leaving the tech industry to explore self-regulation.
“As society embraces the benefits of AI, we have a responsibility to help ensure these tools don’t become weaponized in elections,” Brad Smith, vice chair and president of Microsoft, said in a statement.
The 20 companies to sign the pledge are: Adobe, Amazon, Anthropic, Arm, ElevenLabs, Google, IBM, Inflection AI, LinkedIn, McAfee, Meta, Microsoft, Nota, OpenAI, Snap, Stability AI, TikTok, Trend Micro, Truepic and X.
The accord is voluntary and falls short of an outright ban on AI content in elections, as some people have called for. At 1,500 words, the document lists eight steps that the companies say they will take this year. The steps include developing new tools to distinguish AI-generated images from authentic content and being transparent with the public about notable developments.
Free Press, an advocacy group that supports an open internet, said the pledge amounted to an empty promise because tech companies have been backsliding on previous commitments to election integrity they made after the 2020 election. The group called for more oversight by human reviewers.
“Voluntary promises like the one announced today simply aren’t good enough to meet the global challenges facing democracy,” Free Press senior counsel Nora Benavidez said in a statement.
“Every election cycle, tech companies pledge to a vague set of democratic standards and then fail to fully deliver on these promises,” she said.
Rep. Yvette Clarke, D-N.Y., said she welcomed the tech accord and wants to see Congress build on it. Clarke has sponsored legislation to regulate deepfakes and AI-generated content in political ads.
“This could be a defining moment for this Congress, and this may be the one unifying issue where we can band together to protect this nation and future generations of Americans to come,” she said in a statement.
This year has been called the biggest year for democracy in history, with elections taking place in seven of the world’s 10 most populous countries. In addition to the U.S. election in November, there are upcoming nationwide votes in India, Russia and Mexico. There have already been elections this year in Indonesia, Pakistan and Bangladesh.
A fake robocall claiming to be from President Joe Biden ahead of January’s New Hampshire primary caused widespread alarm about the potential for fake voices, images and video in politics. The Federal Communications Commission voted this month to outlaw robocalls that contain AI-generated voices.
Individual tech companies have put in place their own measures. Meta, which owns Facebook and Instagram, said it would try to label AI-made images, although it warned last month that it didn’t have the technical ability to do the same with audio and video generated by AI.
Nick Clegg, president for global affairs at Meta, said the pledge is intended as a “meaningful step from industry” toward cracking down on deceptive content. But he said the work also requires effort by governments and civil society.
“With so many major elections taking place this year, it’s vital we do what we can to prevent people being deceived by AI-generated content,” he said in a statement.
The companies announced their accord at the Munich Security Conference, an annual event for world leaders to discuss an array of challenges. Attendees at the conference this weekend include Vice President Kamala Harris and Israeli President Isaac Herzog.
The topic of generative AI dominated public and private discussions at the World Economic Forum in Davos, Switzerland, in January.