WASHINGTON (AP) — The Democratic National Committee was watching earlier this year as campaigns nationwide were experimenting with artificial intelligence. So the organization approached a handful of influential party campaign committees with a request: Sign onto guidelines that would commit them to use the technology in a “responsible” way.
The draft agreement, a copy of which was obtained by The Associated Press, was hardly full of revolutionary ideas. It asked campaigns to check work by AI tools, protect against biases and avoid using AI to create misleading content.
“Our goal is to use this new technology both effectively and ethically, and in a way that advances – rather than undermines – the values that we espouse in our campaigns,” the draft said.
The plan went nowhere.
Instead of fostering an agreement, the guidelines sparked a debate about the value of such pledges, particularly those governing fast-evolving technology. Among the concerns expressed by the Democratic campaign organizations: Such a pledge might hamstring their ability to deploy AI and could turn off donors with ties to the AI industry. Some committee officials were also irked that the DNC gave them only a few days to agree to the guidelines.
The proposal’s demise highlighted internal divisions over campaign tactics and the party’s approach to artificial intelligence amid warnings from experts that the technology is supercharging the proliferation of disinformation.
Hannah Muldavin, a senior spokesperson at the Democratic National Committee, said the group is not giving up on finding a consensus.
The DNC, she said, “will continue to engage with our sister committees to discuss ideas and issues important to Democratic campaigns and to American voters, including AI.”
“It’s not uncommon for ideas and plans to shift, especially in the midst of a busy election year, and any documents on this subject reflect early and ongoing conversations,” Muldavin said, adding the “DNC and our partners take seriously the opportunities and challenges presented by AI.”
The wrangling comes as campaigns have increasingly deployed artificial intelligence — computer systems, software or processes that emulate aspects of human work and cognition — to optimize workloads. That includes using large language models to write fundraising emails, text supporters and build chatbots to answer voters’ questions.
That trend is expected to continue as November’s general election approaches, with campaigns turning to supercharged generative AI tools to create text and images, as well as clone human voices and create video at lightning speeds.
The Republican National Committee used AI in a television spot last year predicting a dystopian future under President Joe Biden.
Much of that adoption, however, has been overshadowed by concerns about how campaigns could use artificial intelligence in ways that trick voters. Experts have warned that AI has become so powerful that it has made it easy to generate “deep fake” videos, audio snippets and other media targeting opposing candidates. Some states have passed laws regulating the way generative artificial intelligence can be used. But Congress has so far failed to pass any bills regulating artificial intelligence on the federal level.
In the absence of regulation, the DNC sought a set of guidelines it could point to as evidence the party was taking seriously the threat and promise of AI. It sent the proposal in March to the five Democratic campaign committees that seek to elect House, Senate, gubernatorial, state legislative and state attorneys general candidates to office, according to the draft agreement.
The goal was to have each committee agree to a slate of AI guardrails, and the DNC proposed issuing a joint statement proclaiming such guidelines would ensure that campaigns could use “the tools they need to prevent the spread of misinformation and disinformation, while empowering campaigns to safely, responsibly use generative AI to engage more Americans in our democracy.”
The Democratic committee had hoped the statement would be signed by Chair Jaime Harrison and the leaders of the other organizations.
Democratic operatives said the proposal landed with a thud. Some senior leaders at the committees worried that the agreement might have unforeseen consequences, perhaps constricting how campaigns use AI, according to multiple Democratic operatives familiar with the outreach.
And it might send the wrong message to technology companies and executives who work on AI, many of whom help fill campaign coffers during election years.
Some of the Democratic Party’s most prolific donors are top tech entrepreneurs and AI evangelists, including Sam Altman, the CEO of OpenAI, and Eric Schmidt, the former CEO of Google.
Altman has donated over $200,000 to the Biden campaign and his aligned Democratic joint fundraising committee since the start of last year, according to data from the Federal Election Commission, and Schmidt’s contributions to those groups have topped $500,000 over the same time.
Two other AI proponents, Dustin Moskovitz, the co-founder of Facebook, and Reid Hoffman, the co-founder of LinkedIn, donated more than $900,000 to Biden’s joint fundraising committee this cycle, according to the same data.
The DNC plan caught the committees off guard because it came with little explanation, other than a desire to get each committee to agree to the list of best practices within a few days, said multiple Democratic operatives who spoke on condition of anonymity because they weren't authorized to discuss the matter. Aides to the Democratic Congressional Campaign and Democratic Senatorial Campaign committees said they felt rushed by a DNC timeline that urged them to sign quickly.
Representatives from the Democratic Attorneys General Association did not respond to the Associated Press’ request for comment. Spokesmen from the Democratic Governors Association and Democratic Legislative Campaign Committee declined to comment.
The Republican National Committee did not respond to questions about its AI guidelines. The Biden campaign also declined to comment when asked about the DNC effort.
The four-page agreement — “Guidelines on Responsible Use of Generative AI in Campaigns” — covered everything from ensuring that artificial intelligence systems were not trusted without a human checking their work to notifying voters when they are interacting with AI-generated content or systems.
“As the explosive rise of generative AI transforms every corner of public life – including political campaigns – it’s more important than ever that we limit this new technology’s potential threat to voters’ rights, and instead leverage it to build innovative, efficient campaigns and a stronger, more inclusive democracy," the proposal said.
The guidelines were divided into five sections that included titles such as “Offering Human Alternatives, Consideration and Fallback” and “Providing Notice and Explanation.” The proposed rules would have required the committees to ensure “a real person should be responsible for approving AI-generated content and be accountable for how, where, and to whom it is deployed.”
The directive outlined how “users should always be aware when they are interacting with an AI bot” and stressed that any images or video created by AI “should be flagged” as such. And it stressed that campaigns should use AI to assist staffers, not replace them.
“Campaigns are a human-driven and human-motivated business,” read the agreement. “Use efficiency gains to reach more voters and focus more on quality control and sustainability.”
It also urged campaigns not to use “generative AI to create misleading content. Period.”
___
This story is part of an Associated Press series, “The AI Campaign,” exploring the influence of artificial intelligence in the 2024 election cycle.
___
The Associated Press receives financial assistance from the Omidyar Network to support coverage of artificial intelligence and its impact on society. AP is solely responsible for all content. Find AP’s standards for working with philanthropies, a list of supporters and funded coverage areas at AP.org.
____
The Associated Press and OpenAI have a licensing and technology agreement allowing OpenAI access to part of the AP’s text archives.