AI Tools Are Still Generating Misleading Election Images

Despite years of evidence to the contrary, many Republicans still believe that President Joe Biden’s win in 2020 was illegitimate. A number of election-denying candidates won their primaries on Super Tuesday, including Brandon Gill, the son-in-law of right-wing pundit Dinesh D’Souza and a promoter of the debunked 2000 Mules film. Going into this year’s elections, claims of election fraud remain a staple for candidates running on the right, fueled by dis- and misinformation both online and off.

And the advent of generative AI has the potential to make the problem worse. A new report from the Center for Countering Digital Hate (CCDH), a nonprofit that tracks hate speech on social platforms, found that even though generative AI companies say they’ve put policies in place to prevent their image-creating tools from being used to spread election-related disinformation, researchers were able to circumvent their safeguards and create the images anyway.

While some of the images featured political figures, namely President Joe Biden and Donald Trump, others were more generic, and Callum Hood, head researcher at CCDH, worries that the generic ones could be even more misleading. Some images created from the researchers’ prompts, for instance, showed militias outside a polling place, ballots thrown in the trash, and voting machines being tampered with. In one instance, researchers were able to prompt Stability AI’s DreamStudio to generate an image of President Biden in a hospital bed, looking ill.

“The real weakness was around images that could be used to try and evidence false claims of a stolen election,” says Hood. “Most of the platforms don’t have clear policies on that, and they don’t have clear safety measures either.”

CCDH researchers tested 160 prompts on ChatGPT Plus, Midjourney, DreamStudio, and Image Creator, and found that Midjourney was the most likely to produce misleading election-related images, doing so about 65 percent of the time. Researchers were able to prompt ChatGPT Plus to do so only 28 percent of the time.

“It shows that there can be significant differences between the safety measures these tools put in place,” says Hood. “If one so effectively seals these weaknesses, it means that the others haven’t really bothered.”

In January, OpenAI announced it was taking steps to “make sure our technology is not used in a way that could undermine this process,” including disallowing images that would discourage people from “participating in democratic processes.” In February, Bloomberg reported that Midjourney was considering banning the creation of political images altogether. DreamStudio prohibits generating misleading content but does not appear to have a specific election policy. And while Image Creator prohibits creating content that could threaten election integrity, it still allows users to generate images of public figures.

Kayla Wood, a spokesperson for OpenAI, told WIRED that the company is working to “improve transparency on AI-generated content and design mitigations like declining requests that ask for image generation of real people, including candidates. We are actively developing provenance tools, including implementing C2PA digital credentials, to assist in verifying the origin of images created by DALL-E 3. We will continue to adapt and learn from the use of our tools.”

Microsoft, Stability AI, and Midjourney did not respond to requests for comment.

Hood worries that the problem with generative AI is twofold: not only do generative AI platforms need to prevent the creation of misleading images, but social platforms also need to be able to detect and remove them. A recent report from IEEE Spectrum found that Meta’s own system for watermarking AI-generated content was easily circumvented.

“At the moment platforms are not particularly well prepared for this. So the elections are going to be one of the real tests of safety around AI images,” says Hood. “We need both the tools and the platforms to make a lot more progress on this, particularly around images that could be used to promote claims of a stolen election, or discourage people from voting.”
