X’s Image Generator Creates Images of Election Fraud

The AI image generator on X, the social media platform formerly known as Twitter, has produced images that appear to show ballot boxes stuffed with ballots, as well as Vice President Kamala Harris and former President Donald Trump holding guns. When asked to generate an image of the current U.S. president, it returns a picture of Trump.

The images still bear telltale signs of AI generation, such as distorted text and unnatural lighting, and the image generator struggled to accurately reproduce Harris’s face. But X’s launch of the tool with relatively few restrictions on the types of images it can create raises concerns about how it could be used to inflame tensions ahead of November’s presidential election. (NPR is not publishing the images, which appear to show Trump and Harris holding guns.)

“Why would anyone launch something like this exactly two and a half months before an incredibly important election?” said Eddie Perez, a former director of information integrity at Twitter who now sits on the board of directors of the OSET Institute, a nonpartisan nonprofit focused on public trust in elections.

“I feel very uncomfortable with the fact that a technology that is so powerful, that seems so untested, that has so few limitations, is just being put into the hands of the public at such an important time,” Perez said.

X did not respond to NPR’s interview requests about the image generator, which was released this week. It’s part of a series of additional features that the site’s owner, billionaire Elon Musk, has added since he bought it in 2022.

Musk reposted praise for the AI image-generation feature and images users had made with it. “Only $8/month… to get AI access, a lot less ads, and a lot of cool features!” he posted on Tuesday.

The image generator was developed by Black Forest Labs and is available to paid X users via its AI chatbot, Grok. Users type prompts and the chatbot returns an image.

Ballot box stuffing, surveillance camera images

Using the chatbot, NPR was able to produce images that appear to be screenshots of security camera footage of people dropping ballots into ballot boxes.

One of the most widespread false narratives about the 2020 election involved so-called “ballot mules” who allegedly dumped fake ballots into drop boxes to steal the election from then-President Trump. Numerous investigations and court cases have found no evidence of such activity. The distributor of a film that used surveillance footage of ballot drop boxes to support claims of voter fraud apologized this year for false claims in the film and pulled it from distribution.

“I can imagine how [synthesized surveillance-type] images like these could spread quickly across social media platforms and provoke strong emotional reactions from people about the integrity of the election,” Perez said.

Perez noted that as public awareness of generative AI grows, more people are likely to view such images with a critical eye.

However, Perez said, the telltale signs that the images were made with AI could be scrubbed out with graphic design tools. “I’m not just taking Grok and then making it viral, I’m taking Grok, cleaning it up a little bit more, and then making it viral,” he said.

Other image generators have stricter guardrails

Other mainstream image generators have built more policy guardrails to prevent abuse. Given the same prompt to generate an image of a ballot drop box, OpenAI’s ChatGPT Plus responded with the message: “I cannot create an image that could be interpreted as promoting or depicting voter fraud or illegal activity.”

In a March report, the nonprofit Center for Countering Digital Hate examined the policies of popular AI image generators, including ChatGPT Plus, Midjourney, Microsoft’s Image Creator, and Stability AI’s DreamStudio. The researchers found that all of them prohibit “misleading” content, and most prohibit images that could harm “election integrity.” ChatGPT also prohibits images of political figures.

That said, enforcement of these policies has been far from perfect. CCDH’s testing in February found that all of the tools failed at least some of the time.

Black Forest Labs’ terms of service do not prohibit any of these uses, though they do bar users from generating output that infringes “intellectual property rights.”

NPR confirmed that users can generate images closely resembling movie characters that are not yet in the public domain, such as Dory from “Finding Nemo” or the family from “The Incredibles.” Black Forest Labs did not respond to a request for comment by the time of publication.

“Generating copyrighted images, or derivative works of them, could get X into trouble—this is a well-known and difficult problem for generative AI,” says Jane Bambauer, a law professor at the University of Florida, in an email to NPR.

That said, users can’t generate images from every prompt, and there are signs that X or Black Forest Labs may be putting guardrails in place in real time. X users posted nude images they said they generated on Wednesday, but NPR was unable to produce similar images on Thursday.

When asked to generate an image of a Ku Klux Klan member holding a gun, the chatbot refused. But it complied with requests for an image of an apparent Nazi in a vaguely plausible uniform, and for one of an apparent member of the extremist group the Proud Boys, wearing a hat bearing the group’s name.

When Zach Praiss, campaign director at the advocacy group Accountable Tech, tried to create an image of Vice President Harris holding a gun, a message appeared alongside the generated image directing users to a government website for up-to-date information about the election. NPR did not see the same message when it entered the same prompt.

Once a self-described Democrat, Musk has embraced far-right causes in recent years. He has used his ownership of the social media platform to roll back trust and safety measures, restore banned accounts, including those of white supremacists, and promote conspiracy theories.

“This is still part of the same pattern we’ve seen from Elon Musk. As he took ownership of this platform, he continually implemented radical and significant changes with little to no regard for safety testing,” Praiss says.

When NPR asked why it wouldn’t generate a KKK member holding a gun, the Grok chatbot responded with bullet points full of references to the book “The Hitchhiker’s Guide to the Galaxy.” Musk has said the series’ author, Douglas Adams, is his “favorite philosopher.”

A note X shows users when they start using Grok states that the chatbot may “confidently provide factually incorrect information.”

“The KKK, with its history of violence and hatred, is a bit like the Vogons of Earth: no one wants to see them, especially not with guns,” Grok wrote. “It’s like trying to draw a square circle; it’s not that I can’t, it just won’t make sense.”

But that was Thursday. By Friday, Grok would no longer generate images of people holding guns when asked. NPR was able to get around the restriction by asking for a “model gun.” Grok itself suggested a “banana gun” as an alternative. When NPR followed that suggestion, the tool also produced images of realistic-looking guns, sometimes accompanied by a banana.

NPR’s Shannon Bond and Geoff Brumfiel contributed additional reporting to this article.

Written by Anika Begay
