US Attorneys General Call on X to Address Sexualized Deep Fakes

Looks like more legal trouble is coming for X, with various U.S. states taking action against the Elon Musk-owned app over the generation and dissemination of sexualized images.

This stems from X’s Grok chatbot using its image generation capability to strip down photos of anyone, from well-known actors to random children, which became a trend on X early in the New Year. Indeed, at one stage, data indicated that Grok was generating over 6,000 sexualized images per day, all of which were publicly accessible in the app.

That prompted a major backlash in several regions, and even bans on both Grok and X in some areas. X initially stood firm in the face of the criticism, with Musk himself claiming that the backlash was not really about X, but about broader censorship, and an effort to stop X from revealing larger truths.

But, really, there’s no way to justify the generation of nude and sexualized images, and no need for this functionality to exist, regardless of any politically charged messaging you want to attach to it. As a result, amid the threat of further bans and restrictions, X eventually backed down, restricting Grok’s image generation capability to paying users only, while also implementing measures to stop the generation of these types of images.

But that may have come too late. Yesterday, the EU Commission announced an investigation into Grok and xAI’s safeguards against misuse of its tools.

And now, a group of 37 U.S. attorneys general is also looking to take action against xAI.

As reported by Wired:

“On Friday, a bipartisan group of attorneys general published an open letter to xAI demanding it ‘immediately take all available additional steps to protect the public and users of your platforms, especially the women and girls who are the overwhelming target of [non-consensual intimate images].’”

In the letter, the group raises serious concerns about “artificial intelligence produced deepfake non-consensual intimate images (NCII) of real people, including children.”

And while X has now taken action, the group is calling for more responsibility from Musk and his team.

“We recognize that xAI has implemented measures intended to prevent Grok from creating NCII and appreciate your recent meeting with several undersigned attorneys general to discuss these efforts […] Further, you claim to have implemented technical measures to prevent the @Grok account ‘from allowing the editing of images of real people in revealing clothing such as bikinis.’ But we are concerned that these efforts may not have completely solved the issues.”

Indeed, the Attorneys General further suggest that X’s AI tools were actually designed for this purpose, with built-in features that facilitate harmful usage.

“Grok was not only enabling these harms at an enormous scale but seemed to be actually encouraging this behavior by design. xAI purposefully developed its text models to engage in explicit exchanges and designed image models to include a ‘spicy mode’ that generated explicit content, resulting in content that sexualizes people without their consent.”

As a result, the group is calling for Musk and X to take more definitive measures to prohibit such use, including removing all avenues for generating these images, deleting all such content that’s already been created, and suspending users who misuse Grok for this purpose.

The Attorneys General also want X to give users control over whether their content can be edited by Grok, “including at a minimum the ability to easily prohibit the @Grok account from responding to their posts or editing their images when prompted by another user.”

Which means more challenges for X in improving transparency, as well as expanded efforts to implement safeguards and restrictions on Grok usage.

Which, again, Elon Musk is not a fan of, and it may take a bigger legal fight to make this happen. Musk will no doubt also use that fight as an opportunity to present himself as the face of free speech as government regulators look to crack down.

Elon’s main refrain in this instance has been that other apps offer the same capabilities, and that regulators aren’t going after other nudification and AI generation apps with the same vigor.

But the Attorneys General also address this:

“While other companies are also responsible for allowing NCII creation, xAI’s size and market share make it a market leader in artificial intelligence. Unique among the major AI labs, you are connecting these tools directly to a social media platform with hundreds of millions of users. So your actions are of utmost importance. The steps you take to prevent and remove NCII will establish industry benchmarks to protect adults and children against harmful deepfake non-consensual intimate images.”

It’s interesting to consider this push in light of Elon’s own very public, very loud stance against CSAM, with Musk announcing, shortly after taking over Twitter, that combating CSAM was “Priority #1” during his time at the app.

Musk had criticized Twitter’s former leadership for failing to address child sexual exploitation in the app, and he’s since claimed several major advances in addressing such content on X.

Yet, in this instance, Musk wants to fight back, which seems to run counter to these claims.

I mean, clearly, the broader political angling around CSAM content has changed, given that it was once the primary focus of right-wing voters, many of whom would now prefer to overlook the Epstein files.

Maybe that’s altered Elon’s own position, though it seems, on the face of it, that this should remain a major concern for this group.

Either way, X is now set to come under more scrutiny, in more regions, and with the impacts potentially extending to xAI and Musk’s broader AI projects, this could have a big effect on his plans.

We’ll see how Musk responds, and whether further action will be sought on this front.  


