AI-generated GTA 6 “leak” video viewed by millions that remains live


“GTA 6 ALERT – EXTREMELY SERIOUS SITUATION,” reads the X/Twitter post from the Zap Actu GTA6 account. The supposed gameplay clip of Rockstar’s upcoming surefire blockbuster is brief but, if genuine, would be a shocking leak indeed.

Of course, it’s not true. It’s not a leak. It’s not even real gameplay. It’s yet another AI-generated GTA 6 “leak” video viewed by millions that remains live across social media, which seems unable to do anything about it beyond the odd community note.

The tweet, published yesterday, November 25, has gone viral, securing 8 million views in just over 24 hours. Below it, a community note warned that the clip was not authentic, but that appeared to do little to dull its impact. And it is far from the only one. The same Twitter account responsible for this “leak” has published a number of similar clips in recent months, some of which have racked up huge view counts, all in a desperate bid for followers and Discord members.

Based on the replies, many Twitter users are taking these leaks at face value. It’s a problem GTA 6 has faced for some time now, which probably comes as little surprise given the intense excitement and thirst for new information on what is expected to be the biggest entertainment launch of all time. But it is far from the only video game to suffer from this problem. Indeed, video games are not alone in this, either.

Last month, IGN reported on physicist Brian Cox, who went public with complaints about YouTube accounts that had used AI to create deepfakes of him saying “nonsense” about comet 3I/ATLAS. Similarly, Keanu Reeves recently hit out at AI deepfakes showing the John Wick star selling products without his permission, insisting “it’s not a lot of fun.” In July, it was reported that Reeves pays a company a few thousand dollars a month to get the likes of TikTok and Meta to take down imitators.

In 2023, Tom Hanks warned fans that an AI version of his likeness was being used without his consent in an online advert for a dental plan. Last year, Morgan Freeman thanked fans who alerted him to AI-generated imitations of his voice online after a series of videos created by someone posing as his niece went viral. And in May this year, Jamie Lee Curtis was forced to appeal to Meta CEO Mark Zuckerberg in an Instagram post because she couldn’t get the company to pull an AI-generated ad that featured her likeness for “some bullshit that I didn’t authorize, agree to or endorse.”

What is the solution here? In July, YouTube was said to be preparing to update its policies to crack down on creators’ ability to generate revenue from “inauthentic” content, which generative AI makes easy to produce at massive scale. Propelled by the YouTube algorithm, you’ve probably stumbled across a fake trailer or two yourself. The hope was that YouTube would be able to crack down on the channels that pump out this sort of low-effort content, but a cursory glance at the platform shows this has yet to happen.

Without legislation forcing content built by generative AI tools to include labels clearly marking it as such, or laws preventing deepfakes without permission, fans will continue to be misled by bad actors. And as generative AI technology improves, so it will become harder to distinguish between the fake and the real.

Can anything meaningful be done? Last month, the Japanese government made a formal request asking OpenAI to refrain from copyright infringement after Sora 2 users generated videos featuring the likenesses of copyrighted characters from anime and video games.
