Elon Musk’s AI chatbot Grok has circulated false information about the recent mass shooting at Bondi Beach, Australia, misidentifying a key figure who saved lives and claiming, without basis, that a victim staged his injuries, researchers said Tuesday.
Among Grok’s false claims was the repeated misidentification of Ahmed al Ahmed, widely hailed as a hero for wrestling a gun from one of the attackers.
In one instance reviewed by AFP, Grok described a verified clip of Ahmed’s confrontation as “an old viral video of a man climbing a palm tree in a parking lot, possibly to trim it,” implying it “may be staged.”
Citing other media sources, Grok also misidentified an image of Ahmed as an Israeli hostage held by Hamas for over 700 days.
When asked about another scene from the attack, the chatbot incorrectly claimed it was footage from Tropical Cyclone Alfred, which hit the Australian coast earlier this year. Only after being prompted by another user did Grok acknowledge that the footage was indeed from the Bondi Beach attack.
When contacted by AFP, xAI, Grok’s developer, responded only with an automated message: “Legacy Media Lies.”
The misinformation starkly exposed the unreliability of AI chatbots as real-time fact-checking tools, particularly as users increasingly turn to them to verify images.
The attack occurred on Sunday during a Jewish festival in the Sydney beach suburb of Bondi, leaving 15 people dead and dozens wounded.
Following the attack, online users circulated an authentic photo of a survivor, falsely claiming he was a “crisis actor,” NewsGuard reported.
The term is used by conspiracy theorists to allege that victims are faking injuries or death. Grok further labeled the image as “staged” or “fake,” reinforcing the disinformation.
NewsGuard also noted that some users created an AI-generated image using Google’s Nano Banana Pro, depicting red paint being applied to the survivor’s face to simulate blood, seemingly to support the false claim.
Researchers acknowledge that AI tools can assist professional fact-checkers by quickly geolocating images or spotting visual clues.
However, they stress that AI cannot replace trained human verification, especially in polarized societies where fact-checkers often face accusations of bias.
AFP currently participates in Meta’s fact-checking program in 26 languages across Asia, Latin America, and the European Union.