Gardaí investigating 200 reports of Grok child abuse material
By Cillian Sherlock, Press Association
There are hundreds of open investigations into content shared on X, a senior Garda has said, amid concerns over potential child sexual abuse material (CSAM) generated on the platform using the artificial intelligence tool Grok.
Politicians convened on Wednesday for an Oireachtas Media Committee hearing with gardaí and other experts, which largely dealt with growing concerns over the proliferation of CSAM and other AI-generated sexualised material on the social network.
They were told An Garda Síochána is currently investigating 200 reports relating to the platform involving content potentially indicative of CSAM.
Detective Chief Superintendent Barry Walsh said: “We have received reports and referrals of content on that particular platform (X) that is under investigation.
“The investigation process takes some time, the content has to be assessed to make sure it’s criminal, and thereafter the people responsible have to be identified, if that’s possible, and the investigative action stems from there.
“So what follows is the investigative process, and that may result in various different actions, such as the execution of warrants, interviewing the people responsible, those people being brought before the court, or a file being sent to the Director of Public Prosecutions for direction.”
He added: “As of this morning, there are 200 reports that are being investigated involving content that is child sexual abuse material, or child sexual abuse indicative material.”
The senior officer said these all related to Grok.
Mr Walsh said gardaí remain in contact with the media regulator Coimisiún na Meán about AI-generated CSAM.
He said gardaí believe existing legislation allows them to investigate the material and that they had not yet encountered anything that would prevent them from carrying out a probe.
Mr Walsh said he wanted to reassure the public that reports are being “treated with utmost seriousness” and thoroughly investigated.

In his written submission, he said: “I would encourage any individual who may be a victim of these crimes to make contact with your local Garda station where you will be provided with access to the wide range of specialist help and support that is available.
“Victims of intimate image abuse also have the option of reporting online via Hotline.ie.”
Mr Walsh said recent commentary had focused “on one AI model in particular”, but that it was “a conceptual possibility” that other AI models could be trained to create such content.
He called for a “robust response” from AI service providers to ensure that their models cannot be manipulated to create content that is “both unlawful and hugely harmful to those individuals who are impacted”.
Mr Walsh said a minimum step for online service providers is to ensure that material disseminated on their platforms is appropriate for recipient audiences and has been effectively checked for accuracy, but said it was clear this was not currently the case.
The officer, who is attached to the Garda National Cyber Crime Bureau, said there are ever-increasing levels of CSAM being produced and distributed online.
Mr Walsh said gardaí do proactive work to find CSAM online but mainly deal with referrals from the US non-profit organisation the National Center for Missing & Exploited Children.
He said referrals had been increasing year-on-year, with 13,300 in 2024 and roughly 25,000 in 2025.

Addressing the witnesses to the committee, Fianna Fáil senator Alison Comyn recalled an occasion when someone placed photographs of her face on pornographic images and sent them to the BBC, where she had been working as a news anchor.
She said she found this “deeply upsetting and violating”, noting that police at the time considered it to be “quite funny”.
She said: “That’s another issue – but at least it wasn’t seen by millions.
“Now we have a legitimate business – a high profile platform – who can legally offer this service, and it is disseminated to millions globally, and we’re looking at it being created in seconds and sent out at the touch of a button.
“We know that it’s children and vulnerable people who are at risk here. We have to stop that in its tracks.”
Ms Comyn asked if gardaí needed additional legislative measures to assist in their investigations.
Mr Walsh said recently developed legislation has been effective in prosecuting this type of offending, adding that legislation on EU criminal justice cooperation would help with “complications” in accessing evidence relating to CSAM reports.
Malcolm Byrne TD asked what recourse adults have against AI-generated sexualised images bearing their likeness.
Detective Superintendent Michael Mullen said there are protections under “Coco’s Law” which deals with offences relating to the recording, distribution and publication of intimate images.
He said: “It makes no difference. If it’s AI-generated, under Coco’s law it is still a criminal offence – as simple as that.
“It would be investigated as if it were a real image. So AI is treated exactly the same as a real image.”
Under further questioning from Pádraig O’Sullivan TD, Mr Walsh said there had been “just under” 900 contacts with gardaí last year in relation to Coco’s Law.
He also told the TD that increased investment has allowed the unit to cut its backlog from “hundreds” to “around 50 or 60” cases which have not yet been actioned.
The Irish Council for Civil Liberties (ICCL) and Digital Rights Ireland (DRI) on Wednesday published an analysis titled ‘Who has what powers when it comes to illegal content online?’.
The ICCL and DRI analysed the investigatory powers of An Garda Síochána, Coimisiún na Meán, the EU Commission, and the Data Protection Commission.
The analysis concerns two types of illegal online content: ‘child pornography’, as defined in the Child Trafficking and Pornography Act 1998; and non-consensual deepfake sexualised images of people, as per the Harassment, Harmful Communications and Related Offences Act 2020.
