The majority of such cases seen by the Internet Watch Foundation involve manipulation of existing child sexual abuse material (CSAM) or adult pornography, with a child’s face transplanted onto the footage. A handful involve entirely AI-made videos lasting about 20 seconds, the IWF said.
The organisation, which monitors CSAM around the world, said it was concerned that more AI-made CSAM videos could emerge as the tools behind them become more widespread and easier to use.
Dan Sexton, chief technology officer at the IWF, said that if the use of AI video tools followed the same trend as AI-made still images – which have increased in volume as the technology has improved and become more widely available – more CSAM videos could emerge.
“I would tentatively say that if it follows the same trends, then we will see more videos,” he said, adding that future videos could also be of “higher quality and realism”.
The IWF added that AI-made CSAM images have become more photo-realistic this year compared with 2023, when it first started seeing such content.
Its snapshot study this year of a single dark web forum – one that anonymises users and shields them from tracking – found 12,000 new AI-generated images posted over a one-month period. Nine in 10 of those images were realistic enough to fall under the same UK laws that cover real CSAM, the IWF said.
The IWF’s chief executive, Susie Hargreaves, said: “Without proper controls, generative AI tools provide a playground for online predators to realise their most perverse and sickening fantasies. Even now, the IWF is starting to see more of this type of material being shared and sold on commercial child sexual abuse websites on the internet.”
The IWF is pushing for changes to the law that would criminalise the creation of guides for generating AI-made CSAM, as well as the creation of “fine-tuned” AI models capable of producing such material.
Source: The Guardian