Experts warn that the generation of sexually explicit images of children is artificial intelligence’s “most concerning aspect,” posing a serious threat to the safety of the internet. The Internet Watch Foundation (IWF) has identified nearly 3,000 AI-generated images that break UK law. The UK-based organization reports that AI models are being used to produce new abuse imagery of real-life victims from existing images of them, and that the technology is also being misused to create abusive images of celebrities. Abusers are additionally using AI tools to digitally remove the clothing of children in pictures found online.

The IWF had previously warned about the emerging misuse of artificial intelligence, and its latest report shows the trend accelerating. Susie Hargreaves, CEO of the IWF, said that “our worst fears have become a reality.” The IWF also found AI-generated images being sold online. The latest findings come from a month-long investigation of a child sexual abuse forum on the dark web, a part of the internet accessible only through a specialized browser. Of the 11,108 images investigated on the forum, 2,978 were found to break British law by depicting child sexual abuse.

AI-generated child sexual abuse material is covered by the Protection of Children Act 1978, which criminalizes the downloading, distribution, and possession of an “indecent photograph or pseudo-photograph” of a child. The IWF said the majority of the illegal material it found violated that Act, with more than one in five of the images classified as category A, the most serious kind of content, which can include depictions of rape and sexual abuse.