AI Image Generators Are Being Trained on Child Abuse Materials, a Stanford Study Shows

IBL News | New York

A massive public dataset named LAION-5B, which served as training data for popular AI image generators such as Stable Diffusion, was found to contain thousands of instances of child sexual abuse material (CSAM), according to a study published yesterday by the Stanford Internet Observatory (SIO), a watchdog group based at the university.

The organization urged companies to take action to address a harmful flaw in the technology they build. Removal of the identified source material is in progress.

The report found more than 3,200 images of suspected child sexual abuse in LAION, a giant index of online images and captions that has been used to train leading AI image generators.

The SIO worked with the Canadian Centre for Child Protection and other anti-abuse charities to identify the illegal material and report the original photo links to law enforcement.

The researchers examined the LAION-5B dataset using a combination of PhotoDNA perceptual hash matching, cryptographic hash matching, k-nearest-neighbors queries, and machine learning classifiers.
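To illustrate just one of those techniques, the sketch below shows what cryptographic hash matching against a list of known-bad fingerprints might look like in Python. It is a minimal, hypothetical example, not the SIO's actual pipeline: the `KNOWN_BAD_SHA256` set is a placeholder (real hash lists are shared only with vetted organizations), and the perceptual-hash, k-nearest-neighbors, and classifier stages are not shown.

```python
import hashlib
from pathlib import Path

# Hypothetical placeholder for a list of SHA-256 digests of known CSAM supplied
# by a vetted child-protection partner; such lists are never distributed publicly.
KNOWN_BAD_SHA256: set[str] = set()


def sha256_of_file(path: Path) -> str:
    """Compute the cryptographic hash of an image file (an exact-match fingerprint)."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def flag_exact_matches(image_paths: list[Path]) -> list[Path]:
    """Return the images whose hash matches a known-bad entry.

    Unlike perceptual hashing (e.g., PhotoDNA), a cryptographic hash only catches
    byte-identical copies; re-encoded or resized images need perceptual matching
    or ML classifiers, which is why the study combined several methods.
    """
    return [p for p in image_paths if sha256_of_file(p) in KNOWN_BAD_SHA256]
```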

“This methodology detected many hundreds of instances of known CSAM in the training set, as well as many new candidates that were subsequently verified by outside parties. We also provide recommendations for mitigating this issue for those that need to maintain copies of this training set, building future training sets, altering existing models, and the hosting of models trained on LAION-5B.”

LAION-5B doesn’t include the images themselves and is instead a collection of metadata including a hash of the image identifier, a description, language data, whether it may be unsafe, and a URL pointing to the image. A number of the CSAM photos found linked in LAION-5B were hosted on websites like Reddit, Twitter, Blogspot, and WordPress, as well as adult websites like XHamster and XVideos.
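To make that structure concrete, a single metadata entry along these lines might look like the following Python sketch. The field names and sample values are illustrative only and are not the dataset's actual schema.

```python
from dataclasses import dataclass


@dataclass
class Laion5BRecord:
    """Illustrative shape of a LAION-5B metadata entry (field names are assumed)."""
    url: str                    # link to the externally hosted image
    caption: str                # description scraped alongside the image
    language: str               # detected language of the caption
    image_hash: str             # hash identifying the image
    unsafe_probability: float   # score indicating the image may be unsafe


# Hypothetical example record: the dataset points to images, it does not store them.
sample = Laion5BRecord(
    url="https://example.com/photo.jpg",
    caption="a red bicycle leaning against a wall",
    language="en",
    image_hash="d41d8cd98f00b204e9800998ecf8427e",
    unsafe_probability=0.02,
)
```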

The German non-profit LAION said it has "a zero-tolerance policy for illegal content" and announced that its public datasets would be temporarily taken down, to return after updated filtering in the second half of January.