An AI image generator's exposed database reveals what people actually used it for

In addition to CSAM, Fowler said, the database contained AI-generated pornographic images of adults, as well as what appeared to be "face-swap" images. Among the files, he observed what looked like photographs of real people, which were likely used to create explicit or sexual AI-generated images. "So they were taking real pictures of people and swapping their faces on there," he claimed.
When it was live, the GenNomis website allowed explicit AI-generated adult imagery. Many of the images featured on its homepage and in an AI "models" section were sexualized images of women: some were "photorealistic," while others were fully AI-generated or animated. The site also included an "NSFW" gallery and a "marketplace" where users could share images and potentially sell albums of AI-generated photos. The website's tagline said people could "generate unrestricted" images and videos; an earlier version of the site from 2024 said "uncensored images" could be created.
GenNomis' user policies stated that only "respectful content" is allowed, and that "explicit violence" and hate speech are banned. "Child pornography and any other illegal activities are strictly prohibited," its community guidelines read, adding that accounts posting prohibited content would be terminated. (Over the past decade, researchers, victim advocates, journalists, tech companies, and others have largely phased out the term "child pornography" in favor of CSAM.)
It is unclear to what extent GenNomis used any moderation tools or systems to prevent or prohibit the creation of AI-generated CSAM. Some users posted to its "community" page last year complaining that they could not generate images of sexual encounters and that their prompts were blocked even for nonsexual "dark humor." Another account posted on the community page that the "NSFW" content should be addressed, as it "might be looked at by the feds."
"The fact that I could view these images with nothing more than a URL shows me that they weren't taking all the necessary steps to block that content," Fowler said of the database.
Henry Ajder, a deepfake expert and founder of the consultancy Latent Space Advisory, said that even if the company did not allow the creation of harmful and illegal content, the website's branding, citing "unrestricted" image creation and an "NSFW" section, suggested a clear association with intimate content created without safety measures.
Ajder said he was surprised that the English-language website was linked to a South Korean entity. Last year the country was plagued by a nonconsensual deepfake "emergency" before it took measures to combat the wave of deepfake abuse. Ajder said more pressure needs to be put on all parts of the ecosystem that allow nonconsensual images to be generated with AI. "The more of this that we see, the more it forces the question onto the tech platforms, the web hosting companies, the payment providers. All of the people who, in some form or another, knowingly or otherwise (mostly unknowingly), are facilitating and enabling this," he said.
Fowler said the database also exposed files that appeared to include AI prompts. The researcher says no user data, such as logins or usernames, was included in the exposed data. Screenshots of the prompts show the use of words such as "tiny" and "girl," along with references to sexual acts between family members. The prompts also described sexual acts between celebrities.
"To me, this technology is ahead of any of the guidelines or controls," Fowler said. "From a legal standpoint, we all know that explicit images of children are illegal, but that didn't stop the technology from being able to generate those images."
As generative AI systems have vastly increased the ease of creating and modifying images over the past two years, AI-generated CSAM has exploded. Webpages containing AI-generated child sexual abuse material have more than tripled since 2023, and the photorealism of this horrifying content has also grown in sophistication, says Derek Ray-Hill, interim CEO of the UK-based Internet Watch Foundation (IWF).
The IWF has documented how criminals are increasingly creating AI-generated CSAM and refining the methods they use to make it. "It's currently far too easy for criminals to use AI to generate and distribute sexually explicit content of children at scale and at speed," Ray-Hill said.