25 Arrested in Global Operation Against AI-Generated Child Sexual Abuse Content, Europol Says

The Hague - Europol said Friday that a global operation had resulted in at least 25 arrests linked to the creation and online distribution of AI-generated child sexual abuse material.
“Operation Cumberland has been one of the first cases involving AI-generated child sexual abuse material. The lack of national legislation against these crimes makes this material exceptionally challenging for investigators,” the European police agency said in a statement.
Most of the arrests were made as part of a worldwide operation led by Danish police, which also involved law enforcement agencies from the EU, Australia, the United Kingdom, Canada and New Zealand. U.S. law enforcement agencies did not take part in the operation, according to Europol.
The arrests followed the detention in November last year of the main suspect, a Danish national who ran an online platform where he distributed the AI-generated material he produced.
Europol said that following “a symbolic online payment, users from all over the world were able to obtain a password to access the platform and watch the abuse of children.”
The agency warned that online child sexual exploitation remains one of the most threatening manifestations of cybercrime in the EU.
“It remains one of the top priorities of law enforcement agencies, which are dealing with an ever-growing volume of illegal content,” it said.
While Europol said Operation Cumberland targeted a platform and people sharing content created entirely with AI, there has been a surge in AI-manipulated “deepfake” imagery online, which often uses images of real people, including children, and can have a devastating impact on their lives.
According to a December report by CBS News’ Jim Axelrod, which focused on a girl victimized by her classmates, there were more than 21,000 deepfake pornographic pictures or videos online in 2023, an increase of more than 460% from the year before. The manipulated content has surged on the internet as lawmakers in the United States and elsewhere work to catch up with new legislation to address the problem.
Just a few weeks ago, the Senate passed a bipartisan bill, the “Take It Down Act,” which, if signed into law, would criminalize the publication of non-consensual intimate images (NCII), including AI-generated NCII (or “deepfake revenge pornography”), and would require social media and similar websites to implement procedures to remove such content within 48 hours of notice from a victim.
For now, some social media platforms appear unable or unwilling to crack down on the spread of AI-generated deepfake content, including fake images of celebrities. In mid-February, Meta, which owns Facebook and Instagram, said it had removed more than a dozen fraudulent sexualized images of famous actresses and athletes after a CBS News investigation found a high prevalence of AI-manipulated deepfake images on Facebook.
“This is an industry-wide challenge, and we are working to improve our detection and enforcement technology,” Meta spokesperson Erin Logan told CBS News in a statement sent by email at the time.