AI News

The AI-Generated Child Abuse Nightmare Is Here

admin
Last updated: 2023/10/24 at 11:01 PM

A horrific new era of ultrarealistic, AI-generated child sexual abuse images is now underway, experts warn. Offenders are using downloadable, open source generative AI models, which can produce images, to devastating effect. The technology is being used to create hundreds of new images of children who have previously been abused. Offenders are sharing datasets of abuse images that can be used to customize AI models, and they’re starting to sell monthly subscriptions to AI-generated child sexual abuse material (CSAM).

The details of how the technology is being abused are included in a new, wide-ranging report released by the Internet Watch Foundation (IWF), a nonprofit based in the UK that scours and removes abuse content from the web. In June, the IWF said it had found seven URLs on the open web containing suspected AI-made material. Now its investigation into one dark web CSAM forum, providing a snapshot of how AI is being used, has found almost 3,000 AI-generated images that the IWF considers illegal under UK law.

The AI-generated images include the rape of babies and toddlers, famous preteen children being abused, and BDSM content featuring teenagers, according to the IWF research. “We’ve seen demands, discussions, and actual examples of child sex abuse material featuring celebrities,” says Dan Sexton, the chief technology officer at the IWF. Sometimes, Sexton says, celebrities are de-aged to look like children. In other instances, adult celebrities are portrayed as those abusing children.

While reports of AI-generated CSAM are still dwarfed by the number of real abuse images and videos found online, Sexton says he is alarmed at the speed of the development and the potential it creates for new kinds of abusive images. The findings are consistent with other groups investigating the spread of CSAM online. In one shared database, investigators around the world have flagged 13,500 AI-generated images of child sexual abuse and exploitation, Lloyd Richardson, the director of information technology at the Canadian Centre for Child Protection, tells WIRED. “That’s just the tip of the iceberg,” Richardson says.


A Realistic Nightmare

The current crop of AI image generators—capable of producing compelling art, realistic photographs, and outlandish designs—provides a new kind of creativity and a promise to change art forever. They’ve also been used to create convincing fakes, like the Balenciaga pope and fake images of Donald Trump’s arrest. The systems are trained on huge volumes of existing images, often scraped from the web without permission, and allow images to be created from simple text prompts. Asking for an “elephant wearing a hat” will result in just that.

It’s not a surprise that offenders creating CSAM have adopted image-generation tools. “The way that these images are being generated is, typically, they are using openly available software,” Sexton says. Offenders whom the IWF has seen frequently reference Stable Diffusion, an AI model made available by UK-based firm Stability AI. The company did not respond to WIRED’s request for comment. In the second version of its software, released at the end of last year, the company changed its model to make it harder for people to create CSAM and other nude images.

Sexton says criminals are using older versions of AI models and fine-tuning them to create illegal material of children. This involves feeding a model existing abuse images or photos of people’s faces, allowing the AI to create images of specific individuals. “We’re seeing fine-tuned models which create new imagery of existing victims,” Sexton says. Perpetrators are “exchanging hundreds of new images of existing victims” and making requests about individuals, he says. Some threads on dark web forums share sets of faces of victims, the research says, and one thread was called: “Photo Resources for AI and Deepfaking Specific Girls.”
