For years, hashing technology has made it possible for platforms to automatically detect known child sexual abuse material (CSAM) and stop victims from being retraumatized as the same images recirculate online. Rapidly detecting new or previously unknown CSAM, however, has remained a harder problem, and new victims continue to be harmed while that material goes unflagged. Now, AI may be ready to change that.
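
Hash matching works by comparing a fingerprint of each upload against a curated list of fingerprints for previously verified material. The sketch below is a minimal illustration of that idea using a plain SHA-256 digest and a made-up hash list; widely deployed systems typically rely on perceptual hashes that survive resizing and re-encoding, not exact cryptographic digests.

```python
import hashlib

# Hypothetical digest list for illustration only. Real deployments use
# curated databases of perceptual hashes, which match visually similar
# images rather than only byte-identical files.
KNOWN_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def is_known_match(file_bytes: bytes) -> bool:
    """Return True if the upload's digest appears in the known-hash list."""
    digest = hashlib.sha256(file_bytes).hexdigest()
    return digest in KNOWN_HASHES
```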

Today, the prominent child safety organization Thorn, in partnership with the cloud-based AI solutions provider Hive, announced the release of an AI model designed to flag unknown CSAM at the point of upload. It is the earliest use of AI technology aimed at exposing unreported CSAM at scale.

An expansion of Thorn’s CSAM detection tool, Safer, the new “Predict” feature uses “advanced machine learning (ML) classification models” to “detect new or previously unreported CSAM and child sexual exploitation behavior (CSE), generating a risk score to make human decisions easier and faster.”

The model was trained in part on data from the National Center for Missing and Exploited Children (NCMEC) CyberTipline, using real CSAM reports to learn patterns in harmful images and videos. Once suspected CSAM is flagged, a human reviewer remains in the loop to provide oversight. The tool could also be used to investigate suspected CSAM rings proliferating online.
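
In practice, a classifier like this does not remove content on its own: it assigns each upload a risk score, and uploads above some platform-chosen threshold are queued for a human moderator. The sketch below illustrates that routing step under those assumptions; the threshold value and names are hypothetical and not part of Thorn's or Hive's actual API.

```python
from dataclasses import dataclass

@dataclass
class ReviewDecision:
    risk_score: float  # classifier output, e.g. 0.0 (benign) to 1.0 (high risk)
    action: str        # "allow" or "queue_for_review"

# Hypothetical threshold; a platform would tune this against its
# review capacity and tolerance for missed detections.
REVIEW_THRESHOLD = 0.7

def route_upload(risk_score: float) -> ReviewDecision:
    """Route an upload based on its risk score; flagged items go to a
    human reviewer rather than being removed automatically."""
    if risk_score >= REVIEW_THRESHOLD:
        return ReviewDecision(risk_score, "queue_for_review")
    return ReviewDecision(risk_score, "allow")
```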

  • ben@lemmy.zip

    This sounds like a bad idea; there are already cases of people getting flagged for CSAM after sending photos of their children to doctors.

    • starman2112@sh.itjust.works

      https://www.nytimes.com/2022/08/21/technology/google-surveillance-toddler-photo.html

      Here’s the article. Imagine losing access to everything that your physical driver’s license can’t help you get back. I would be in jail for one reason or another if Google fucked my life over that bad.

      As for Mark, Ms. Lilley, at Google, said that reviewers had not detected a rash or redness in the photos he took and that the subsequent review of his account turned up a video from six months earlier that Google also considered problematic, of a young child lying in bed with an unclothed woman.

      Mark did not remember this video and no longer had access to it, but he said it sounded like a private moment he would have been inspired to capture, not realizing it would ever be viewed or judged by anyone else.

      They could have just made this up wholesale. What is Mark gonna do about it? He literally doesn’t have access to the video they claim incriminates him, and the police department has already cleared him of any wrongdoing. Google is just being malicious at this point.