Nucleo’s investigation identified accounts with thousands of followers engaging in illegal behavior that Meta’s security systems failed to detect; after being contacted, the company acknowledged the problem and removed the accounts

  • General_Effort@lemmy.world

    When I saw this, two questions came to mind: How come this isn’t immediately reported? And why would anyone upload illegal material to a platform that tracks as thoroughly as Meta’s do?

    The answer is:

    All of those accounts followed the same visual pattern: blonde characters with voluptuous bodies and ample breasts, blue eyes, and childlike faces.

    The one question that came to mind upon reading this is: What?

  • yesman@lemmy.world

    after contact, the company acknowledged the problem and removed the accounts

    Meta is outsourcing content moderation to journalists.

    • I Cast Fist@programming.dev

      Meta profits from these accounts, just as it profits off scam and fraud posts, because they pay for ad space. It has literally no incentive to moderate beyond the bare minimum its automated tools already handle.

  • Aaron Doe@sh.itjust.works

    Parents should get their kids to never touch anything “Meta” made or bought.

    But then again, those same parents are currently telling the world what their neighbours are doing, what they’re eating, and how cute “insert name here” looked in their new school uniform. 🤦‍♂️

    • 3DMVR@lemm.ee

      Too bad VR’s got a hold, and VRChat’s so much worse than the internet chatrooms we grew up with.

      • Aaron Doe@sh.itjust.works

        Yeah without doubt we’re entering a weird and scary time with all this non-consensual AI training and data models.

        Especially with the amount of data Meta has across all its platforms.

  • Bakkoda@sh.itjust.works

    Please, please, please abandon these platforms. Just stop using them. There’s a cycle to these things and once they are past the due date all that’s left is rotten. It really is as simple as stop using their platform.

    • AutomaticButt@lemm.ee

      The most compelling argument I’ve heard against AI-generated child porn is that it normalizes it and makes it harder to tell whether material is real or AI. That puts actual children at risk, because real material can go unreported, or get skimmed over, when someone assumes it’s AI.

      • Cryophilia@lemmy.world

        With a set of all images on the internet. Why do you people always think this is a “gotcha”?

          • surewhynotlem@lemmy.world

            Hey.

            I’ve been in tech for 20 years. I know Python, Java, and C#. I’ve worked with TensorFlow and language models. I understand this stuff.

            You absolutely could train an AI on safe material to do what you’re saying.

            Stable Diffusion and OpenAI have not guaranteed that they trained their AI on safe materials.

            It’s like going to buy a burger, and the restaurant says “We can’t guarantee there’s no human meat in here”. At best it’s lazy. At worst it’s abusive.

            • Captain Aggravated@sh.itjust.works

              I mean, there is no photograph of a school bus with Pegasus wings diving to the Titanic, but I bet one of these AIs can crank out that picture. If it can do that…?

            • Cryophilia@lemmy.world

              Ok, but by that definition Google should be banned because their trawler isn’t guaranteed to not pick up CP.

              In my opinion, if the technology involves casting a huge net, and then creating an abstracted product from what is caught in the net, with no steps in between seen by a human, then is it really causing any sort of actual harm?

  • Infynis@midwest.social

    … Meta’s security systems were unable to identify…

    I think you mean “incentivized to ignore”.

  • lepinkainen@lemmy.world

    Meta doesn’t care about AI generated content. There are thousands of fake accounts with varying quality of AI generated content and reporting them does exactly shit.

  • FauxLiving@lemmy.world

    Child Sexual Abuse Material is abhorrent because children were literally abused to create it.

    AI generated content, though disgusting, is not even remotely on the same level.

    The moral panic around AI that leads people to imply these two things are equivalent is absurd.

    Go after the people filming themselves literally gang raping toddlers, not the people typing forbidden words into an image generator.

    Don’t dilute the horror of the production of CSAM by equating it to fake pictures.

    • suicidaleggroll@lemm.ee

      Yes, at a cursory glance that’s true. AI-generated images don’t involve the abuse of children, which is great. The problem is the follow-on effects. What’s to stop actual child abusers from just photoshopping a sixth finger onto their images and then claiming they’re AI-generated?

      AI image generation is getting absurdly good now, nearly indistinguishable from actual photos. By the end of the year I suspect it will be truly indistinguishable. When that happens, how do you tell which images are AI-generated and which are real? How do you know who is peddling real CP and who isn’t, if AI-generated CP is legal?

      • FauxLiving@lemmy.world

        What’s the follow-on effect of making generated images illegal?

        Do you want your freedom to be at stake where the question before the jury is “How old is this image of a person (who doesn’t exist)?” or “Is this fake person TOO child-like?”

        When that happens, how do you tell which images are AI generated and which are real? How do you know who is peddling real CP and who isn’t if AI-generated CP is legal?

        You won’t be able to tell, we can assume that this is a given.

        So the real question is:

        Who are you trying to arrest and put in jail and how are you going to write that difference into law so that innocent people are not harmed by the justice system?

        To me, the evil people are the ones harming actual children. Trying to blur the line between them and people who generate images is a morally confused position.

        There’s a clear distinction between the two groups and that distinction is that one group is harming people.

      • ExLisper@lemmy.curiana.net

        If pedophiles won’t be able to tell what’s real and what’s AI-generated, why risk jail to create the real ones?

    • Grimy@lemmy.world

      Although that’s true, such material can easily be used to groom children, which is where I think the real danger lies.

      I really wish they had excluded children from the datasets.

      You can’t really put a stop to it anymore, but I don’t think it should be normalized and accepted just because there isn’t a direct victim. We’re also talking about distribution here, not something done in private at home.

        • Grimy@lemmy.world

          Kids will do things if they see other children doing them in pictures and videos. It’s easier to normalize sexual behavior with CP than without.

          • Cryophilia@lemmy.world

            This sounds like you’re searching really hard for a reason to justify banning it. Pretty tenuous “what if” there.

            Like, a dildo could hypothetically be used to sexualize a child. Should we ban dildos?

            It’s so vague it could apply to anything.

            • Grimy@lemmy.world

              Banning the tech, banning generated CP on the internet, or banning it at home?

              I’m a big advocate of AI and don’t personally want any kind of banning or censorship of the tools.

              I don’t think it should be published on any kind of image-sharing site. I don’t hold people who publish it in high regard, and I’m not against some kind of consequence. I generally view prison as unproductive, though.

              At home, I’m not sure. People, imo, can do what they want behind closed doors. I don’t want any kind of surveillance, but I don’t know how I would react if it got brought up at a trial as a kind of proof, if the allegations have something to do with that theme (child molestation).

              I also don’t think we need much of a reason to ban it on the web.

              • Cryophilia@lemmy.world

                It would probably make me distrust the prosecution, like if they’re bringing this up they must not have much to go on. Like every time a black man is shot by police they bring up that he smoked weed.

                I guess my main complaint is that it’s insane to view it as equivalent to real CP, and it’s harmful to waste any resources prosecuting it.

                • Grimy@lemmy.world

                  That’s fair. We can also expect proper moderation from social media sites. I’m okay with a light touch, but it shouldn’t be floating around, if you get what I mean.

            • Chocobofangirl@lemmy.world

              On the one hand, DNS was being needlessly accusatory, and the logic of “you don’t understand how predators work, so you must be one” is silly. On the other hand, I get why they’re being so caustic, because YES, CP is ABSOLUTELY used exactly how they describe. The idea is that by getting the child used to sexual activity, they’ll get used to thinking about it and won’t be as freaked out by inappropriate propositions, perhaps even believing they’re the initiator instead of being manipulated and taken advantage of, and then they won’t report the predator to authorities. Not to mention some of the predators who actually feel child attraction (as opposed to the more than 50% who are just rapists of opportunity) use that manufactured consent to delude themselves into thinking “well, they’re enjoying it and they said yes, so I’m not REALLY doing anything wrong”.

              Part 4 of this article, interviewing someone who was trying to research pedos on the dark web, gives one example: “In other words, there are child molestation crusaders out there, and Pam ran into a lot of this on the Deep Web. Below is one response to a 7axxn post from a guy, bemoaning his inability to be anything but a “leech” (a person who consumes the content but never submits any) because his family situation made it impossible to actively share child pornography. The other members suggested he could aid “the cause” by helping to “enlighten & educate” the children in his life on the “true philosophies of love”” https://www.cracked.com/personal-experiences-1760-5-things-i-learned-infiltrating-deep-web-child-molesters.html

              • Cryophilia@lemmy.world

                They’re being so caustic because they get a sick power rush from trying to unleash a mob on someone. That’s all. They probably don’t give a shit about children.