• xmunk@sh.itjust.works · 6 months ago

    It would be more like outlawing ivory grand pianos because they require dead elephants to make - the AI models in question here were trained on abuse.

    • Darkassassin07@lemmy.ca · 6 months ago

      A person (the arrested software engineer from the article) acquired a tool (a copy of Stable Diffusion, available on GitHub) and used it to commit a crime (trained it to generate CSAM + used it to generate CSAM).

      That has nothing to do with the developer of the AI, and everything to do with the person using it. (hence the arrest…)

      I stand by my analogy.

      • xmunk@sh.itjust.works · 6 months ago

        Unfortunately, the developer trained it on some CSAM, which I think means they’re not free of guilt - we really need to rebuild these models from the ground up to be free of that taint.

        • Darkassassin07@lemmy.ca · 6 months ago

          Reading that article: given that it’s a public dataset not owned or maintained by the developers of Stable Diffusion, I wouldn’t consider that their fault either.

          I think it’s reasonable to expect a dataset like that to have had screening measures to prevent that kind of data from being imported in the first place. It shouldn’t be on the users of that data (here meaning the devs of Stable Diffusion) to ensure there’s no illegal content among the billions of images in a public dataset.

          That’s a different story now that users have been informed of the content within this particular dataset, but I don’t think it should have been assumed to be their responsibility from the beginning.
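
          For what it’s worth, the “screening measures” here could be as simple as hash-matching every candidate image against a blocklist before it ever enters the dataset. A rough sketch in Python (the names KNOWN_BAD_HASHES and screen_images are made up for illustration, and real pipelines use perceptual hashes such as PhotoDNA, which survive re-encoding, rather than exact file hashes):

              import hashlib
              from pathlib import Path

              # Hypothetical blocklist of hex digests supplied by a hash-sharing program.
              KNOWN_BAD_HASHES: set[str] = set()

              def sha256_of(path: Path) -> str:
                  """Hash the raw file bytes in 1 MiB chunks."""
                  h = hashlib.sha256()
                  with path.open("rb") as f:
                      for chunk in iter(lambda: f.read(1 << 20), b""):
                          h.update(chunk)
                  return h.hexdigest()

              def screen_images(candidates: list[Path]) -> list[Path]:
                  """Keep only the images whose hashes are not on the blocklist."""
                  return [p for p in candidates if sha256_of(p) not in KNOWN_BAD_HASHES]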