WhatsApp’s AI shows gun-wielding children when prompted with ‘Palestine’. By contrast, prompts for ‘Israeli’ do not generate images of people wielding guns, even in response to a prompt for ‘Israel army’.

  • generalpotato@lemmy.world · 1 year ago

    Systemic prejudices showing up in datasets causing generative systems to spew biased output? Gasp… say it isn’t so?

    I’m not sure why this is surprising anymore. This is literally expected behavior unless we get our shit together and get a grip on these systemic problems. The rest of it is just patchwork and bandages.

    • vacuumflower@lemmy.sdf.org · edited · 1 year ago

      I’d like to point out that not everything generative is a subset of all the ML stuff, so prejudices in datasets don’t affect everything generative.

      That’s off topic, but I’m playing with generative music right now. I started with SuperCollider, but it was too hard (maybe not anymore, TBF; recycling a phrase, for example, would probably be much easier and faster there than in my macaroni shell script), so now I just generate ABC notation, convert it to MIDI with various instruments, and play it through FluidSynth.
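The pipeline described above (generate ABC, convert to MIDI, render with FluidSynth) can be sketched in a few lines of bash. The random-note generator here is my own illustration, not the commenter's actual script; `abc2midi` (from the abcMIDI package) and `fluidsynth` are real CLI tools, but their invocations are left commented since they require external installs and a SoundFont file.

```shell
#!/usr/bin/env bash
# Minimal sketch: build a random 16-note ABC tune, then (hypothetically)
# convert it to MIDI and render it to audio.
set -eu

NOTES=(C D E F G A B c)   # one octave of the C major scale in ABC notation

# ABC header: index, title, meter, default note length, key
tune="X:1
T:Random sketch
M:4/4
L:1/8
K:C
"

# Append 16 random eighth notes, then a closing bar line
for _ in $(seq 1 16); do
  tune+="${NOTES[RANDOM % 8]}"
done
tune+="|]"

printf '%s\n' "$tune" > tune.abc

# abc2midi tune.abc -o tune.mid              # ABC -> MIDI (abcMIDI package)
# fluidsynth -ni soundfont.sf2 tune.mid      # render MIDI through a SoundFont
```

Swapping instruments amounts to emitting ABC `%%MIDI program` directives before conversion, which `abc2midi` understands.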