• Gaywallet (they/it)@beehaw.org (OP) · 29 points · edited · 10 months ago

      You’re absolutely correct, yet ask someone who’s very pro-AI and they might dismiss such claims as “needing better prompts”. Also, many people may not be as tech-informed as you are, and shedding light on algorithmic bias can help them understand and navigate the world we now live in. Dismissing the article just because you already know the answer doesn’t really encourage people to participate in a discussion.

      • helenslunch@feddit.nl · 9 points · 10 months ago

        Dismissing the article just because you already know the answer doesn’t really encourage people to participate in a discussion.

        If the author doesn’t know the answer, then it is helpful to provide it. If they know the answer, then why are they phrasing the title as a question?

        • MBM@lemmings.world · 6 points · 10 months ago

          If you genuinely don’t know: because it’s an attention-grabbing title (which isn’t inherently bad)

  • jarfil@beehaw.org · 36 points · edited · 10 months ago

    Wrong question. The right question would be:

    Why is AI as used in Lensa’s Magic Avatars App Pornifying Asian Women?

    Ask Lensa to remove the “ugly” and similar negative prompts from their avatar generating App, and let’s see what comes out.

    https://stable-diffusion-art.com/how-to-use-negative-prompts/#Universal_negative_prompt

    For reference, check out how that same negative prompt turns a chubby-ish poorly shaved average guy, into a male pornstar, or a valet into a rich daddy’s boy.
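To make the mechanism concrete, here is a minimal sketch (not Lensa’s actual code; the prompt strings are made up, loosely modeled on the “universal negative prompt” from the link above) of how an avatar app might silently merge a hard-coded “beautifying” negative prompt with whatever the user typed:

```python
# Illustrative sketch: an avatar app quietly prepends its own hard-coded
# negative prompt to the user's input before calling the image model.
# UNIVERSAL_NEGATIVE is a made-up example, not Lensa's real configuration.

UNIVERSAL_NEGATIVE = "ugly, deformed, disfigured, poorly drawn face, blurry"

def build_payload(prompt: str, user_negative: str = "") -> dict:
    """Build the prompt pair the app would send to the image model."""
    parts = [UNIVERSAL_NEGATIVE]
    if user_negative:
        parts.append(user_negative)
    return {
        "prompt": prompt,
        # This hidden negative prompt steers every generation away from
        # "ugly": the user never sees it, but every avatar inherits it.
        "negative_prompt": ", ".join(parts),
    }

payload = build_payload("portrait photo of a woman, detailed face")
print(payload["negative_prompt"])
```

With Stable Diffusion via the Hugging Face `diffusers` library, this payload maps onto `pipe(prompt=..., negative_prompt=...)`; deleting `UNIVERSAL_NEGATIVE` from the list is exactly the experiment proposed above.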

    • smeg@feddit.uk · 10 points · 10 months ago

      Can we please collectively get into the habit of editing these borderline-clickbait titles or at least add sub-titles explaining the real article? This isn’t reddit where you can’t edit anything and can’t add explanatory text!

  • megopie@beehaw.org · 33 points · edited · 10 months ago

    If I had to guess, they probably did a shit job labeling the training data, or used pre-labeled images. Now, where in the world could they have found huge amounts of pictures of women on the internet with the specific label of “Asian”?

    Almost like most of what determines the quality of the output is not “prompt engineering” but the back-end work of labeling the training data properly, and you’re not actually saving much labor over more traditional methods, just making the labor more anonymous, easier to hide, and thus easier to exploit and devalue.

    Almost like this shit is a massive farce, just like the “metaverse” and crypto, that will fail to be market viable and waste a shit ton of money that could have been spent on actually useful things.

  • Muffi@programming.dev · 25 points · 10 months ago

    Scroll through the trained models on civit.ai and you’ll quickly get a feeling of the dystopian level of “prettifying” everything in the AI-generation world.

    I also once searched for “brown” just to see if any models were trained to create non-white-skinned people, and got shocked when the result was filled with models trained on Millie Bobby Brown from Stranger Things. I don’t even want to know what those models are used for.

    • ExLisper@linux.community · 19 points · 10 months ago

      dystopian level of “prettifying” everything in the AI-generation world.

      So like all the ad campaigns, TV shows and movies in the real world?

    • anachronist@midwest.social · 12 points · 10 months ago

      I work in tech, and Asian guys tend to outnumber white guys in it, especially if you combine East Asian and South Asian.

    • 1984@lemmy.today · 11 points · edited · 10 months ago

      In 2024, the brainwashing of people is almost complete.

      Sensuality is now porn. :)

  • Omega_Haxors@lemmy.ml · 20 points · 10 months ago

    Stable Diffusion is little more than content laundering. It cannot create anything more than what you put in.

  • millie@beehaw.org · 15 points · 10 months ago

    I’m not exposed to a huge amount of media coming out of Asia, outside of a handful of Korean shows that Netflix has picked up, and anime. But like, if anime is any indicator, I’m not really surprised that the training data for Asian women is leaning more toward overt sexualization. Even setting aside the whole misogynistic ‘fan service’ thing, I don’t feel like I see as much representation of women who defy traditional gender roles as in the last twenty or so years of Western media.

    It certainly could be that anime is actually a huge outlier here, but if the training data is primarily from the English speaking web, it might be overrepresented anyway. But like, when it comes to weird AI image behaviors, it pays to think about the probable training data.

    Like, stable diffusion seems to do a better job of rendering jewelry if you tell it to surround it with berries. Given the output, this seems to be due to Christmas themed jewelry ads. They also tend to add a lot of bokeh for the same reason.

    • IHeartBadCode@kbin.social · 9 points · 10 months ago

      Absolutely this. The reason AI defaults female into “female armor mode” is the same reason Excel has January, February, Maruary. Our spicy autocorrect overlords cannot extrapolate data in a direction that their training has no knowledge of.

    • Scrubbles@poptalk.scrubbles.tech · 5 points · 10 months ago

      You train on a bunch of reddit crap, you’re going to get neckbeard reddit crap out. It’d look different if they only used art history books.

  • sculd@beehaw.org · 12 points · 10 months ago

    Some of the replies that try to dismiss the issue, and the general lack of concern from moderators about aggressive replies from AI apologists (in this thread but also in other AI-related threads), are disheartening.