Danish researchers created a private self-harm network on Instagram, including fake profiles of people as young as 13, in which they shared 85 pieces of self-harm-related content of gradually increasing severity, featuring blood, razor blades and encouragement of self-harm.

The aim of the study was to test Meta’s claim that it had significantly improved its processes for removing harmful content, which it says now use artificial intelligence (AI). The tech company claims to remove about 99% of harmful content before it is reported.

But Digitalt Ansvar (Digital Accountability), an organisation that promotes responsible digital development, found that in the month-long experiment not a single image was removed.

Rather than attempting to shut down the self-harm network, Instagram’s algorithm was actively helping it to expand. The research suggested that 13-year-olds became friends with all members of the self-harm group after they were connected with one of its members.

Comments

    • rowinxavier@lemmy.world · 3 months ago

      Meta’s claim that it blocks this type of material, combined with the material’s existing spread, means that adding a temporary source of it does not carry the level of harm one might expect. Testing whether Meta does in fact remove this type of content, and finding that it fails to, could reasonably be expected to lead to changes that reduce the amount of such material. The net result is a very small, essentially marginal increase in the amount of self-harm material and a fuller understanding of the efficacy of Meta’s filtering systems. If I were on the ethics board, I would approve.

    • OutlierBlue@lemmy.ca · 3 months ago

      Maybe the ethics board uses AI, claiming to remove about 99% of harmful studies before they are approved.

    • ilmagico@lemmy.world · 3 months ago

      The group was private and they created fake profiles … did I miss something?

      • Darth_Mew@lemmy.world · edited · 3 months ago

        Yeah, you did. The “fake” profiles could’ve been made by anyone and posted the same content to non-private groups, where it should’ve been blocked. The accounts being “fake” doesn’t take away from Meta’s claim that 99% of this type of content gets removed. Please use your brain.

  • FlashMobOfOne@lemmy.world · 3 months ago

    At least one country on earth is starting to get serious about regulating social media. Until there are real financial consequences for this, there won’t be any meaningful change.

  • hash@slrpnk.net · 3 months ago

    Meta will play damage control and introduce a feature which might help a little for a few weeks. There are other options on the table internally which might actually have a meaningful effect, but they would significantly pull down engagement so…

  • GHiLA@sh.itjust.works · 3 months ago

    It won’t end and will continue until society collapses because we never learn anything.

    Your best recourse is to do everything in your power as a parent to prevent your child from using this garbage considering it’s here to stay.

    • hedgehogging_the_bed@lemmy.world · 3 months ago

      Bullshit. Our best recourse as parents is to talk to our children every day to ensure their lives include people who will listen to and understand them as a constant presence, instead of random strangers on the Internet. Mere exposure to this shit isn’t the toxic part; it’s constant exposure without the context and support of caring adults to help kids contextualize the information. Just like sex, alcohol, and every other complex “adult” thing.

      • GHiLA@sh.itjust.works · edited · 3 months ago

        Bullshit. Our best course of action is to ditch technology entirely, and live as farmers in a communal society that seeks a symbiotic exposure to nature and a closer attachment to family and neighbors. We’d all have better sex, better alcohol and more artisanal adult things.

        crosses arms

        The one-upping crap is cringe and the most Lemmy thing on earth and I wish we’d stop it.

        You can expand on a conversation without drawing a sword on the last guy.

  • Optional@lemmy.world · 3 months ago

    In other news, science has indications the sun may be hot as a muthafucka.