• simple@lemm.ee · 3 months ago

      If this is implemented right, it should flag accounts for human reviewers to follow up on, not take action on its own.
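
      A minimal sketch of what that flag-for-review design could look like. Everything here is hypothetical for illustration: the classifier, the review queue, and the 0.9 threshold are made-up names, not any real system's API.

      ```python
      from dataclasses import dataclass

      @dataclass
      class Flag:
          account_id: str
          score: float
          excerpt: str  # short snippet kept so the reviewer has context

      def triage(account_id: str, message: str, classifier, review_queue,
                 threshold: float = 0.9) -> None:
          """Score a message and, above the threshold, enqueue it for a human.

          Deliberately has no ban/mute path: the system only flags, and any
          enforcement decision stays with the human reviewers.
          """
          score = classifier.score(message)
          if score >= threshold:
              review_queue.put(Flag(account_id, score, message[:200]))
      ```

      The point of the design is in what's missing: there is no code path from a high score to an automated penalty.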

      • Inucune@lemmy.world · 3 months ago

        Even so, the ‘flag’ alone could be damning enough evidence for some people to take action. We’re in cultural ‘guilty until proven innocent’ territory, where a mere accusation ruins lives.

  • Dem Bosain@midwest.social · 3 months ago

    nicknamed SERI for “Stop CybERgroomIng.”

    Maybe we should have an AI that generates proper acronyms.

  • General_Effort@lemmy.world · 3 months ago

    I guess most people don’t get how terrifyingly dystopian this is.

    In the EU, there is a serious push to make this mandatory.

  • devfuuu@lemmy.world · 3 months ago (edited)

    Get ready for when the AI bots start behaving like children to bait people and build relationships with them.

    • dustyData@lemmy.world · 3 months ago

      This has already happened. There was a news article about a police force that used AI to bait groomers. This is just further automation of something that’s already being done.

  • sucrerey@lemmy.world · 3 months ago

    Weird question: if this worked, couldn’t the same dataset be used to train a very skillful AI cybergrooming chatbot if it fell into the wrong hands?