Canadian Sikh Facebook users receive notifications that their posts are being taken down because they’re in violation of Indian law

  • Steeve@lemmy.ca · 1 year ago

    So they received a takedown request from the Indian government, mistook the users for being in India, followed the law they're required to follow there, and when it was brought to their attention that those users were actually based in Canada, they restored the posts. This doesn't seem as malicious as people are making it out to be. They should probably work on their geo-blocking, but with 3 billion users across 150+ countries, each with its own local laws, it's probably safer to be aggressive about removing content when requested.

    • blackfire@lemmy.world · 1 year ago

      I think this falls under Hanlon's razor: "Never attribute to malice that which is adequately explained by stupidity."

    • bobman@unilem.org · 1 year ago

      ‘Guilty until proven innocent.’

      Glad corporations get the power to make these decisions.

      • Steeve@lemmy.ca · 1 year ago

        Well, they don't, which is why they're taking down posts as required by the countries they operate in and are willing to accept a noticeable false positive rate to do it.

        • bobman@unilem.org · 1 year ago

          What are you talking about?

          What requirement is there in India for Facebook to ban Canadians?

          • Steeve@lemmy.ca · 1 year ago

            and willing to accept a noticeable false positive rate to do it.

            It’d probably help if you fully read the comments you’re replying to lol

            • bobman@unilem.org · 1 year ago

              So… guilty until proven innocent.

              Like I said. From the very beginning.

              • Steeve@lemmy.ca · 1 year ago

                Your first comment was incredibly vague… I was responding to this part:

                Glad corporations get the power to make these decisions.

                However, a high false positive rate is different from assuming every post is "guilty until proven innocent", and the two aren't mutually exclusive either. A current example is the automated removal of CSAM on Lemmy: a model was built to remove CSAM, and it has a high rate of false positives. Does that mean it assumes everything is CSAM until it can confirm it isn't? No. It could work that way (that's an implementation detail I don't know the specifics of), but a high false positive rate doesn't necessarily mean it does.

                But really, who cares? The false positive rate matters for site usability, sure, but the rest is an implementation detail in an AI model; this isn't a court of law. Nobody's putting you in Facebook prison because they accidentally mistook your post for rule-breaking.