• Asudox@lemmy.world · 98 points · 1 month ago

    Block? Nope, robots.txt does not block the bots. It’s just a text file that says: “Hey robot X, please do not crawl my website. Thanks :>”

    • ɐɥO@lemmy.ohaa.xyz · 53 points · 1 month ago

      I disallow a page in my robots.txt and IP-ban everyone who goes there. That’s pretty effective.
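
      A minimal sketch of that trap (the log format is assumed to be Apache/nginx common log format, and the decoy path /secret-trap/ is a made-up name): scan the access log for any client that requested the disallowed path, and collect its IP for banning.

```python
import re

TRAP_PATH = "/secret-trap/"  # must match a Disallow line in robots.txt

def ips_to_ban(access_log_lines):
    """Return the set of client IPs that requested the trap path."""
    # Common-log-format lines start with the client IP; the request
    # line is quoted, e.g. "GET /path HTTP/1.1"
    pattern = re.compile(r'^(\S+) .* "(?:GET|POST|HEAD) ([^ "]+)')
    banned = set()
    for line in access_log_lines:
        m = pattern.match(line)
        if m and m.group(2).startswith(TRAP_PATH):
            banned.add(m.group(1))
    return banned

log = [
    '203.0.113.7 - - [01/Jan/2025] "GET /secret-trap/page HTTP/1.1" 200 123',
    '198.51.100.2 - - [01/Jan/2025] "GET /index.html HTTP/1.1" 200 456',
]
print(sorted(ips_to_ban(log)))  # ['203.0.113.7']
```

      The resulting set can then be fed to a firewall deny list; expiring bans after a day keeps curious humans from being locked out for good.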

    • Cynicus Rex@lemmy.ml (OP) · 11 points · 1 month ago

      Unfortunate indeed.

      “Can AI bots ignore my robots.txt file? Well-established companies such as Google and OpenAI typically adhere to robots.txt protocols. But some poorly designed AI bots will ignore your robots.txt.”

      • breadsmasher@lemmy.world · 22 points · 1 month ago
        They “typically adhere”, but they don’t have to follow it.

        “poorly designed AI bots”

        Is it poor design if ignoring robots.txt is an explicit design choice, made to scrape as much data as possible? I’d argue these are AI bots designed to scrape everything regardless of robots.txt. That’s the intention. Asshole design vs. poor design.

    • majestictechie@lemmy.fosshost.com · 5 points · 1 month ago

      This is why I block them in an .htaccess file:

      # Bot Agent Block Rule
      RewriteEngine On
      # Case-insensitive match against the User-Agent header; replace the
      # BOTNAME placeholders with the bots you want to block
      RewriteCond %{HTTP_USER_AGENT} (BOTNAME|BOTNAME2|BOTNAME3) [NC]
      # [F] returns 403 Forbidden, [L] stops processing further rules
      RewriteRule (.*) - [F,L]
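
      For nginx, which has no .htaccess, a roughly equivalent block can go in the server context (the BOTNAME placeholders stand for real bot names, as above):

```
# Return 403 for any request whose User-Agent matches, case-insensitively
if ($http_user_agent ~* "BOTNAME|BOTNAME2|BOTNAME3") {
    return 403;
}
```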
      
  • Onno (VK6FLAB)@lemmy.radio · 24 points · 1 month ago

    This does not block anything at all.

    It’s a 1994 “standard” that requires voluntary compliance and the user-agent is a string set by the operator of the tool used to access your site.

    https://en.m.wikipedia.org/wiki/Robots.txt

    https://en.m.wikipedia.org/wiki/User-Agent_header

    In other words, the bot operator can simply ignore your robots.txt file, and since the user-agent is a string they set themselves, they can report whatever they like; checking your webserver logs won’t tell you whether they are ignoring you.

    • Cynicus Rex@lemmy.ml (OP) · 7 up / 9 down · edited · 1 month ago

      “Lies”, as in it’s not really “blocking” but a mere unenforceable request? If you meant something else, could you please point it out?

      • Da Bald Eagul@feddit.nl · 33 points · 1 month ago

        That is what they meant, yes. The title promises a block, completely preventing crawlers from accessing the site. That is not what is delivered.

  • digdilem@lemmy.ml · 22 points · 1 month ago

    robots.txt does not work. I don’t think it ever has - it’s an honour system with no penalty for ignoring it.

    I have a few low-traffic sites hosted at home, and when a crawler takes an interest it can totally flood my connection. I’m using Cloudflare and being incredibly aggressive with my filtering, but so many bots are ignoring robots.txt, as well as lying about who they are with human-looking UAs, that it’s having a real impact on my ability to serve the sites to humans.

    Over the past year it’s got around ten times worse. I woke up this morning to find my connection at a crawl; on checking the logs, AmazonBot had been hitting one site 12,000 times an hour, and that’s one of the better-behaved bots. There are thousands and thousands of them.

  • NullPointer@programming.dev · 19 points · 1 month ago

    robots.txt will not block a bad bot, but you can use it to lure the bad bots into a “bot-trap” so you can ban them in an automated fashion.

    • Dave.@aussie.zone · 8 up / 1 down · 1 month ago

      I’m guessing something like:

      Robots.txt: Do not index this particular area.

      Main page: invisible link to particular area at top of page, with alt text of “don’t follow this, it’s just a bot trap” for screen readers and such.

      Result: any access to said particular area equals insta-ban for that IP. Maybe just for 24 hours so nosy humans can get back to enjoying your site.
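
      A sketch of the two pieces (the /bot-trap/ path is a made-up name):

```
# robots.txt: well-behaved crawlers will skip this path
User-agent: *
Disallow: /bot-trap/
```

```html
<!-- main page: invisible to humans, found by crawlers that ignore robots.txt -->
<a href="/bot-trap/" style="display:none" aria-hidden="true">bot trap, do not follow</a>
```

      Any IP that then shows up in the access log requesting /bot-trap/ gets banned.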

          • doodledup@lemmy.world · 1 up / 3 down · 1 month ago

            You misunderstand. Sometimes you want your public website to be indexed by search engines but not scraped for the next LLM. If you disallow crawling altogether, then you won’t be indexed by search engines at all. That can be a problem.

            • ɐɥO@lemmy.ohaa.xyz · 6 points · 1 month ago

              I know that. That’s why I don’t ban everyone, only those who don’t follow the rules in my robots.txt. All “sane” search-engine crawlers should follow those, so it’s no problem.

        • mox@lemmy.sdf.org · 5 points · edited · 1 month ago

          “Robots.txt: Do not index this particular area.”

          “Problem is that you’re also blocking search engines to index your site, no?”

          No. That’s why they wrote “this particular area”.

          The point is to have an area of the site that serves no purpose other than to catch bots that ignore the rules in robots.txt. Legit search engine indexers will respect directives in robots.txt to avoid that area; they will still index everything else. Bad bots will ignore the directives, index the forbidden area anyway, and by doing so, reveal themselves in the server logs.

          That’s the trap, aka honeypot.

  • 5opn0o30@lemmy.world · 18 up / 5 down · 1 month ago

    Wow. A lot of cynicism here. The AI bots are (currently) honoring robots.txt, so this is an easy way to say “go away”. Honeypot URLs can be a second line of defense, as can blocking published IP ranges. They’re no different from other bots that have existed for years.

    • digdilem@lemmy.ml · 9 points · edited · 1 month ago

      In my experience, the AI bots are absolutely not honouring robots.txt, and there are literally hundreds of unique ones. Everyone and their dog has unleashed AI/LLM harvesters over the past year without much thought to the impact on low-bandwidth sites.

      Many of them aren’t even identifying themselves as AI bots, but faking human user-agents.

  • breadsmasher@lemmy.world · 10 points · 1 month ago

    It isn’t an enforceable solution. robots.txt and similar are just a “please don’t index these pages” request to bots. That doesn’t mean any bot will respect it.

  • Cynicus Rex@lemmy.ml (OP) · 11 up / 3 down · 1 month ago

    TL;DR:

    User-agent: GPTBot
    Disallow: /
    User-agent: ChatGPT-User
    Disallow: /
    User-agent: Google-Extended
    Disallow: /
    User-agent: PerplexityBot
    Disallow: /
    User-agent: Amazonbot
    Disallow: /
    User-agent: ClaudeBot
    Disallow: /
    User-agent: Omgilibot
    Disallow: /
    User-Agent: FacebookBot
    Disallow: /
    User-Agent: Applebot
    Disallow: /
    User-agent: anthropic-ai
    Disallow: /
    User-agent: Bytespider
    Disallow: /
    User-agent: Claude-Web
    Disallow: /
    User-agent: Diffbot
    Disallow: /
    User-agent: ImagesiftBot
    Disallow: /
    User-agent: Omgili
    Disallow: /
    User-agent: YouBot
    Disallow: /
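
    Since consecutive User-agent lines share the rule block that follows them, the same policy can be written more compactly:

```
User-agent: GPTBot
User-agent: ChatGPT-User
User-agent: Google-Extended
User-agent: PerplexityBot
User-agent: Amazonbot
User-agent: ClaudeBot
User-agent: Omgilibot
User-agent: Omgili
User-agent: FacebookBot
User-agent: Applebot
User-agent: anthropic-ai
User-agent: Bytespider
User-agent: Claude-Web
User-agent: Diffbot
User-agent: ImagesiftBot
User-agent: YouBot
Disallow: /
```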