Cloudflare.com's Robots.txt

(cloudflare.com)

145 points | by sans_souse 8 months ago

44 comments

  • seanwilson 8 months ago

    I have an ASCII art Easter egg like this in an SEO product I made. :)

    https://www.checkbot.io/robots.txt

    I should probably add this SEO tip too because the purpose of robots.txt is confusing: If you want to remove/deindex a page from Google search, you counterintuitively need to allow the page to be crawled in the robots.txt file, and then add a noindex response header or noindex meta tag to the page. This way the crawler gets to see the noindex instruction. Robots.txt controls which pages can be crawled, not which pages can be indexed.
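
    For example (hypothetical /old-page path), you keep the page crawlable and serve the noindex signal, either as an X-Robots-Tag: noindex response header or as a meta tag:

      # robots.txt: leave the page crawlable so the crawler can see the noindex
      User-agent: *
      Allow: /old-page

      <!-- in /old-page's <head> -->
      <meta name="robots" content="noindex">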

    • dazc 8 months ago

      The consequences of robots.txt misuse can also be disastrous for a regular site. For example, I've seen instances where multiple warnings of 'page indexed but blocked by robots.txt' have led to sites being severely down-ranked.

      My assumption is that search engines don't want to list too many pages that everyone else can read but they cannot.

  • palsecam 8 months ago

    That’s a funny one!

    Anyone know of others like that?

    Here is mine: https://FreeSolitaire.win/robots.txt

  • jsheard 8 months ago

    This is what happens if your robot isn't nice:

      > curl -I -H "User-Agent: Googlebot" https://www.cloudflare.com
      HTTP/2 403

  • m-app 8 months ago

    What does “OUR TREE IS A REDWOOD” refer to? A quick search doesn’t yield any definite results.

    • dlevine 8 months ago

      California’s state tree is the redwood, and that’s where their HQ is.

      • m-app 8 months ago

        Right, that makes sense. But why would you mention your state’s tree anywhere, and why specifically in your robots.txt? Seems pretty random.

        • judge2020 8 months ago

          State pride I suppose.

        • NewJazz 8 months ago

          Have you seen a redwood? They make quite an impression.

        • SllX 8 months ago

          Redwoods are awesome.

      • ccorcos 8 months ago

        The tree shape is fairly inaccurate though.

  • chrisweekly 8 months ago

    One nice thing about CF's robots.txt is its inclusion of a sitemap:

    https://www.cloudflare.com/sitemap.xml

    which contains links to educational materials like

    https://www.cloudflare.com/learning/ddos/layer-3-ddos-attack...

    Potentially interesting to see their flattened IA (information architecture)...

    • palsecam 8 months ago

      Little-known fact: a syndication feed (RSS or Atom) can be used as a sitemap.

      Quoting https://www.sitemaps.org/protocol.html#otherformats:

      > The Sitemap protocol enables you to provide details about your pages to search engines, […] in addition to the XML protocol, we support RSS feeds and text files, which provide more limited information.

      > You can provide an RSS (Real Simple Syndication) 2.0 or Atom 0.3 or 1.0 feed. Generally, you would use this format only if your site already has a syndication feed.
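
      In robots.txt terms, that just means pointing the Sitemap directive at the feed instead of an XML sitemap (hypothetical URL):

        Sitemap: https://example.com/feed.xml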

  • yapyap 8 months ago

    That’s cool, if any scrapers still respect the robots.txt, that is.

    • bityard 8 months ago

      Think of robots.txt as less of a no-trespassing sign and more of a "You can visit, but here are the rules to follow if you don't want to get shot" sign.

      • iterance 8 months ago

        If you do not respect the sign I shall be very cross with you. Very cross indeed. Perhaps I shall have to glare at you, yes, very hard. I think I shall glare at you. Perhaps if you are truly irritating I shall be forced to remove you from the premises for a bit.

      • blacksmith_tb 8 months ago

        There's a lot of talk of deregulation in the air; maybe we'll see Gibson-esque Black Ice, where rude crawlers provoke an automated DoS. A new Wild West.

    • marginalia_nu 8 months ago

      They may or may not, though respecting robots.txt is a nice way of not having your IP range end up on blacklists. With Cloudflare in particular, that can be a bit of a pain.

      They're pretty nice to deal with if you're upfront about what you are doing and clearly identify your bot, as well as register it with their bot detection. There's a form floating around somewhere for that.

    • andrethegiant 8 months ago

      FWIW, that’s why I’m working on a platform[1] to help devs deploy polite crawlers and scrapers that respect robots.txt (and 429s, Retry-After response headers, etc.) out of the box. It also happens to be entirely built on Cloudflare.

      [1] https://crawlspace.dev
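
      The basic politeness loop is simple enough to sketch with just the Python standard library (a generic illustration with a hypothetical bot name, not how crawlspace.dev actually works):

        import time
        import urllib.error
        import urllib.request
        import urllib.robotparser
        from urllib.parse import urlsplit

        def polite_get(url, agent="examplebot"):  # hypothetical bot name
            parts = urlsplit(url)
            rp = urllib.robotparser.RobotFileParser(
                f"{parts.scheme}://{parts.netloc}/robots.txt")
            rp.read()
            if not rp.can_fetch(agent, url):
                return None  # robots.txt disallows this path: skip it
            req = urllib.request.Request(url, headers={"User-Agent": agent})
            try:
                return urllib.request.urlopen(req)
            except urllib.error.HTTPError as e:
                if e.code == 429:  # rate limited: back off, then retry once
                    # assumes Retry-After is delta-seconds (it may be a date)
                    time.sleep(int(e.headers.get("Retry-After", "60")))
                    return urllib.request.urlopen(req)
                raise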

    • dartos 8 months ago

      I was surprised any ever did, honestly

  • CodesInChaos 8 months ago

    What's the purpose of "User-Agent: DemandbaseWebsitePreview/0.1"? I couldn't find anything about that agent, but I assume it's somehow related to demandbase.com?

    But why are it and Twitter the only whitelisted entries? Google and Bing being missing is a bit surprising, but I assume they're whitelisted through a different mechanism (like a Google webmaster account)?
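
    For reference, an allowlist-style robots.txt has this general shape (a generic sketch, not Cloudflare's actual file):

      User-agent: Twitterbot
      Disallow:

      User-agent: DemandbaseWebsitePreview/0.1
      Disallow:

      User-agent: *
      Disallow: /

    An empty Disallow value permits everything for that agent, while the final wildcard block shuts everyone else out.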

    • saddist0 8 months ago

      It is one of the services they use. Per the cookie policy page [1]:

      > DemandBase - Enables us to identify companies who intend to purchase our products and solutions and deliver more relevant messages and offers to our Website visitors.

      [1]: https://www.cloudflare.com/en-in/cookie-policy/

    • Maken 8 months ago

      My guess is that the Twitter one is for generating previews when you link to a website on Twitter.

  • op00to 8 months ago

    If those robots could read, they'd be very upset.

  • ck2 8 months ago

    easy guess that the length breaks some legacy stuff

    but every robots.txt should have an auto-ban trap line

    i.e. crawl it and die

    basically a script that puts the requesting IP into the firewall (a sketch follows below)

    of course it's possible to abuse that, so it has to be monitored
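
    A minimal sketch of such a trap as a Python WSGI app (all paths hypothetical; the firewall update itself is assumed to be a separate script that consumes the ban list). Pair it with a "Disallow: /trap/" line in robots.txt so well-behaved crawlers never touch it:

      def app(environ, start_response):
          if environ["PATH_INFO"].startswith("/trap/"):
              # only clients that ignore robots.txt ever reach this path
              ip = environ.get("HTTP_X_FORWARDED_FOR",
                               environ.get("REMOTE_ADDR", ""))
              with open("/var/run/banlist.txt", "a") as f:  # hypothetical path
                  f.write(ip.split(",")[0].strip() + "\n")
              start_response("403 Forbidden", [("Content-Type", "text/plain")])
              return [b"Banned"]
          start_response("200 OK", [("Content-Type", "text/plain")])
          return [b"OK"]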

    • johneth 8 months ago

      I thought about doing something like that, but then I realised: what if someone linked to the trap URL from another site and a crawler followed that link to the trap?

      You might end up penalising Googlebot or Bingbot.

      If anyone knew what that trap URL did and felt malicious, this could happen.

      • CodesInChaos 8 months ago

        A crawler could easily avoid that by fetching the target domain's robots.txt before fetching the link target. However, a website could also embed the honeypot link in an <img> tag and get the user banned when their browser attempts to load the image.
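
        E.g. any page could embed something like this (hypothetical trap URL), and every visitor's browser would request it and get their own IP banned:

          <img src="https://victim.example/trap/">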

    • okdood64 8 months ago

      How do you discern a crawler agent from a human? Is it as easy as the fact that they might cover something like 80%+ of the site in one visit fairly quickly?

      • SoftTalker 8 months ago

        Crawlers/archivers will be hitting your site much faster than a human user.
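
        A toy version of that rate heuristic in Python (window and threshold are arbitrary assumptions):

          import time
          from collections import defaultdict, deque

          WINDOW_SECONDS, MAX_HITS = 10.0, 30  # arbitrary thresholds
          recent = defaultdict(deque)  # ip -> timestamps of recent requests

          def looks_like_crawler(ip):
              now = time.time()
              q = recent[ip]
              q.append(now)
              while q and now - q[0] > WINDOW_SECONDS:
                  q.popleft()  # drop hits outside the sliding window
              return len(q) > MAX_HITS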

  • orliesaurus 8 months ago

    Has anyone worked on anything like this for AI scrapers?
