    Google Wants to Establish an Official Standard for Using Robots.txt

    Google has proposed an official internet standard for the rules included in robots.txt files.

    Those rules, outlined in the Robots Exclusion Protocol (REP), have been an unofficial standard for the past 25 years.

    While the REP has been adopted by search engines, it's still not official, which means it's open to interpretation by developers. Further, it has never been updated to cover today's use cases.

    Google Webmasters (@googlewmc) tweeted:

    "It's been 25 years, and the Robots Exclusion Protocol never became an official standard. While it was adopted by all major search engines, it didn't cover everything: does a 500 HTTP status code mean that the crawler can crawl anything or nothing? 😕"

    As Google says, this creates a challenge for website owners because the ambiguously written de facto standard makes it difficult to write the rules correctly.

    To eliminate this challenge, Google has documented how the REP is used on the modern web and submitted it to the Internet Engineering Task Force (IETF) for review.

    Google explains what is included in the draft:

    “The proposed REP draft reflects over 20 years of real world experience of relying on robots.txt rules, used both by Googlebot and other major crawlers, as well as about half a billion websites that rely on REP. These fine grained controls give the publisher the power to decide what they’d like to be crawled on their site and potentially shown to interested users.”
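
    The "fine grained controls" in that quote are the familiar robots.txt directives. As a hypothetical illustration (this example is ours, not from the draft, and the paths are made up), a publisher might write:

        User-agent: *
        Disallow: /staging/

        User-agent: Googlebot
        Disallow: /staging/
        Disallow: /internal-search/

    Groups are keyed by user agent, so a publisher can apply a default rule to every crawler while giving specific crawlers stricter or looser instructions.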

    The draft does not change any of the rules established in 1994; it simply updates them for the modern web.

    Some of the updated rules include:

    • Any URI-based transfer protocol can use robots.txt; it's no longer limited to HTTP and can be used for FTP or CoAP as well.
    • Developers must parse at least the first 500 kibibytes (KiB) of a robots.txt file.
    • A new maximum caching time of 24 hours (or the cache directive value, if available) gives website owners the flexibility to update their robots.txt whenever they want.
    • When a robots.txt file becomes inaccessible due to server failures, known disallowed pages are not crawled for a reasonably long period of time. (A rough sketch of the last three behaviors follows this list.)
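
    The last three rules translate directly into crawler code. Below is a minimal Python sketch of a fetch-and-cache wrapper built on the standard library's urllib.robotparser; it is our illustration, not Google's implementation, and it simplifies the server-failure rule to "crawl nothing until robots.txt is reachable again":

        import time
        import urllib.error
        import urllib.request
        from urllib.robotparser import RobotFileParser

        MAX_PARSE_BYTES = 500 * 1024      # parse (at least) the first 500 KiB
        MAX_CACHE_SECONDS = 24 * 60 * 60  # re-fetch after at most 24 hours

        class CachedRobots:
            """Fetches and caches one site's robots.txt per the draft's rules."""

            def __init__(self, robots_url):
                self.robots_url = robots_url  # e.g. "https://example.com/robots.txt"
                self.parser = None
                self.fetched_at = 0.0
                self.server_error = False     # True after a 5xx response

            def _refresh(self):
                self.fetched_at = time.time()
                try:
                    with urllib.request.urlopen(self.robots_url) as resp:
                        body = resp.read(MAX_PARSE_BYTES)  # ignore bytes past the limit
                    self.parser = RobotFileParser()
                    self.parser.parse(body.decode("utf-8", errors="replace").splitlines())
                    self.server_error = False
                except urllib.error.HTTPError as err:
                    if err.code >= 500:
                        # Server failure: be conservative and crawl nothing for now.
                        self.server_error = True
                    else:
                        # 4xx (e.g. no robots.txt at all): treat as allow-everything.
                        self.parser = RobotFileParser()
                        self.parser.parse([])
                        self.server_error = False

            def can_fetch(self, user_agent, url):
                if self.parser is None or time.time() - self.fetched_at > MAX_CACHE_SECONDS:
                    self._refresh()
                if self.server_error:
                    return False
                return self.parser.can_fetch(user_agent, url)

    A production crawler would also honor Cache-Control headers (the "cache directive value" the draft mentions) and keep honoring the last known rules during an outage rather than stopping entirely, but the sketch shows where each rule hooks in.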

    Google is fully open to feedback on the proposed draft and says it’s committed to getting it right.
