
    Google Wants to Establish an Official Standard for Using Robots.txt

    Google has proposed an official internet standard for the rules included in robots.txt files.

    Those rules, outlined in the Robots Exclusion Protocol (REP), have been an unofficial standard for the past 25 years.

    While the REP has been adopted by search engines, it’s still not official, which means it’s open to interpretation by developers. Further, it has never been updated to cover today’s use cases.

    Google Webmasters (@googlewmc) tweeted: “It’s been 25 years, and the Robots Exclusion Protocol never became an official standard. While it was adopted by all major search engines, it didn’t cover everything: does a 500 HTTP status code mean that the crawler can crawl anything or nothing? 😕”


    As Google says, this creates a challenge for website owners because the ambiguously written, de facto standard makes it difficult to write the rules correctly.

    To eliminate this challenge, Google has documented how the REP is used on the modern web and submitted it to the Internet Engineering Task Force (IETF) for review.


    Google explains what is included in the draft:

    “The proposed REP draft reflects over 20 years of real world experience of relying on robots.txt rules, used both by Googlebot and other major crawlers, as well as about half a billion websites that rely on REP. These fine grained controls give the publisher the power to decide what they’d like to be crawled on their site and potentially shown to interested users.”
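    The fine-grained controls Google describes are the familiar per-user-agent allow and disallow rules in robots.txt. As a minimal sketch of how a crawler consults them, the snippet below uses Python’s standard-library robots.txt parser; the domain and paths are placeholders, not examples from the article.

        from urllib import robotparser

        # Fetch and parse a site's robots.txt (placeholder URL).
        rp = robotparser.RobotFileParser()
        rp.set_url("https://example.com/robots.txt")
        rp.read()

        # Ask whether a given user agent may crawl a given path.
        print(rp.can_fetch("Googlebot", "https://example.com/private/page.html"))
        print(rp.can_fetch("*", "https://example.com/blog/post.html"))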

    The draft does not change any of the rules established in 1994; it simply updates them for the modern web.

    Some of the updated rules include:

    • Any URI-based transfer protocol can use robots.txt. It’s no longer limited to HTTP and can be used for FTP or CoAP as well.
    • Developers must parse at least the first 500 kibibytes of a robots.txt file.
    • A new maximum caching time of 24 hours (or a cache directive value, if available) gives website owners the flexibility to update their robots.txt whenever they want.
    • When a robots.txt file becomes inaccessible due to server failures, known disallowed pages are not crawled for a reasonably long period of time (see the sketch after this list).
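    To make those limits concrete, here is a rough sketch (not Google’s implementation) of how a crawler might apply them when fetching robots.txt: read just the first 500 kibibytes (the portion a parser is required to handle), cache the result for at most 24 hours, and keep honouring the last known rules when the server fails. The function name, cache structure, and error handling below are illustrative assumptions.

        import time
        import urllib.error
        import urllib.request

        MAX_PARSE_BYTES = 500 * 1024      # draft: parse at least the first 500 KiB
        MAX_CACHE_SECONDS = 24 * 60 * 60  # draft: maximum caching time of 24 hours

        _cache = {}  # host -> (fetched_at, body or None)

        def get_robots_txt(host):
            """Return the robots.txt body for a host, within the draft's limits."""
            now = time.time()
            cached = _cache.get(host)
            if cached and now - cached[0] < MAX_CACHE_SECONDS:
                return cached[1]  # cached copy is still fresh, reuse it
            try:
                with urllib.request.urlopen(f"https://{host}/robots.txt") as resp:
                    # Content beyond the first 500 KiB may be ignored.
                    body = resp.read(MAX_PARSE_BYTES).decode("utf-8", errors="replace")
            except urllib.error.HTTPError as err:
                if err.code >= 500 and cached:
                    # Server failure: keep using the last known rules for a
                    # while rather than crawling previously disallowed pages.
                    return cached[1]
                body = None  # e.g. 404: no rules, crawling is unrestricted
            _cache[host] = (now, body)
            return body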

    Google is fully open to feedback on the proposed draft and says it’s committed to getting it right.
