AP Photo/Mark Lennihan

    Google pushes for an official web crawler standard

    by Jon Fingas, 07.01.2019

    One of the cornerstones of Google's business (and really, the web at large) is the robots.txt file that sites use to exclude some of their content from the search engine's web crawler, Googlebot. It minimizes pointless indexing and sometimes keeps sensitive info under wraps. Google thinks its crawler tech can improve, though, and so it's shedding some of its secrecy: the company is open-sourcing the parser used to decode robots.txt in a bid to foster a true standard for web crawling. Ideally, this will take much of the mystery out of deciphering robots.txt files and help create more of a common format.
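
    For a sense of what these exclusion rules look like in practice, here's a minimal sketch using Python's standard-library robots.txt parser (urllib.robotparser) rather than Google's newly open-sourced one; the robots.txt content and URLs below are made up for illustration.

    ```python
    # Minimal sketch of robots.txt exclusion rules, using Python's built-in
    # urllib.robotparser (not Google's open-sourced parser). The rules and
    # URLs are hypothetical examples.
    from urllib.robotparser import RobotFileParser

    # A tiny robots.txt: every crawler is barred from /private/, all else allowed.
    robots_txt = """\
    User-agent: *
    Disallow: /private/
    """.splitlines()

    parser = RobotFileParser()
    parser.parse(robots_txt)

    # Googlebot (or any crawler that honors robots.txt) may fetch the homepage...
    print(parser.can_fetch("Googlebot", "https://example.com/index.html"))          # True
    # ...but not anything under /private/.
    print(parser.can_fetch("Googlebot", "https://example.com/private/report.html")) # False
    ```

    Without a formal spec, different parsers can read edge cases in files like this differently, which is the ambiguity Google's push for a standard is meant to remove.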