googlebot

Latest

  • Google pushes for an official web crawler standard

    by Jon Fingas, 07.01.2019

    One of the cornerstones of Google's business (and really, the web at large) is the robots.txt file that sites use to exclude some of their content from the search engine's web crawler, Googlebot. It minimizes pointless indexing and sometimes keeps sensitive info under wraps. Google thinks its crawler tech can improve, though, and so it's shedding some of its secrecy. The company is open-sourcing the parser used to decode robots.txt in a bid to foster a true standard for web crawling. Ideally, this will take much of the mystery out of deciphering robots.txt files and create more of a common format.
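
    Google's open-sourced parser is a C++ library, but the exclusion mechanics it aims to standardize are easy to see with Python's built-in robots.txt parser. The sketch below is illustrative only -- the rules and URLs are hypothetical, and this is not the open-sourced code itself:

        # Minimal sketch of how a crawler consults robots.txt before fetching
        # a page, using Python's standard-library parser (not Google's C++
        # library). The rules and URLs here are hypothetical.
        from urllib.robotparser import RobotFileParser

        rules = """
        User-agent: Googlebot
        Disallow: /private/
        Allow: /

        User-agent: *
        Disallow: /
        """.splitlines()

        parser = RobotFileParser()
        parser.parse(rules)

        # Googlebot may crawl public pages but must skip the excluded path.
        print(parser.can_fetch("Googlebot", "https://example.com/articles/1"))  # True
        print(parser.can_fetch("Googlebot", "https://example.com/private/x"))   # False
        # Every other crawler is excluded entirely by the wildcard group.
        print(parser.can_fetch("OtherBot", "https://example.com/articles/1"))   # False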

  • Google bots learning to read webpages like humans, one step closer to knowing everything

    by Sarah Silbert, 05.17.2012

    Google just launched its Knowledge Graph, a tool intended to deliver more accurate information by analyzing the way users search. Of course, with a desire to provide better search results comes a need for improved site-reading capabilities. JavaScript and AJAX have traditionally thrown a wrench into Google bots' journey through a webpage, but it looks like the search engine has developed some smarter specimens. While digging through Apache logs, a developer spotted evidence that bots now execute the JavaScript they encounter -- and rather than just mining for URLs, the crawlers seem to be mimicking how users click on objects to activate them. That means bots can dig deeper into the web, accessing databases and other content that wasn't previously indexable. Looks like Google is one step closer to success in its quest to know everything.
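
    The shift is easy to picture with a toy page: a crawler that only scans static HTML for href attributes never discovers content a script fetches after a click. The Python sketch below is purely illustrative -- the page, the endpoint and the regexes are hypothetical, not Google's crawling code:

        # Illustrative only: why executing JavaScript matters to a crawler.
        import re

        PAGE = """
        <html><body>
          <a href="/about">About</a>
          <button onclick="load()">More stories</button>
          <script>
            function load() {
              // this content exists only after the click handler runs
              fetch('/api/stories?page=2')
                .then(r => r.text())
                .then(html => { document.body.innerHTML += html; });
            }
          </script>
        </body></html>
        """

        # A static scan of the markup finds only the plain anchor...
        print(re.findall(r'href="([^"]+)"', PAGE))       # ['/about']
        # ...while the AJAX endpoint surfaces only by running (or at least
        # reading) the script, which is what the newer bots appear to do.
        print(re.findall(r"fetch\('([^']+)'\)", PAGE))   # ['/api/stories?page=2']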