Digital Marketing Handbook


Robots Exclusion Standard


Nonstandard extensions


Crawl-delay directive


Several major crawlers support a Crawl-delay parameter, set to the number of seconds to wait between
successive requests to the same server:[5][6][7]

User-agent: *
Crawl-delay: 10
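A crawler that honors this directive would read the delay for its own user-agent group and pause between requests. The following is a minimal parsing sketch, not the implementation of any particular crawler; the grouping logic (a non-`User-agent` line ends a user-agent list) follows the common convention:

```python
def parse_crawl_delay(robots_txt, user_agent="*"):
    """Return the Crawl-delay in seconds for the given user-agent, or None.

    A run of consecutive User-agent lines forms a group; any other
    directive ends that run. Crawl-delay applies if the group names
    our user-agent or the wildcard "*".
    """
    delay = None
    applies = False        # does the current group cover our user-agent?
    in_agent_list = True   # are we still reading User-agent lines?
    for raw in robots_txt.splitlines():
        line = raw.split("#", 1)[0].strip()  # strip comments/whitespace
        if not line or ":" not in line:
            continue
        field, value = (part.strip() for part in line.split(":", 1))
        field = field.lower()
        if field == "user-agent":
            if not in_agent_list:  # a new group starts here
                applies = False
            in_agent_list = True
            if value == "*" or value.lower() == user_agent.lower():
                applies = True
        else:
            in_agent_list = False
            if field == "crawl-delay" and applies:
                delay = float(value)
    return delay
```

A polite crawler would then sleep for `parse_crawl_delay(...)` seconds between successive requests to the same server.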

Allow directive


Some major crawlers support an Allow directive, which can counteract a following Disallow directive.[8][9]
This is useful when one tells robots to avoid an entire directory but still wants some HTML documents in that
directory crawled and indexed. While by standard implementation the first matching robots.txt pattern always wins,
Google's implementation differs in that Allow patterns with equal or more characters in the directive path win over a
matching Disallow pattern.[10] Bing uses whichever Allow or Disallow directive is the most specific.[11]
To be compatible with all robots, if one wants to allow single files inside an otherwise disallowed directory, it
is necessary to place the Allow directive(s) first, followed by the Disallow, for example:

Allow: /folder1/myfile.html
Disallow: /folder1/

This example will disallow anything in /folder1/ except /folder1/myfile.html, since the latter matches first. In the
case of Google, though, the order is not important.
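The two matching strategies can be contrasted with a small sketch. This is illustrative only (real crawlers also handle wildcards and other details), treating each rule as an (allow, path-prefix) pair:

```python
def first_match(rules, path):
    """Standard behavior: the first rule whose prefix matches the path wins."""
    for allow, prefix in rules:
        if path.startswith(prefix):
            return allow
    return True  # no rule matches: crawling is allowed by default


def longest_match(rules, path):
    """Google-style behavior: the most specific (longest) matching prefix
    wins; on a tie, Allow wins over Disallow."""
    candidates = [r for r in rules if path.startswith(r[1])]
    if not candidates:
        return True
    # Sort key: prefix length first, then allow (True beats False on a tie).
    best = max(candidates, key=lambda r: (len(r[1]), r[0]))
    return best[0]


rules = [(True, "/folder1/myfile.html"), (False, "/folder1/")]

first_match(rules, "/folder1/myfile.html")   # True: Allow is listed first
first_match(rules, "/folder1/other.html")    # False: falls through to Disallow
# With longest-match semantics the order of the rules does not matter:
longest_match(list(reversed(rules)), "/folder1/myfile.html")  # True
```

This is why the Allow line must come first for first-match crawlers, while for Google the ordering is irrelevant.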

Sitemap


Some crawlers support a Sitemap directive, allowing multiple Sitemaps in the same robots.txt in the form:[12]


Sitemap: http://www.gstatic.com/s2/sitemaps/profiles-sitemap.xml
Sitemap: http://www.google.com/hostednews/sitemap_index.xml
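Since Sitemap directives are not tied to any user-agent group, collecting them is straightforward. A minimal sketch:

```python
def sitemap_urls(robots_txt):
    """Collect every Sitemap directive in a robots.txt file.

    Sitemap lines apply globally, regardless of user-agent groups,
    so a simple scan over all lines is enough.
    """
    urls = []
    for line in robots_txt.splitlines():
        field, _, value = line.partition(":")
        if field.strip().lower() == "sitemap":
            urls.append(value.strip())
    return urls
```

Note that `partition(":")` splits only on the first colon, so the `http://` in the sitemap URL is preserved intact.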

Universal "*" match


The Robots Exclusion Standard does not mention anything about the "*" character in the Disallow: statement.
Some crawlers, such as Googlebot and Slurp, recognize strings containing "*", while MSNbot and Teoma interpret it in
different ways.[13]
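One common interpretation (used by Googlebot, though other crawlers differ, so treat this as an assumption rather than a universal rule) is that "*" matches any run of characters and a trailing "$" anchors the end of the URL. That reading can be sketched by translating the pattern into a regular expression:

```python
import re

def matches(pattern, path):
    """Match a robots.txt path pattern against a URL path.

    Assumed semantics (Googlebot-style): "*" matches any sequence of
    characters; a trailing "$" anchors the match to the end of the path.
    Other crawlers may interpret "*" differently or not at all.
    """
    anchored = pattern.endswith("$")
    if anchored:
        pattern = pattern[:-1]
    # Escape everything except "*", which becomes ".*".
    regex = "".join(".*" if ch == "*" else re.escape(ch) for ch in pattern)
    regex = "^" + regex + ("$" if anchored else "")
    return re.match(regex, path) is not None

matches("/private*.html", "/private-2024.html")  # True
matches("/*.gif$", "/images/photo.gif")          # True
matches("/*.gif$", "/images/photo.gif?x=1")      # False: "$" anchors the end
```

A crawler that does not support "*" would instead treat the pattern as a literal prefix, which is why the same Disallow line can behave differently across search engines.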

References
[1] http://www.robotstxt.org/orig.html#status
[2] http://www.robotstxt.org/norobots-rfc.txt
[3] http://www.youtube.com/watch?v=KBdEwpRQRD0#t=196s
[4] http://www.robotstxt.org/wc/norobots.html
[5] "How can I reduce the number of requests you make on my web site?" (http://help.yahoo.com/l/us/yahoo/search/webcrawler/slurp-03.html). Yahoo! Slurp. Retrieved 2007-03-31.
[6] "MSNBot is crawling a site too frequently" (http://search.msn.com/docs/siteowner.aspx?t=SEARCH_WEBMASTER_FAQ_MSNBotIndexing.htm&FORM=WFDD#D). Troubleshoot issues with MSNBot and site crawling. Retrieved 2007-02-08.
[7] "About Ask.com: Webmasters" (http://about.ask.com/en/docs/about/webmasters.shtml#15).
[8] "Webmaster Help Center - How do I block Googlebot?" (http://www.google.com/support/webmasters/bin/answer.py?hl=en&answer=156449&from=40364). Retrieved 2007-11-20.
[9] "How do I prevent my site or certain subdirectories from being crawled? - Yahoo Search Help" (http://help.yahoo.com/l/us/yahoo/search/webcrawler/slurp-02.html). Retrieved 2007-11-20.