Disallow: / # keep them out
Example demonstrating how to add the Sitemap parameter to tell bots where the sitemap is located:
User-agent: *
Sitemap: http://www.example.com/sitemap.xml # tell the bots where your sitemap is located
Nonstandard extensions
Crawl-delay directive
Several major crawlers support a Crawl-delay parameter, set to the number of seconds to wait between
successive requests to the same server:[4][5][6]
User-agent: *
Crawl-delay: 10
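A crawler that wants to honor this directive can read it back with Python's standard urllib.robotparser module, whose RobotFileParser.crawl_delay() method (Python 3.6+) returns the parsed value. The following is only a minimal sketch; the user-agent string and URLs are illustrative placeholders.

import time
import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Crawl-delay: 10",
])

delay = rp.crawl_delay("*")           # 10 here; None if the directive is absent
for url in ["http://www.example.com/a.html", "http://www.example.com/b.html"]:
    if rp.can_fetch("MyCrawler", url):
        pass                          # fetch the page here
    if delay:
        time.sleep(delay)             # wait between successive requests to the same server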
Allow directive
Some major crawlers support an Allow directive, which can counteract a following Disallow directive.[7][8]
This is useful when one tells robots to avoid an entire directory but still wants some HTML documents in that
directory crawled and indexed. While under the standard implementation the first matching robots.txt pattern always wins,
Google's implementation differs in that Allow patterns with an equal or greater number of characters in the directive path win over a
matching Disallow pattern.[9] Bing uses whichever of the Allow or Disallow directives is more specific, based on length.[10]
To be compatible with all robots, if one wants to allow single files inside an otherwise disallowed directory, it
is necessary to place the Allow directive(s) first, followed by the Disallow, for example:
Allow: /folder1/myfile.html
Disallow: /folder1/
This example will Disallow anything in /folder1/ except /folder1/myfile.html, since the latter will match first. In the case
of Google, however, the order is not important.
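The difference between the two precedence rules can be made concrete with a short sketch. The Python snippet below is illustrative only, not any crawler's actual code; it applies both rules to the example above.

rules = [("Allow", "/folder1/myfile.html"), ("Disallow", "/folder1/")]

def first_match_wins(path):
    # Standard behaviour: scan in file order; the first matching pattern decides.
    for directive, prefix in rules:
        if path.startswith(prefix):
            return directive == "Allow"
    return True                                    # no rule matched: crawling is allowed

def longest_match_wins(path):
    # Google-style behaviour: the matching pattern with the most characters decides;
    # on a tie in length, Allow wins (equal or more characters, as noted above).
    matches = [(len(prefix), directive == "Allow") for directive, prefix in rules
               if path.startswith(prefix)]
    return max(matches)[1] if matches else True

print(first_match_wins("/folder1/myfile.html"))    # True: the Allow line matches first
print(longest_match_wins("/folder1/myfile.html"))  # True: the Allow pattern is longer
print(first_match_wins("/folder1/other.html"))     # False: only the Disallow line matches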
Sitemap
Some crawlers support a Sitemap directive, allowing multiple Sitemaps in the same robots.txt in the form:[11]
Sitemap: http://www.gstatic.com/s2/sitemaps/profiles-sitemap.xml
Sitemap: http://www.google.com/hostednews/sitemap_index.xml
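Python's standard urllib.robotparser exposes these entries through RobotFileParser.site_maps(), available since Python 3.8. A minimal sketch, reusing the two URLs from the example above:

import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
rp.parse([
    "Sitemap: http://www.gstatic.com/s2/sitemaps/profiles-sitemap.xml",
    "Sitemap: http://www.google.com/hostednews/sitemap_index.xml",
])

print(rp.site_maps())
# ['http://www.gstatic.com/s2/sitemaps/profiles-sitemap.xml',
#  'http://www.google.com/hostednews/sitemap_index.xml']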
Universal "*" match
The Robot Exclusion Standard does not mention the "*" character in the Disallow: statement.
Some crawlers, such as Googlebot and Slurp, recognize strings containing "*", while MSNbot and Teoma interpret it in
different ways.[12]
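The wildcard behaviour applied by crawlers such as Googlebot and Slurp is commonly described as "*" matching any sequence of characters, with "$" anchoring the end of the URL path. The sketch below translates such a pattern into a regular expression; it illustrates that common interpretation, not any crawler's actual implementation, and the /private/*.html$ pattern is a made-up example.

import re

def pattern_to_regex(pattern):
    # Treat a trailing "$" as an end anchor, escape everything else,
    # and turn each "*" into ".*".
    anchored = pattern.endswith("$")
    if anchored:
        pattern = pattern[:-1]
    body = ".*".join(re.escape(part) for part in pattern.split("*"))
    return re.compile("^" + body + ("$" if anchored else ""))

rule = pattern_to_regex("/private/*.html$")
print(bool(rule.match("/private/notes.html")))      # True
print(bool(rule.match("/private/notes.html?x=1")))  # False: "$" anchors the end
print(bool(rule.match("/private/img/photo.jpg")))   # False: does not end in .html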