Advanced Copyright Law on the Internet


1. Cases Adjudicating Caching Under the Fair Use and Implied License Doctrines


(a) Field v. Google

In Field v. Google,^1545 the plaintiff, Field, alleged that by allowing Internet users to access
copies of his copyrighted works stored by Google in its online cache, Google was violating his
exclusive rights to reproduce and distribute copies of those works. The court ruled that Google’s
acts were covered by the fair use and implied license doctrines.


The challenged acts arose in the context of Google’s search engine and its accompanying
Web crawler, the Googlebot. The Googlebot automatically and continuously crawled the
Internet to locate and analyze Web pages and to catalog those pages into Google’s searchable
Web index. As part of the process, Google made and analyzed a copy of each Web page the
Googlebot found and stored the HTML code from those pages in a cache so as to enable those
pages to be included in the search results displayed to users in response to search queries. When
Google displayed Web pages in its search results, the first item appearing was the title of a Web
page which, if clicked, would take the user to the online location of that page. The title was
followed by a short snippet of text from the Web page in a smaller font. Following the snippet,
Google typically provided the full URL for the page. Then, in the same smaller font, Google
often displayed another link labeled “Cached.” When clicked, the “Cached” link directed a user
to the archival copy of a Web page stored in Google’s system cache, rather than to the original
Web site for that page. By clicking on the “Cached” link for a page, a user could view the
snapshot of that page as it appeared the last time the site was visited and analyzed by the
Googlebot.^1546


The court noted that Google provided “Cached” links for three principal reasons – to
allow viewing of archival copies of pages that had become inaccessible because of transmission
problems or censorship, or because too many users were trying to access the content at a particular
time; to enable users to make Web page comparisons to determine how a particular page had
been altered over time; and to enable users to determine the relevance of a page by highlighting
where the user’s search terms appeared on the cached copy of the page.^1547


Of particular relevance to the court’s rulings were certain widely recognized and well-publicized
standard protocols that the Internet industry had developed by which Web site owners
could automatically communicate their preferences to search engines such as Google. The first
mechanism was the placement of meta-tags within the HTML code comprising a given page to
instruct automated crawlers and robots whether or not the page should be indexed or cached. For
example, a “NOINDEX” tag would indicate an instruction that the Web page in which it was
embedded should not be indexed into a search engine, and a “NOARCHIVE” tag would indicate
an instruction that the page should not be cached.
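
Although the court described these protocols only in general terms, their operation can be
illustrated concretely. The following sketch (not Google’s actual implementation; the class and
function names are hypothetical, and only the meta-tag syntax itself reflects the standard protocol
discussed by the court) shows how a crawler might parse a page’s robots meta-tags and decide
whether the page owner has opted out of indexing or caching:

# Illustrative sketch only: parse robots meta-tags (e.g. NOINDEX, NOARCHIVE)
# from a page's HTML and report the page owner's indexing/caching preferences.
from html.parser import HTMLParser


class RobotsMetaParser(HTMLParser):
    """Collects robots directives (e.g. NOINDEX, NOARCHIVE) from a page's HTML."""

    def __init__(self):
        super().__init__()
        self.directives = set()

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        # Directives may appear in a generic "robots" tag or in a tag addressed
        # to a particular crawler (e.g. "googlebot").
        if (attrs.get("name") or "").lower() in ("robots", "googlebot"):
            for token in (attrs.get("content") or "").lower().split(","):
                self.directives.add(token.strip())


def crawl_policy(html_source: str) -> dict:
    """Return whether the page owner has opted out of indexing or caching."""
    parser = RobotsMetaParser()
    parser.feed(html_source)
    return {
        "may_index": "noindex" not in parser.directives,
        "may_cache": "noarchive" not in parser.directives,
    }


if __name__ == "__main__":
    page = '<html><head><meta name="robots" content="NOARCHIVE"></head></html>'
    print(crawl_policy(page))  # {'may_index': True, 'may_cache': False}

In this sketch, a page carrying a “NOARCHIVE” directive would still be indexed but would not
receive a “Cached” link, which is the kind of automated communication of preferences the court
found significant.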


(^1545) 412 F. Supp. 2d 1106 (D. Nev. 2006).
(^1546) Id. at 1110-11.
(^1547) Id. at 1111-12.
