Wayback Machine

Improbable History

The Wayback Machine is a digital archive of the World Wide Web and other information on the Internet, created by the Internet Archive, a non-profit organization based in San Francisco, California. It was set up by Brewster Kahle and Bruce Gilliat, co-founders of Alexa Internet (a provider of commercial web traffic data, now a subsidiary of Amazon.com), and is maintained with content from Alexa. The service enables users to see archived versions of web pages across time, which the Archive calls a ‘three-dimensional index.’

The name Wayback Machine was chosen as a droll reference to a plot device in an animated cartoon series, ‘The Rocky and Bullwinkle Show.’ In it, Mr. Peabody and Sherman routinely used a time machine called the ‘WABAC machine’ (pronounced ‘Wayback’) to witness, participate in, and, more often than not, alter famous events in history.

In 1996 Kahle, with Gilliat, developed software to crawl and download all publicly accessible World Wide Web pages, the Gopher hierarchy, the Netnews bulletin board system, and downloadable software. The information collected by these ‘crawlers’ does not include all the information available on the Internet, since much of the data is restricted by the publisher or stored in databases that are not accessible. The crawlers also respect the robots exclusion standard for websites whose owners opt not to appear in search results or be cached. To overcome inconsistencies in partially cached websites, the Internet Archive developed Archive-It.org in 2005 as a means of allowing institutions and content creators to voluntarily harvest and preserve collections of digital content as digital archives.
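The exclusion mechanism itself is nothing more than a plain-text robots.txt file at the root of a site. As a minimal sketch, assuming the ia_archiver user-agent token the Archive’s crawler has historically honored, a site owner who wanted to stay out of the Wayback Machine would publish something like:

    User-agent: ia_archiver
    Disallow: /

Any path listed under Disallow is off-limits to that crawler; a bare ‘/’ blocks the entire site.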

Information was kept on digital tape for five years, with Kahle occasionally allowing researchers and scientists to tap into the clunky database. When the archive reached its five-year anniversary, it was unveiled and opened to the public in a ceremony at the University of California, Berkeley. Snapshots usually become available more than six months after they are archived, and in some cases considerably later, 24 months or more. The frequency of snapshots is variable, so not all tracked website updates are recorded, and there are sometimes intervals ranging from several weeks to several years between snapshots.
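Whether a snapshot exists for a given page and date can be checked programmatically. A minimal Python sketch, assuming the Archive’s current JSON availability endpoint at https://archive.org/wayback/available (a later addition to the service, not part of the original system described here):

    import json
    import urllib.parse
    import urllib.request

    def closest_snapshot(url, timestamp=None):
        """Return the archived snapshot closest to a YYYYMMDD timestamp,
        or the most recent one if no timestamp is given."""
        params = {"url": url}
        if timestamp:
            params["timestamp"] = timestamp
        query = urllib.parse.urlencode(params)
        with urllib.request.urlopen(
                "https://archive.org/wayback/available?" + query) as resp:
            data = json.load(resp)
        # 'closest' is absent when nothing has been archived for the URL
        return data.get("archived_snapshots", {}).get("closest")

    snap = closest_snapshot("example.com", "20060101")
    if snap:
        print(snap["timestamp"], snap["url"])

The response reports the capture time of the nearest snapshot, which makes the irregular intervals between captures directly visible.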

After August 2008, sites had to be listed on the Open Directory (a hierarchical ontology scheme for organizing site listings) in order to be included. According to Jeff Kaplan of the Internet Archive in November 2010, other sites were still being archived, but more recent captures would only become visible after the next major indexing, an infrequent operation. As of 2009 the Wayback Machine contained approximately three petabytes of data and was growing at a rate of 100 terabytes per month; the growth rate reported in 2003 was 12 terabytes per month. The data is stored on PetaBox rack systems manufactured by Capricorn Technologies.

In 2009 the Internet Archive migrated its customized storage architecture to Sun Open Storage and began hosting a new data center in a Sun Modular Datacenter on Sun Microsystems’ California campus. In 2011 a new, improved version of the Wayback Machine, with an updated interface and a fresher index of archived content, was made available for public testing. In January 2013 the Archive announced a milestone of 240 billion archived URLs.

In the 2009 case Netbula, LLC v. Chordiant Software Inc., defendant Chordiant filed a motion to compel Netbula to disable the robots.txt file on its website, which was causing the Wayback Machine to retroactively remove access to previous versions of pages it had archived from Netbula’s site, pages Chordiant believed would support its case. Netbula objected to the motion on the ground that the defendants were asking to alter Netbula’s website and that they should have subpoenaed Internet Archive for the pages directly. However, an employee of Internet Archive filed a sworn statement supporting Chordiant’s motion, stating that the Archive could not produce the web pages by any other means ‘without considerable burden, expense and disruption to its operations.’ Magistrate Judge Howard Lloyd in the Northern District of California, San Jose Division, rejected Netbula’s arguments and ordered it to temporarily disable the robots.txt blockage so that Chordiant could retrieve the archived pages it sought.

The United States Patent and Trademark Office and the European Patent Office will accept date stamps from the Internet Archive as evidence of when a given Web page was accessible to the public, provided some additional requirements are met (e.g., an authoritative statement from the archivist). These dates are used to determine whether a Web page is available as prior art, for instance when examining a patent application.
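The date stamp in question is embedded directly in each snapshot’s URL as a 14-digit YYYYMMDDhhmmss capture time. For example (a hypothetical capture, shown only to illustrate the format):

    https://web.archive.org/web/20090615083000/http://www.example.com/

would identify a copy of www.example.com captured on June 15, 2009 at 08:30:00 GMT.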

There are technical limitations to archiving a website, and as a consequence it is possible for opposing parties in litigation to misuse the results provided by website archives. This problem can be exacerbated by the practice of submitting screenshots of web pages in complaints, answers, or expert witness reports: when the underlying links are not exposed, the screenshots cannot be verified and can therefore contain errors. For example, archives like the Wayback Machine do not fill out forms, and therefore do not include the contents of non-RESTful e-commerce databases in their archives.

In Europe the Wayback Machine could be interpreted to violate copyright laws: only the content creator can decide where their content is published or duplicated, so the Archive would have to delete pages from its system upon request of the creator. The exclusion policies for the Wayback Machine can be found in the FAQ section of the site. The Wayback Machine also retroactively respects robots.txt files; that is, pages that are currently blocked to robots on the live web are made temporarily unavailable in the archives as well.

A number of cases have been brought against the Internet Archive specifically for its Wayback Machine archiving efforts. In late 2002, the Internet Archive removed various sites critical of Scientology from the Wayback Machine. The error message stated that this was in response to a ‘request by the site owner.’ It was later clarified that lawyers from the Church of Scientology had demanded the removal and that the actual site owners did not want their material removed.

Robots.txt is used as part of the Robots Exclusion Standard, a voluntary protocol the Internet Archive respects that disallows bots from indexing certain pages designated by the creator as off-limits. As a result, a number of websites are now inaccessible through the Wayback Machine. Currently, the Internet Archive applies robots.txt rules retroactively: if a site blocks the Internet Archive, any previously archived pages from the domain are also rendered unavailable. In cases of blocked sites, only the robots.txt file itself is archived.

However, the Internet Archive also states, ‘Sometimes a web site owner will contact us directly and ask us to stop crawling or archiving a site. We comply with these requests.’ In addition, the website says: ‘The Internet Archive is not interested in preserving or offering access to Web sites or other Internet documents of persons who do not want their materials in the collection.’
