archiver: Archive supplied URLs in WebCite & Internet Archive

[ bsd3, documentation, library, network, program ]

archiver is a daemon which processes a specified text file, each line of which is a URL, and requests, one at a time in random order, that each URL be archived or spidered by http://www.webcitation.org, http://www.archive.org, and http://www.wikiwix.com for future reference. (One may optionally specify an arbitrary sh command, such as wget, to download URLs locally.)
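
A minimal sketch of the core loop, assuming a file named urls.txt and the Internet Archive's save endpoint (the real daemon also submits to WebCite and Wikiwix and removes lines from the file as it processes them):

    import Control.Concurrent (threadDelay)
    import Control.Monad (forM_)
    import Network.HTTP (getRequest, simpleHTTP)

    -- Illustrative sketch only, not the package's actual implementation:
    -- read a file of URLs (one per line) and ask the Internet Archive to
    -- snapshot each one, pausing between requests. The file name, endpoint,
    -- and delay are assumptions for the example.
    main :: IO ()
    main = do
      urls <- fmap lines (readFile "urls.txt")
      forM_ urls $ \u -> do
        _ <- simpleHTTP (getRequest ("http://web.archive.org/save/" ++ u))
        threadDelay (20 * 1000000)   -- microseconds: wait ~20s between requests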

Because the interface is a simple text file, archiver can be combined with other scripts: for example, a script using SQLite to extract visited URLs from Firefox, or a program extracting URLs from Pandoc documents. (See http://www.gwern.net/Archiving%20URLs.)
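
For instance, a hypothetical companion script for the Firefox case might look like the following. It uses the sqlite-simple library (not a dependency of this package) and assumed file names, purely for illustration:

    {-# LANGUAGE OverloadedStrings #-}
    import qualified Data.Text    as T
    import qualified Data.Text.IO as TIO
    import Database.SQLite.Simple (Only (..), close, open, query_)

    -- Pull visited URLs out of a copy of Firefox's history database and
    -- append them to the text file archiver watches.
    main :: IO ()
    main = do
      conn <- open "places.sqlite"   -- a copy of Firefox's history database
      rows <- query_ conn "SELECT url FROM moz_places" :: IO [Only T.Text]
      close conn
      TIO.appendFile "urls.txt" (T.unlines (map fromOnly rows))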

For an explanation of the derivation of the code in Network.URL.Archiver, see http://www.gwern.net/haskell/Wikipedia%20Archive%20Bot.

Modules

  • Network.URL.Archiver

Versions: 0.1, 0.2, 0.3, 0.3.1, 0.4, 0.5, 0.5.1, 0.6.0, 0.6.1, 0.6.2, 0.6.2.1
Dependencies: base (>=4 && <5), bytestring, containers, curl, HTTP, network, process, random
Tested with: ghc ==6.12.1
License: BSD-3-Clause
Author: Gwern
Maintainer: Gwern <gwern@gwern.net>
Category: Documentation, Network
Source repo (head): git clone git://github.com/gwern/archiver-bot.git
Uploaded by: GwernBranwen at 2014-01-03T19:44:15Z
Reverse dependencies: 1 direct, 0 indirect
Executables: archiver
Downloads: 9231 total (29 in the last 30 days)
Status: Docs available; successful builds reported