Save Our Stories
November 30, 2017 12:10 PM   Subscribe

See also: an "archive this thing now" service that captures webpages in various ways, but doesn't get everything (per their FAQ); it also allows searching site archives with wildcards.

Another (automated) archival tool, but one built for third parties to review changes to news stories; it was born of the Knight Mozilla MIT hackathon on June 17, 2012 (per its About page).
posted by filthy light thief at 12:49 PM on November 30, 2017

I have text files full of obscure links; it'd be nice if I could upload them in bulk. Can't find a definite "schedule this URL for indexing" API, though.
posted by RobotVoodooPower at 1:06 PM on November 30, 2017
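There doesn't seem to be an official bulk API, but the Wayback Machine does accept single-page submissions at `https://web.archive.org/save/<url>` ("Save Page Now"), so a small shell loop can fake a bulk upload. A minimal sketch, assuming that endpoint still behaves this way; `archive_links` is a made-up helper name, not anything official:

```shell
# archive_links FILE — pull every http(s) URL out of FILE and submit
# each one to the Wayback Machine's Save Page Now endpoint.
archive_links() {
  grep -Eo 'https?://[^ ]+' "$1" | while read -r url; do
    echo "archiving $url"
    curl -s -o /dev/null "https://web.archive.org/save/$url"
    sleep 5   # be polite and space out requests between submissions
  done
}
```

Usage would be `archive_links links.txt`. The `sleep` is there because rapid-fire submissions are a good way to get rate-limited.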

See also, provided you pay $25 per year for the archiving option (previously)
posted by TwoToneRow at 1:15 PM on November 30, 2017

I'm curious if ipfs offers a solution to this. It's a pretty cool technology, designed to make it easy to cache files on multiple machines and keep them accessible. It'd be cool to just have a "Mirror on IPFS" extension.
posted by ikea_femme at 2:00 PM on November 30, 2017

IPFS seems architecturally similar to Freenet, which I always thought was neat, but which rather predictably turned into a cesspool, as all uncensored corners of the Internet tend to do. Curious how they curtail the sort of behavior that always made people so reluctant to run FN nodes.
posted by Kadin2048 at 3:38 PM on November 30, 2017

Ok, here's one way to archive all the links in a text file, I think:
egrep -o "http[s]?://([^ ]+)" links.txt | xargs -I {} curl "https://web.archive.org/save/{}"
posted by RobotVoodooPower at 3:58 PM on November 30, 2017 [2 favorites]
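That one-liner submits URLs one at a time; for a long file, `xargs -P` can run a few submissions in parallel. A sketch along the same lines, assuming the Wayback Machine's Save Page Now endpoint at `https://web.archive.org/save/` (`save_parallel` is just an illustrative name):

```shell
# save_parallel FILE — like the one-liner above, but submits up to
# four URLs at a time; -n 1 hands one URL to each curl invocation.
save_parallel() {
  egrep -o "http[s]?://([^ ]+)" "$1" |
    sed 's|^|https://web.archive.org/save/|' |
    xargs -n 1 -P 4 curl -s -o /dev/null
}
```

Note that `-P` isn't POSIX, but both GNU and BSD xargs support it.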

This thread has been archived and is closed to new comments