With everything that’s happening there, I was wondering if it was possible. Obviously their size is massive, but I’m sure there’s a ton of duplicated stuff. Also, some things are more important to preserve than others, and some things are preserved elsewhere (Anna’s Archive, Libgen, and Z-Lib come to mind as projects that could preserve books if the IA disappeared).
But how could things get archived from the IA (assuming it’s possible) on both a personal level (aka I want to grab a copy of that wayback snapshot) and on a more wide scale community level? Are there people already working on it? If not, what would be the best theoretical way to get started?
And what are the most important things in your opinion that should be prioritized if the IA was about to disappear and we only had so much time and storage to utilize?
The “Archive Team” tried this, but it failed; you can read about it here:
What are the conclusions of the research? Why was it shut down?
I mean, unless you’re sitting on an exabyte of spare storage you don’t know what to do with, it’s a pretty hefty undertaking.
I think that something like the internet archive – where the body of data is too large and important to store in one place – is where using a federated framework similar to Lemmy might make a lot of sense. What’s more, there are many different organisations which have the incentive to archive their own little slice of the internet (but not those of others), and a federated model would help in linking these up into one easily navigable, and inherently crowd-funded, whole.
Why federated and not just regular p2p?
The Internet Archive already supports torrents.
It’s BIG. Could be great to see some different teams tackle different issues.
For example, a transcode team to tag and convert different media to the latest efficient formats might save a lot of space.
And, e.g., voice-only recordings could be encoded at much lower bitrates than music, etc.
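That split could be sketched roughly like this. A hypothetical helper that builds an ffmpeg command, choosing an Opus bitrate by content type; the 24k/96k values and filenames are just illustrative, not tuned recommendations:

```python
# Hypothetical helper: build an ffmpeg command line that picks an Opus
# bitrate based on content type. Voice tolerates far lower bitrates than
# music, so voice-only material gets a much smaller target.
def transcode_cmd(src: str, dst: str, voice_only: bool = False) -> list[str]:
    bitrate = "24k" if voice_only else "96k"  # illustrative values
    return ["ffmpeg", "-i", src, "-c:a", "libopus", "-b:a", bitrate, dst]

# A tagging pass would decide voice_only per item, then hand off to ffmpeg.
print(transcode_cmd("lecture.wav", "lecture.opus", voice_only=True))
```

This only builds the command; actually running it assumes an ffmpeg install with libopus enabled.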
Also, some methods for diffing snapshots, or some compromise on snapshot storage where near-identical captures are stored as deltas? Not ideal, but it might be enough to get across the line.
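A toy version of the diffing idea using Python’s difflib, with made-up snapshot contents (a real system would diff WARC records, but the size argument is the same):

```python
import difflib

# Two made-up captures of the same page, one day apart; only one line changed.
old = ["<h1>News</h1>\n"] + [f"<p>Story {i}.</p>\n" for i in range(50)]
new = list(old)
new[3] = "<p>Story 2 (updated).</p>\n"

# Keep the first capture in full, then store later captures as unified diffs.
delta = "".join(difflib.unified_diff(old, new, fromfile="day1", tofile="day2"))

# The delta is a small fraction of the size of a second full copy.
print(len(delta), "bytes of diff vs", len("".join(new)), "bytes of snapshot")
```

The trade-off: restoring any given day means replaying the chain of deltas, so you’d still want periodic full snapshots as checkpoints.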
Re: the “most important” things, aside from specific items or archives, IMO a crucial role would be text-only snapshots of most of the web. That would help increase accountability among modern media outlets, journalists, etc.
As others have and will say, it’s an enormous body of content. And this has sparked a shower thought.
What about not trying to be a full, perfect backup, but instead a “best effort”/“better than no backup at all” shoestring budget backup? What about triage backup? What about stripped-down markup? What about lossy text compression?
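A rough sketch of the stripped-down-markup idea using only the Python standard library (the sample page is made up). The lossy step is throwing away tags, scripts, and styles; what survives is the text a reader actually cares about:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Keeps only visible text content, dropping tags, scripts, and styles."""
    def __init__(self):
        super().__init__()
        self.parts = []
        self.skip = False

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self.skip = True

    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self.skip = False

    def handle_data(self, data):
        if not self.skip and data.strip():
            self.parts.append(data.strip())

# Made-up captured page standing in for a real crawl.
page = ("<html><head><style>body{color:red}</style></head>"
        "<body><h1>Title</h1><p>Some article text.</p>"
        "<script>var x=1;</script></body></html>")

extractor = TextExtractor()
extractor.feed(page)
text = "\n".join(extractor.parts)

print(len(text), "bytes of text vs", len(page), "bytes of markup")
```

On modern pages, where markup and JavaScript dwarf the article body, this kind of triage backup could cut storage by orders of magnitude while keeping the part that matters for accountability.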
Archive Team looked at this about 10 years ago and found it basically impossible. It was around 14 petabytes of information to fetch, organize, and distribute at the time.
just torrent everything and create little p2p servers :P