Internet Archive Services are "temporarily offline"

(archive.org)

101 points | by pushedx 2 days ago

33 comments

  • timonoko a day ago

    What we really need right now is a "black hole" of information. A place where you can push stuff, but retrieving it is impossible until ironclad legitimation can be automated.

    Insane that the only examples of me ever doing anything are in printed copies from the 1970s, in the National Archives, where some aunties still believe the Internet is just a passing fad.

  • userbinator 2 days ago

    This incident brings up a good point: Who archives the archives?

    • divbzero 2 days ago

      There have been collaborative computing projects like SETI@home [1] and Folding@Home [2] where unused computing power could be used for productive purposes. Could there be something similar for storage? Software that provides unused storage for Internet archiving? In the best case scenario, we could have redundant backups of the Internet Archive distributed around the world.

      [1]: https://setiathome.berkeley.edu/

      [2]: https://foldingathome.org/
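
      As a very rough Python sketch of such a storage-donation client, assuming a hypothetical coordinator hands each volunteer a list of item identifiers and the volunteer sets a storage budget (the metadata and download endpoints are archive.org's real public ones; everything else here is made up):

          # Hypothetical storage-donation client: mirror a few Internet
          # Archive items onto a volunteer's spare disk. ITEMS and the
          # budget are placeholder examples, not a real assignment feed.
          import os
          import requests

          ITEMS = ["example-item-1", "example-item-2"]  # from a coordinator
          BUDGET_BYTES = 10 * 1024**3                   # donate up to 10 GiB
          DEST = os.path.expanduser("~/ia-mirror")

          os.makedirs(DEST, exist_ok=True)
          used = 0
          for item in ITEMS:
              # archive.org's metadata API lists every file in an item
              meta = requests.get(f"https://archive.org/metadata/{item}").json()
              for f in meta.get("files", []):
                  size = int(f.get("size", 0))
                  if used + size > BUDGET_BYTES:
                      break
                  url = f"https://archive.org/download/{item}/{f['name']}"
                  r = requests.get(url, stream=True)
                  name = f"{item}-{f['name'].replace('/', '_')}"
                  with open(os.path.join(DEST, name), "wb") as out:
                      for chunk in r.iter_content(1 << 20):
                          out.write(chunk)
                  used += size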

      • boomlinde 2 days ago

        Perhaps torrents?

        archive.org does use torrents, and I have one such torrent lying around in my client, which occasionally connects to peers although the trackers are currently offline. I suppose a new client would find me and other peers through the DHT. I'd share a magnet link for someone to try, but it's a copyright-ignoring ROM dump archive, so it may not be the best idea to post it here.

        It's interesting that torrents may not be the first thing that comes to mind. They have the "PR issue" of being the now seemingly mundane way we've been downloading DVD rips for the last 20-odd years. Newer technology like IPFS does a better job of making the cool core of this technology actually sound cool.
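
        Every public archive.org item also serves an auto-generated torrent under its download path, so "donating a torrent client" can start as simply as fetching and seeding one of those. A minimal Python sketch, with "some-item" as a placeholder identifier (the _archive.torrent naming is archive.org's convention):

            # Fetch the auto-generated torrent for an archive.org item so it
            # can be seeded from any ordinary BitTorrent client.
            import requests

            identifier = "some-item"  # placeholder item identifier
            url = f"https://archive.org/download/{identifier}/{identifier}_archive.torrent"
            resp = requests.get(url)
            resp.raise_for_status()
            with open(f"{identifier}.torrent", "wb") as out:
                out.write(resp.content)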

        • divbzero a day ago

          I didn’t know that archive.org already has torrents. I guess what we would need, on top of that, is a system for assigning those torrents to new peers.

          • boomlinde 20 hours ago

            That already exists. Peers find each other through a distributed hash table which can be bootstrapped from a variety of sources.

            I would say the problem is discoverability and actual deployment.

            For suddenly popular files it can above all be a way to donate bandwidth, because then there might suddenly be a lot of peers. For the vast majority of files, however, there won't be any other peers, and they'll have to be web-seeded by Archive.org either way.

            Then there's the discoverability problem. Ultimately you need something like a magnet link to connect to a swarm.
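
            For illustration: given a swarm's infohash (the hex string below is a placeholder), building the magnet link is mechanical, and a web-seed parameter can point lone peers back at archive.org:

                # Build a magnet link from a known infohash. The hash and the
                # item name are placeholders; "ws" adds a web-seed URL.
                from urllib.parse import quote

                infohash = "0123456789abcdef0123456789abcdef01234567"  # placeholder
                name = "some-item"                                     # placeholder
                webseed = f"https://archive.org/download/{name}/"

                magnet = (
                    f"magnet:?xt=urn:btih:{infohash}"
                    f"&dn={quote(name)}"
                    f"&ws={quote(webseed, safe='')}"
                )
                print(magnet)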

      • binaryroof 2 days ago

        That (to an extent) is the vision behind IPFS: https://ipfs.tech/

        • hypercube33 2 days ago

          IPFS on the tin seems pretty awesome, but when I attempted to dig into it for an hour I still had no idea how to actually do anything with it. Its usability needs to go a long way before I give it another go. In my experience it's definitely not a two-step process where you download a client and click on a link to start load-sharing an archive.
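
          For concreteness, even the "simple" path assumes a kubo (go-ipfs) daemon is already installed, initialized, and running; only then does helping host something reduce to one call against its local RPC API. A sketch assuming the default port and a placeholder CID:

              # Pin a CID on a locally running kubo/go-ipfs daemon, making
              # this node help host that content. Assumes the default RPC
              # port 5001; the CID below is a placeholder.
              import requests

              API = "http://127.0.0.1:5001/api/v0"
              cid = "bafyexampleplaceholdercid"

              resp = requests.post(f"{API}/pin/add", params={"arg": cid})
              resp.raise_for_status()
              print(resp.json())  # lists the newly pinned CID(s)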

      • odo1242 2 days ago

        There is currently an ArchiveTeam effort going on.

    • JKCalhoun 2 days ago

      r/DataHoarder

      (or r/archiveteam ?)

      Personally, I have archived a few of the magazine collections.

      • notpushkin 2 days ago
        • chambers 2 days ago

          > The INTERNETARCHIVE.BAK experiment has come to a close a number of years ago.

          > Much was learned in the process, and many thanks are given to the dozens of people who donated time, space and coding efforts to make the system work as long as it did. A number of useful facts and observations came from the project.

          > The Internet Archive continues to explore methods and code to decentralize the collection, to have a mirror running in various ways - these include IPFS, FileCoin, and others. The INTERNETARCHIVE.BAK project also added general mirroring and tracking code to a number of projects that are still in use.

          IA called this their Postmortem, but it sounds... intentionally opaque. Also, I'm not sure if this website is affiliated with archive.org, since they say at the bottom of their homepage:

          > Archive Team is in no way affiliated with the fine folks at ARCHIVE.ORG Archive Team can always be reached by e-mail at archiveteam@archiveteam.org or by IRC at the channel #archiveteam (on hackint).

          • notpushkin 2 days ago

            Yeah, it’s a completely [1] separate team (though they do run a bunch of archiving projects that end up in the IA / Wayback Machine). Just wanted to share – it’s sad there isn’t much more info apart from some code, though; maybe worth looking into the IRC logs?

            [1]: On paper, at least; the founder, Jason Scott, seems pretty involved with the IA as well, and I’m not really sure how much the teams intersect.

            • textfiles 2 days ago

              The co-founder, Jason Scott, retired from Archive Team years ago and stays around as a cheerleader and advisor. He is employed by the Internet Archive.

              • notpushkin 2 days ago

                Must be a busy guy, fancy seeing him here. (Thanks for all the great work!)

    • Sakos 2 days ago

      I really wish the EU had its own organisation for creating an internet archive that, at the very minimum, mirrored the IA. This is our history, and there's now only a single place that has any significant archive of it. It seems like the EU should have a significant interest in preserving it for generations to come.

  • keepamovin 2 days ago

    How vulnerable is the IA to some malicious actor who wanted to rewrite history or run an 'information cleansing' operation?

    - take offline

    - purge 'problematic' archives

    - return to service

    Is that impossible? Are there redundancies that make this very hard?

    • cookiengineer 2 days ago

      Don't give the SVR any ideas, man.

      The problem that multi-generational projects like this always have is tech debt. Any library/dependency chosen by the previous generation might go unmaintained for decades until it falls through the cracks and someone notices.

      Heritrix, for example, was written in a very old "Java way" of doing things. They also have lots of services that were built in the PHP4 age, with globals by default and stuff like that.

      Always keep in mind that whatever you choose is essentially a bet. Over time you'll realize that different language ecosystems have goals that are aligned or misaligned with your project's. Don't choose libraries because of hype; choose them for maintainability.

      • Apocryphon 2 days ago

        I dunno about the state-actor hypothesis, but if there is one, it all sounds like Charles Stross's description of a future cold war in Halting State:

        > "And that's the twentieth-century model, what they used to call an electronic Pearl Habour. Things have moved on since then. Footnotes inserted in government reports feeding into World Trade Organization negotiating positions. Nothing we'd notice at first, nothing that would be obvious for a couple of years. You don't want to halt the state in its tracks, you simply want to divert it into a sliding of your choice."

        Who knows what will appear after the archives are restored?

      • keepamovin 2 days ago

        Hah! As if they need ideas. But that's not the point: how possible is it?

        Re your comprehensive edit: I'm totally on board with that tech-choice idea. It's a bet; avoid the fads and pick stuff that's robust (or at least a fit for your possible futures).

        • cookiengineer 2 days ago

          I'd say we have to differentiate between human error as an attack surface and software bugs / vulnerabilities as an attack surface here.

          Software-wise I wouldn't know where to start, honestly, because the Internet Archive as a project is so vast [1] that it's hard to get an architectural overview of how the pieces are glued together. Unifying the tech stack seems to have been no concern at all in its development...

          But from a pentesting perspective I'd try to find vulnerabilities in the Perl-based services first, then Java, then PHP, then npm, and so on... because older projects tend to have a higher likelihood of being unmaintained or using outdated libraries.

          [1] (~242 public repositories) https://github.com/orgs/internetarchive/repositories
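
          A rough way to size that up is a language census of the org's public repos via GitHub's REST API ("language" is GitHub's primary-language guess; unauthenticated requests are rate-limited):

              # Count primary languages across the internetarchive GitHub org
              # to see how much surface area sits in older ecosystems.
              from collections import Counter
              import requests

              counts = Counter()
              page = 1
              while True:
                  resp = requests.get(
                      "https://api.github.com/orgs/internetarchive/repos",
                      params={"per_page": 100, "page": page},
                  )
                  resp.raise_for_status()
                  repos = resp.json()
                  if not repos:
                      break
                  counts.update(r["language"] or "unknown" for r in repos)
                  page += 1

              for lang, n in counts.most_common():
                  print(f"{lang}: {n}")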

    • emmelaich 2 days ago

      I hope that Google (for instance) has an occasional snapshot of everything tucked away on a tape in Norway or somewhere, like the seed bank.

    • bubblesnort 2 days ago

      - openly speculate about the tactic, to preemptively address concerns

      • keepamovin 2 days ago

        Exactly! Red-team the situation to identify weaknesses, build defenses and devise overall strategy! :)

    • g-b-r 2 days ago

      Yeah, last time I checked they weren't doing any timestamping.

      They definitely should.
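
      Even a minimal scheme would help: hash each archived snapshot, then anchor the digest with an independent witness (an RFC 3161 timestamping authority, or OpenTimestamps). The hashing half is trivial; a Python sketch:

          # Compute a digest of an archived snapshot; the digest (not the
          # file) is what would be submitted to an external timestamping
          # service as tamper evidence.
          import hashlib
          import sys

          def snapshot_digest(path: str) -> str:
              h = hashlib.sha256()
              with open(path, "rb") as f:
                  for chunk in iter(lambda: f.read(1 << 20), b""):
                      h.update(chunk)
              return h.hexdigest()

          if __name__ == "__main__":
              print(snapshot_digest(sys.argv[1]))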

  • Apocryphon 2 days ago

    The timing of Google getting rid of Google Cache couldn't be worse, with these ongoing DDoS attacks on the Internet Archive and the hardening they've made necessary. Wonder what kind of twisty narrative one could posit about why this is happening?

  • jaredb3 a day ago

    Get the Internet Archive fixed and back online.

  • conormarcellus a day ago

    internet archive