I know of at least one US government web page that still references “Gulf of Mexico”, but I don’t want to link it, because I’m very curious to see how long it can fly under the radar. I have a thing set up that checks the page regularly and will alert me whenever it changes.
Is there a way to set up archive.org or something like that to save regular snapshots without risking drawing more attention to it?
Install ArchiveBox. Even if you don’t have a home server or VPS, you can run it on your regular PC - it’s just a Docker container, or if you don’t like Docker, you can run the Python code directly. http://archivebox.io/
That way, it’s under your full control, and you keep all the data.
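If it's useful, here's roughly what the Docker setup looks like. This is a minimal sketch from memory, so double-check the commands against the docs; the URL is a placeholder:

    mkdir archivebox && cd archivebox
    # create a new collection in the current directory
    docker run -v "$PWD:/data" -it archivebox/archivebox init --setup
    # snapshot the page (saves HTML, a PDF, a screenshot, etc.)
    docker run -v "$PWD:/data" -it archivebox/archivebox add 'https://example.gov/some-page'
    # optional: browse the archive in a web UI
    docker run -v "$PWD:/data" -p 8000:8000 archivebox/archivebox server 0.0.0.0:8000

Re-running the add command on a schedule gives you timestamped snapshots, and I believe it also ships a schedule subcommand for exactly that.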
For tracking changes to sites, changedetection.io is free and open source if you self-host it; only their remotely hosted version costs money.
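Self-hosting it is basically one command if you already have Docker. A sketch, with the image name and port from memory, so verify before relying on it:

    docker run -d --restart always \
      -p 5000:5000 \
      -v changedetection-data:/datastore \
      dgtlmoon/changedetection.io

Then add the page to watch in the web UI at http://localhost:5000 and point its notifications wherever you like.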
Use archive.is and manually save it every hour?
archive.org is subject to takedown requests, so it's pointless.
Set up a script to use wget to grab a copy of the page every six hours or so?
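A minimal crontab sketch, assuming the URL and output path are placeholders (note that % must be escaped in crontab entries):

    # fetch a timestamped copy every six hours (fields: min hour dom mon dow)
    0 */6 * * * wget -q -O "$HOME/snapshots/page-$(date +\%Y\%m\%d-\%H\%M).html" 'https://example.gov/some-page'

Diffing consecutive files with diff or git would then show exactly when the wording changes.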