Why

Digital preservation is a losing battle. Current archiving initiatives are curated by state-affiliated organisations with a selection bias against low culture. Additionally, preservation web crawling at scale cannot capture the complex, dynamic web technologies that low media is often hosted on: this content is gated, short-lived and amorphously located.

A two-birds-one-stone approach is to encourage the widespread creation of personal archives using standardised preservation tools. Targeted amateur web archivists can scrape corners of the internet that large crawls can't find, or work around technological limitations with guerrilla archiving methods, preserving individual collections to build a more representative sample of internet topology.

How

1. Fork the GitHub repo of my project.
2. Run through the project's README to set up dependencies.
3. Collect media of any mimetype that you consider valuable. Aim to preserve it in its original context. Instagram content is a good example: try to crawl it using Instaloader, but if your preservation tactic involves exporting JPGs and crawling each one as a single asset, that is just as good.
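Whatever tool collects the media, the asset is most useful to future archivists if its provenance travels with it. A minimal sketch of that idea, assuming a JSON-sidecar convention (the `write_sidecar` function and field names here are hypothetical, not part of the project's actual tooling):

```python
import hashlib
import json
import mimetypes
from datetime import datetime, timezone
from pathlib import Path


def write_sidecar(asset_path, source_url):
    """Record provenance for a collected asset in a JSON sidecar file.

    Stores the source URL, retrieval time, guessed mimetype, size and a
    SHA-256 checksum next to the asset, e.g. cat.jpg -> cat.jpg.json.
    """
    asset = Path(asset_path)
    data = asset.read_bytes()
    mimetype, _ = mimetypes.guess_type(asset.name)
    record = {
        "source_url": source_url,
        "retrieved_at": datetime.now(timezone.utc).isoformat(),
        "mimetype": mimetype or "application/octet-stream",
        "sha256": hashlib.sha256(data).hexdigest(),
        "size_bytes": len(data),
    }
    sidecar = asset.with_suffix(asset.suffix + ".json")
    sidecar.write_text(json.dumps(record, indent=2))
    return sidecar
```

So even a hand-exported JPG crawled as a single asset keeps enough context (origin URL, checksum, capture date) to be verified and re-situated later, e.g. `write_sidecar("cat.jpg", "https://instagram.com/p/...")`.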