I'm archiving this version of the site, so I can rebuild it using Hugo (a static site generator, which should be perfect for me).
I'm doing this because I'm sick of manually updating things; I now have enough content that I need some automation. The second reason is to build experience with Hugo, because I intend to use it for another site in the future :)
I want to keep a backup of this version without hiding it behind some branch name or whatever. It always annoys me to find something important tucked away in a branch or tag.
The website is all handwritten HTML, CSS and JS.
Anything else is bloat, really (unless browser vendors finally replace JS, but that's not going to happen...)
"You might not need jQuery" - A great resource if you want to stop using jQuery (*cough* bloat)
I've added an extensions.json file; once you open the project in VS Code, a popup will ask you to install the recommended extensions.
This lets beginners get started faster (and informs them about interesting extensions like TabNine, for example).
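For reference, such a file is just a list of extension IDs under a `recommendations` key; a minimal sketch (the IDs below are illustrative, not necessarily what's in this repo):

```json
{
  "recommendations": [
    "TabNine.tabnine-vscode",
    "ritwickdey.LiveServer"
  ]
}
```

The file lives at `.vscode/extensions.json` in the project root.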
In alphabetical order.
You'll only see this page when you visit a URL that doesn't exist. What makes this 404 page better than most is that it has an archival-service lookup *and* a site search built in.
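The gist can be sketched like this (a hypothetical, simplified version; the element IDs and the choice of Wayback Machine plus DuckDuckGo are made up for the example):

```js
// Sketch: point the visitor at an archived copy of the missing URL,
// and at a site-restricted search for it.
const archiveLink = document.querySelector('#archive-link');
archiveLink.href = 'https://web.archive.org/web/*/' + location.href;

const searchLink = document.querySelector('#search-link');
searchLink.href = 'https://duckduckgo.com/?q=' +
  encodeURIComponent('site:thaumatorium.com ' + location.pathname);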
Exists for hosting purposes. Redirects thaumatorium.github.io to thaumatorium.com.
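A redirect page like that is typically just a meta refresh with a JS fallback; a minimal sketch of the idea (not necessarily the exact markup used here):

```html
<!DOCTYPE html>
<html>
  <head>
    <!-- Redirect immediately; the script is a fallback for odd browsers -->
    <meta http-equiv="refresh" content="0; url=https://thaumatorium.com/">
    <script>location.replace('https://thaumatorium.com/');</script>
  </head>
  <body>
    <a href="https://thaumatorium.com/">Moved to thaumatorium.com</a>
  </body>
</html>
```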
The front page of the site. It's the only file with a ton of comments, explaining why the HTML structure is the way it is.
Contains information for sites that want to become a Progressive Web App. I used to have this functionality, but had to remove it because it broke updating the site. As of writing, it only works on Chromium-based browsers.
Information source: https://www.w3.org/TR/appmanifest/ https://w3c.github.io/manifest/
Support level: https://caniuse.com/#feat=web-app-manifest
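For reference, a minimal manifest per the W3C spec linked above looks something like this (the names and icon path are illustrative):

```json
{
  "name": "Thaumatorium",
  "short_name": "Thaumatorium",
  "start_url": "/",
  "display": "standalone",
  "icons": [
    { "src": "/icon-192.png", "sizes": "192x192", "type": "image/png" }
  ]
}
```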
This used to start up the PWA functionality of the site, but now it functions as a general holder of code, as every HTML page links to this file.
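For the curious: that PWA startup is typically little more than a service worker registration, along these lines (a sketch assuming a sw.js at the site root, not the exact code that was removed):

```js
// Sketch: how a site typically bootstraps its PWA functionality
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js')
    .catch(err => console.error('Service worker registration failed:', err));
}
```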
A file that tells crawlers/bots what they can and can't access (currently they're allowed to crawl everything). Crawlers could of course just ignore it, but the big ones (Googlebot, Bingbot, DuckDuckBot, etc.) honor it.
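An "allow everything" robots.txt is about as simple as config files get; per the description above, it amounts to this (an empty Disallow means nothing is off-limits):

```text
User-agent: *
Disallow:
```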
See the sidebar!