Colophon

How this site is built/maintained

The short version is this...

This website is a portfolio, a blog (of sorts) and a digital garden.

A digital garden is primarily defined by having content that evolves and changes over time, rather than being produced and published at a fixed date. Information changes, gets rewritten as new information arrives, and is updated naturally. The majority of this site behaves like this.

Digital gardens are structured more like wikis than traditional top-down navigational sites. Clicking around and finding things is often the point: you notice unexpected connections. I don't push this too heavily on the website — I'm more interested in curating finished notes with whole ideas than in linking many tiny concepts or fragments.

This site is constructed from a motley mix of Markdown and my own static site builder/templating language Spindle.

Spindle is a very lean Odin program hosting a Lua runtime, which is where most of the program's core logic lives. Out of the box, Spindle is a static site builder, but its construction and extensibility mean it's really more of a static site builder-builder.

Obsidian

I mark Obsidian notes for publication in their frontmatter. A Bash script runs over the entire vault, grepping for notes with that value in them. The script uses sed to insert some common default metadata the website needs; all of those defaults are overridable by setting them in the original note. There are also a handful of decisions made about structure during this copy operation, for pretty-URL reasons, so this —

Section/Section.md (folder note)
Section/First Sub Page.md
Section/Second Sub Page.md

— becomes this on site —

section.html
section/first-sub-page.html
section/second-sub-page.html

I don't use index pages because Cloudflare Pages, my static host of choice, appends trailing slashes to the URLs of directory indices. I find this really annoying, so this structure guarantees consistency: every URL points at a plain page, never a directory index. It also means every host will treat them the same, and I don't have to think about it if and when I migrate.
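
Not the actual copy script (that one is Bash and sed), but the structural mapping from the example above amounts to something like this sketch, with hypothetical helper names:

-- hypothetical sketch of the pretty-URL mapping: lowercase, spaces to
-- hyphens, and folder notes hoisted up a level
local function slugify(name)
    return (name:lower():gsub("%s+", "-"))
end

local function output_path(path)
    local dir, file = path:match("^(.-)/([^/]+)%.md$")
    if file == dir then
        return slugify(dir) .. ".html" -- folder note: Section/Section.md -> section.html
    end
    return slugify(dir) .. "/" .. slugify(file) .. ".html"
end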

Spindle is extensible, so I can add things like wikilinks directly to the project, and especially wikilinks that evaluate with whatever behaviour I choose: when a published Obsidian page links to an unpublished page, that link just becomes text instead of a dead or red link. I try to always write my links into prose in such a way that the presence or absence of the link is irrelevant, so that when I do eventually publish that page, it will just magically integrate everywhere without intervention on my part.
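
The real inline-link hook lives in the site's Lua config; a minimal sketch of the decision it makes (the function name and the is_published helper are made up here, not Spindle APIs) looks like this:

-- hypothetical sketch: resolve a wikilink, falling back to plain text
-- when the target page hasn't been published
function resolve_wikilink(label, target)
    if is_published(target) then -- assumed helper, not a real Spindle call
        return string.format('<a href="%s">%s</a>', target, label)
    end
    return label -- no dead or red links, just the text
end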

Once Spindle has amalgamated all of these different inputs into a site, it spits out all of the processed and resized images into one folder, which get synced to Cloudflare R2 with rclone, and all of the static HTML files for each page into another, which get checked in to a Git repo and deployed to Cloudflare Pages.
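
The sync itself is ordinary tooling; in spirit it boils down to something like this (the folder, remote and bucket names are placeholders, and the real step is a script rather than Lua):

-- hypothetical sketch of the deploy step, driven from Lua for illustration
os.execute("rclone sync build/images r2:site-images") -- images to Cloudflare R2
os.execute("git -C build/pages add -A")               -- static HTML into the repo
os.execute('git -C build/pages commit -m "deploy"')
os.execute("git -C build/pages push")                 -- Cloudflare Pages deploys from the push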

All this means the hosting for this site is extremely simple and portable to essentially any other server in the world, and there is no tracking beyond the basic visitor metrics that Cloudflare provides.

Scripting

I wanted this site to be fully functional without Javascript: no content load or display should ever depend on it. The site is built static-first, meaning the default experience is designed first and then 'upgraded' if Javascript is enabled. For example —

While I consider the carousel to be essential to the presentation of my work, people who disable Javascript by default will be used to the mild quirks that happen without it, such as images needing to be opened via right-click.

I also add or change relevant styles: images do not react to hovering unless upgraded, which means elements always telegraph their current level of behaviour to visitors.

Structure

The Portfolio

I started out with the classic portfolio structure of having 'a portfolio' page, whether that's 'projects', or 'works' or whatever word you wind up settling on. Everyone seems to do this and it's probably the correct thing to have... but I kept having this odd feeling about some of the things I wanted to publish in there, those projects that are still worthwhile, tangible creations in their own right, but that perhaps don't carry the same weight as others. Maybe they're abandoned or cancelled, maybe I just made an experimental thing for myself, or maybe it's just a thought experiment I never plan on pursuing.

That's where the Laboratory came from. While Works is all 'proper' projects, the expectation of the Laboratory is things that may never be published, or that aren't available to download, watch or play. Sometimes an idea is just a bad one, or an exploration was never intended to be finished, but also... the design work was neat, or the concept art was interesting.

Carving out space for imperfect work has helped with the nervousness of sharing ideas that aren't polished, that give me impostor syndrome, or that I'm just disappointed at having quit on, despite having my reasons. Giving myself a staging ground for work that isn't good enough to be called a 'capital-W Work' is freeing, in some way. Not everything must be perfect.

So, because this site is made up of two different sources of content, one of the things I struggled with was designing a sensible navigation for it. There were two immediate problems I hit when thinking about publishing my wiki —

  1. How do I publish subsections of it without structural failure?
  2. How do I build an outsider-friendly navigation system?

The first one ended up being partially solved for me, in that I already write my inline links in such a way that their absence won't be missed — the same way Wikipedia's links work, built into the prose itself. A little finagling with my static site builder to implement wikilinks, and I have a setup where links to unpublished pages simply vanish without a fuss — if the page is missing, just return the label.

The second point I still don't have an answer to. Some digital gardens are entirely designed around link-hopping, like Anne-Laure Le Cunff's Mental Nodes, while others have rigidly-defined top-down navigation systems, like Devine Lu Linvega's XXIIVV.

You can see my basic attempt at providing an overview with the garden page, a sort of pseudo sitemap for the whole thing. It's not perfect, but I'll figure something out eventually.

Appendix

Overloading the Parser in Spindle

Spindle's internal parser is backed up internally, so you can't permanently break it inside a build by doing this; even if you neglect to reset it, it will fix itself. This matters because pages get constructed in a tree-like pattern from the root index page based on links, which is to say if a page is unreachable, it won't be built. If the overload weren't protected, any pages downstream of a Markdown page would be parsed with the wrong parser. In practice, however, it's safe to do this.

spindle.handlers[".md"] = function(file_path)
    local overload = spindle.parse_markup
    spindle.parse_markup = parse_markdown

    local page = spindle.load_page(file_path)

    spindle.parse_markup = overload

    page.canonical_url = page.canonical_url:gsub("%.md$", "")
    page.output_path   = page.output_path:gsub("%.md$", ".html")

    return page.output_path, page.canonical_url
end

RSS Feed

My microfeed page has an accompanying RSS feed, which I serve from the site natively. I toyed with ways of constructing this page a few different times, until it occurred to me that the answer was right in front of me all along.

Spindle is designed to handle HTML as a mix-in to its own syntax. This is to say, it doesn't understand HTML, but it knows not to mess with things in angle brackets.

Well, isn't XML just angle brackets of a different flavour?

spindle.handlers[".xml"] = spindle.handlers[".x"]

So, now I can use the same syntax I use in Spindle to construct the microfeed...

main {
    > content_heading

    This feed is a tiny blog.
    You can subscribe via [RSS](microfeed.xml).

    ~ feed microfeed/2025-05
    ~ feed microfeed/2025-04
}

...to also build the RSS feed!

<channel>
    <atom:link href="https://lichendust.com/microfeed.xml" rel="self" type="application/rss+xml"/>


    ~ rss then/2024-09
    ~ rss then/2024-08
    ~ rss then/2024-07
</channel>

This rss partial just wraps the entire page content in a feed entry and the <![CDATA[ ]]> markup. It does this with plain, primitive HTML templating, without classes and styling and such. The only exception is some inline syntaxes, like links and wikilinks, which are centrally defined in the site's Lua config, so overriding them is a bit of a pain; something to fix in a new version of Spindle.

So because some random attributes and classes might sneak through, I wrote a very simple Lua function that just strips them out, leaving any HTML string passing through it as plain old tags. We leave things like aria attributes alone, of course.

function remove_classes(page, text)
    return text:gsub('%s+class=%b""', ""):gsub('%s+id=%b""', "")
end

. <description><![CDATA[
    $ remove_classes %content
. ]]></description>

Is this a horrible, hacky bodge? Yes! Is it also very simple and essentially foolproof insofar as how this site is designed? Also yes. Ship it.

The only thing I can't do is share HTML snippets as code blocks in my RSS feed, because they'll get mangled, but I don't really want or need to do that anyway. I could always build a more detailed version of remove_classes if I absolutely had to, one that would respect the contents of <pre><code> tags.
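
A rough sketch of what that more careful version might look like (not something the site actually uses): stash the code blocks behind placeholders, strip attributes everywhere else, then put the blocks back.

-- hypothetical: like remove_classes, but leaves <pre> blocks untouched
function remove_classes_preserving_code(page, text)
    local blocks = {}
    text = text:gsub("<pre.-</pre>", function(block)
        blocks[#blocks + 1] = block
        return "\1" .. #blocks .. "\1" -- placeholder for the stashed block
    end)
    text = text:gsub('%s+class=%b""', ""):gsub('%s+id=%b""', "")
    return (text:gsub("\1(%d+)\1", function(n)
        return blocks[tonumber(n)]
    end))
end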

Embedding Resources

I've been playing with the idea of shipping scripts and styles within the page payload. Reducing request count is one of the main ways to speed up page load, so I've been wanting to try it and test it in the wild.

I know from the microfeed that Spindle is more than happy to be mildly abused to insert the entire content of one page into another (each post is an individual note originally, after all), but it can't do that with CSS. Spindle doesn't understand Javascript or CSS, and it will mangle them if you try to write them inline.

I already use a program called minify to do my pre-package minification, which just gets run on the static directory by the main build script. Spindle produces minified HTML already.

So I figured I'd just try importing these resources directly into the page instead of linking them. Doing it via a function prevents Spindle from mangling them, as mentioned. In the site's config.lua, I created a function, minimport, to handle this and to cache the result locally so it doesn't call minify for every single page.

local minimport_cache = {}

function minimport(page, args)
    local path = args[1]

    -- only run minify once per asset per build
    local cache = minimport_cache[path]
    if cache then
        return cache
    end

    -- shell out to minify and capture its output
    local handle = io.popen(string.format("minify %q", path))
    local result = handle:read("*a")
    handle:close()

    minimport_cache[path] = result
    return result
end

I can call this function (because it has the right argument pattern) from Spindle text directly:

<script>
    $ minimport static/script.js
</script>
<style>
    if %override_stylesheet {
        $ minimport static/%override_stylesheet
    } else {
        $ minimport static/style.css
    }
</style>

And now every page has its stylesheet embedded directly. I'm not doing this right now on the version of the site you're reading, for two reasons: minify doesn't support nesting in CSS yet, and Spindle has a bug where it somehow gets ahold of this return value and runs a rogue internal formatter on it. I haven't been able to track that down, because the results of $ script calls are normally protected from any further manipulation.
