About · Why it exists · 25 April 2026
What follows are my actual thoughts. They've been crafted and rewritten with AI so they're bearable to read. If you're here and getting through this, you'll thank me — otherwise you'd have tuned out three lines in.

— Colin

A solo project, shipped in a fortnight, an experiment in AI development.

HOW{things}WORK is a one-person project. I'm Colin Burns, a software engineer in Melbourne (for now). I built this app over two weekends — about twenty hours of focused work — and what came out is roughly what a small team would normally ship in a quarter. An iOS app, two backend services, a content pipeline, self-hosted infrastructure, full CI/CD. Twelve months ago this would have been an ambitious side project. Six months ago, a tough quarter. Today: two weekends.

The motivation

There used to be a moat around our industry. Building anything substantial — a real application, with a backend, a database, a deployment pipeline, a mobile app, content — was hard. The difficulty was the reason the work was paid the way it was. Years of experience went into knowing which patterns to reach for, which mistakes to avoid, and how all the pieces fit together.

That moat is shrinking. That is both exciting and terrifying. The constraints I grew up with — the tedious month of boilerplate before you got to the interesting work, the typing tax of building anything end-to-end — they're dissolving. What's left is the part I actually liked: deciding what to build, deciding how it should be put together, and deciding where the failure modes hide.

I won't pretend it's not also unsettling. Every software engineer I know is feeling some version of the same conflict: the thrill of shipping things we never could before, sitting awkwardly next to the question of what this means for how we make a living. Both halves are true. Both are worth taking seriously. This project was, partly, my way of looking at the conflict directly instead of around it.

Why a "how things work" app

The premise comes from a sci-fi trope my wife told me about. Someone travels back in time and finds they can name the things they brought from the future — microwave, satellite, insulin — but can't actually explain any of them. The mechanics are gone the moment they leave their century.

Honestly, I know nothing about the world. How does GPS work? How does a microwave actually heat food? I know that putting metal or aluminium foil in there is pretty spectacular, but I couldn't tell you why. I wanted a tool I could open while I'm in line at Starbucks waiting for the coffee — the physics of which I also don't understand — and read a quick, digestible snippet about something everyday.

The model is the ELI5 ("explain it like I'm five") subreddit, but a step up. Treat the reader as an adult with very little knowledge of the topic — you don't have to dumb it down to actually-five-year-old level. Plain language, but not babying.

I also wanted an excuse to build something new. I hadn't shipped an iOS app before. I'd never set up a content extraction pipeline. And I wanted to put this "AI" thing through its paces.

Twelve months ago this would have been a year-long side project. Today: two weekends.

How it actually got built

The structure of the work was driven by a product requirements document. I didn't type the PRD any more than I typed the code — I coordinated its development through conversation with the AI, the same way I coordinated the development of everything else. Treating the document as "real work I did" while treating the code as "the AI's work" would have been a strange line to draw.

From the PRD, Taskmaster broke the work down into discrete, ordered tasks with dependencies. Claude Code picked up tasks one at a time, wrote the implementations, and handed each piece back for review. My job was architecture, judgement calls, security review, and occasionally saying "no, that's not the trade-off I want."

The design followed the same pattern. I'm not a designer. I think I can tell good design from bad, but I can't produce good design from a blank page — there's a real difference between recognising taste and having the hands to express it. Halfway through the project, Claude Design was released. That gave me the missing third of the team.

The app's visual system started as a brief I worked out in a Claude conversation: an "academic atelier" direction — warm, editorial, deliberately not sterile EdTech and not gamified cartoon. I iterated on screens with Claude Design until the look felt right. For this marketing site, I asked it for eight variations against the app's visual language; this is the one I picked. A few of the others looked more aggressively editorial — newspaper-like, dense — but they came with hallucinated copy I'd have had to audit and rewrite from scratch. This one wore its constraints more gently.

The stack

Mobile

iOS app (React Native)

Built with Expo and React Native, offline-first via SQLite. Targeting iOS for now; Android falls out cheaply when I'm ready. The first mobile app I've shipped.
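The offline-first read path is the interesting part of that choice. Here's a minimal sketch of a read-through cache, written in Python with the stdlib sqlite3 module for brevity (the app itself is React Native/TypeScript); the table name, schema, and fetch function are hypothetical stand-ins, not the app's actual code:

```python
import json
import sqlite3

def get_topic(conn: sqlite3.Connection, topic_id: str, fetch_remote):
    """Read-through cache: serve a topic from the local SQLite store if
    present, otherwise fall back to a fetch function and persist the
    result so it's available offline next time."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS topics (id TEXT PRIMARY KEY, body TEXT)"
    )
    row = conn.execute(
        "SELECT body FROM topics WHERE id = ?", (topic_id,)
    ).fetchone()
    if row is not None:
        return json.loads(row[0])      # cache hit: no network needed
    topic = fetch_remote(topic_id)     # cache miss: go to the API
    conn.execute(
        "INSERT OR REPLACE INTO topics (id, body) VALUES (?, ?)",
        (topic_id, json.dumps(topic)),
    )
    conn.commit()
    return topic
```

The second open of the same topic never touches the network, which is what makes the app usable in a queue with one bar of signal.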

Backend

API + content pipeline

Two services: a Fastify API serving the app, and a content pipeline that drafts, edits, and ships explainers using LLMs with a retrieval layer.
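The "retrieval layer" half of that sentence is easiest to picture as prompt assembly: retrieved source chunks get stitched into the context so the model writes from evidence rather than from memory. A hypothetical sketch (the function name, prompt wording, and character budget are illustrative, not the pipeline's actual code):

```python
def build_grounded_prompt(topic: str, chunks: list[str], max_chars: int = 4000) -> str:
    """Pack retrieved chunks (assumed ranked by relevance) into the
    prompt until a simple character budget runs out."""
    context, used = [], 0
    for chunk in chunks:
        if used + len(chunk) > max_chars:   # stay inside the context budget
            break
        context.append(chunk)
        used += len(chunk)
    sources = "\n---\n".join(context)
    return (
        f"Explain '{topic}' for an adult with no background knowledge.\n"
        f"Use only the sources below; do not invent facts.\n\n{sources}"
    )
```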

Infrastructure

Self-hosted, end-to-end

Terraform-provisioned droplet, self-hosted Postgres and Redis, DO Spaces with a CDN for images. Tailscale for everything administrative.

Delivery

Custom CI/CD

A self-hosted GitHub Actions runner on a NAS at home, pushing images to a private container registry. Portainer manages deploys to the droplet via an Edge Agent.

What this actually felt like

The honest verdict: extraordinary, with footnotes. The AI did not replace the engineering. Architecture, storage strategy, security model, deployment topology, the dozens of small judgement calls that decide whether a system holds up — those are still decisions a human has to make. The AI was wrong often enough that you notice.

What it replaced, almost completely, was the typing. The line-by-line grind. Afternoons that used to disappear into boilerplate now compress into review sessions.

The moat hasn't disappeared. It's moved. Scott Werner has a wonderful piece called Warranty Void If Regenerated that imagines what software work looks like a few years into this transition: a country mechanic, working out of a corrugated steel shop, diagnosing the gap between what farmers specified and what their generated tools actually do. The skill that pays the bills isn't typing code anymore. It's reading specs against reality, choreographing how generated tools interact with each other, and translating the kind of knowledge that lives in someone's hands into language a machine can act on. It's fiction, but it doesn't feel far off.

That's roughly what building this app felt like, in miniature. The leverage is real. The work just looks different now.

How a topic gets made

The pipeline · Many sources in, one topic out

[Diagram: a funnel. A topic name goes in at the top ("How does a microwave work?"). Stage 01, Discover, searches the open web for candidate URLs, which are fetched and extracted to clean text (02-04). The Evaluate gate is the throat: an LLM scores each source 0-1, rejecting below 0.3 and keeping above 0.6. Survivors are chunked token-aware and embedded, one vector per chunk (05-06), into the vector store, where grounded retrieval (similarity search, top-k) lives. Five sibling stages fan out below: 07 Pregenerate (ELI5 + Go Deeper), 08 Fullgen (Full Picture), 09 Quizgen (4 questions), 10 Imagegen (to the CDN bucket), 11 Related (cross-category links). One grounded explainer comes out: ELI5 + Go Deeper, Full Picture, quiz, hero image, related topics.]

Reads top to bottom. A wide field of candidate sources converges through Evaluate (the throat); survivors are chunked, embedded, and pooled in the vector store at the waist; five sibling stages fan back out below it. Four pull grounding back from the well in green; Imagegen sits in the row but doesn't retrieve — it's grounded by the topic name itself.
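Three of those stages are concrete enough to sketch. The thresholds on the Evaluate gate come from the diagram; the chunker and the similarity search are simplified stand-ins (word counts instead of tokens, plain cosine instead of a real vector store):

```python
import math

def gate(score: float) -> str:
    """The Evaluate gate: an LLM scores each source 0-1; below 0.3 is
    rejected, above 0.6 is kept. The middle band is left undecided here."""
    if score < 0.3:
        return "reject"
    if score > 0.6:
        return "keep"
    return "borderline"

def chunk(text: str, max_words: int = 200) -> list[str]:
    """Token-aware splitting, approximated by word counts for the sketch."""
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

def top_k(query_vec: list[float], chunk_vecs: list[list[float]], k: int = 5) -> list[int]:
    """Similarity search: rank stored chunk vectors against the query by
    cosine similarity and return the indices of the k best matches."""
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.sqrt(sum(x * x for x in a)) *
                      math.sqrt(sum(y * y for y in b)))
    ranked = sorted(range(len(chunk_vecs)),
                    key=lambda i: cosine(query_vec, chunk_vecs[i]),
                    reverse=True)
    return ranked[:k]
```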

Coda · And then… From pipeline output to reader's pocket

[Diagram: the pipeline writes topic content, embeddings, quiz, and links to Postgres; the Fastify API on the DigitalOcean droplet reads from Postgres and serves the app JSON on demand, cached offline; Spaces CDN serves globally cached images directly to the phone.]

A coda, not a diagram. Three steps in a line — the pipeline writes to Postgres, the API reads from Postgres, the app reads from the API — with Spaces CDN as the small offshoot supplying images directly to the phone. Same legend as the hero above: green is dynamic API traffic, soft dashed is image content.

How it gets deployed

Vertical flow · From the stairs to the world

[Diagram: home zone above, cloud zone below, users at the bottom. In Melbourne, the developer runs `git push origin api-v0.1.6`; the tagged release on GitHub triggers a build webhook. The home NAS (a Synology DS923+, under the stairs) picks up the job on its Actions runner, builds the Docker images, and keeps them in a private registry on the NAS. A Tailscale private network is the only link to the cloud: a DigitalOcean droplet in the Sydney region, where a Portainer Edge Agent pulls images from home and runs Postgres, Redis, the Fastify API, and the content pipeline. api.howthingswork.app serves readers JSON directly; Spaces CDN serves globally cached images to curious adults around the world.]

The home zone is one warm tonal block; the cloud zone is another; the Tailscale dashed line is the only thing connecting them. A NAS in a house in Melbourne, talking to a single droplet, talking to the world.
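For the curious, the release trigger looks roughly like this workflow file. This is a sketch: the file contents, registry host, and image names are hypothetical stand-ins, not the real configuration.

```yaml
# Hypothetical sketch: a tagged push is built by the self-hosted runner
# on the NAS and pushed to the private registry, where Portainer's Edge
# Agent later pulls it. All names below are illustrative.
name: release-api
on:
  push:
    tags: ["api-v*"]
jobs:
  build:
    runs-on: self-hosted        # the Actions runner on the Synology NAS
    steps:
      - uses: actions/checkout@v4
      - name: Build image
        run: docker build -t registry.local:5000/htw-api:${GITHUB_REF_NAME} .
      - name: Push to private registry
        run: docker push registry.local:5000/htw-api:${GITHUB_REF_NAME}
```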

By the numbers

~20
Hours of focused work
2
Weekends, start to ship
1
Engineer at the keyboard

What's next

I'm planning to make the repository public once I've finished a more targeted security review. The interesting artefact isn't the app itself — it's the shape of the project. What a solo developer can architect and ship right now, with the right tools and a clear plan. If even one person reads this and thinks I could build the thing I've been putting off, the page has earned its keep.

I'm sure other software developers will review this code and say "I could do better", or nitpick this or that. But I'd argue those same developers would do that whether this was AI-written or built by the "best" software team in the world. And, again, that's not the point of this project.

In the end I really hope people use this app, because if they do I'll get to learn a whole new set of things about how AI-written code holds up. One thing I tried to do throughout development was not to over-engineer or prematurely optimise. Hopefully I get to find out whether that worked.