[{"content":"I spent way too much of my life avoiding making decisions and setting priorities. My wife and I have a goal to \u0026ldquo;do one thing on the house every day\u0026rdquo;. This usually means fixing baseboards, painting, adjusting cabinet doors, installing new cabinets. It ranges from small to big tasks and looking at the spreadsheet is very overwhelming. So work tends to happen in spurts.\nAt the same time, cleaning and meal planning are two areas where I need to invest the mental effort to keep track of it and plan it out, but just haven\u0026rsquo;t made a focused effort. It\u0026rsquo;s the stupid stuff like cleaning the dishwasher filter (which you should be doing), dusting the ceiling fan, vacuuming out the tracks in the windows, and so on. The list is nearly endless.\nRather than change my ways, let\u0026rsquo;s do the person-working-in-tech thing and throw technology at the problem and tell ourselves that it will solve it.\nPresenting: honeydew\nA small platform for managing house work, house cleaning, and meal planning.\nThis is not an advertisement to use my new platform. In fact, don\u0026rsquo;t. I\u0026rsquo;ve been using it for four or five years now and have no intention of sharing it widely. I break it frequently and don\u0026rsquo;t want to be on the hook for it.\nI\u0026rsquo;ll talk about how I made it, why I\u0026rsquo;m talking about it now, and the outcomes.\nThe Stack The tech stack is actually almost entirely TypeScript (I really like it).\nI looked into a few options for hosting but settled on Cloudflare Pages. I was seriously impressed by their deploy times and decent documentation. I\u0026rsquo;ve worked on Azure in the past and have free credits there, but its deploy experience was terrible. Nuxt, Astro, and Qwik were all evaluated but ultimately I went with Vue as it\u0026rsquo;s what I know. Perhaps in the future I\u0026rsquo;ll swap it out.\nThere are two main ways to access this: the webapp and Telegram. 
I\u0026rsquo;ll cover each independently.\nData Storage I originally started with Cloudflare KV since their SQL database was still in private beta and I didn\u0026rsquo;t want to bother my one friend at Cloudflare and ask for special treatment. That meant waiting in line like everyone else. I had a decent abstraction layer over the datastore, so when D1 finally became available, the migration was relatively painless. Now I\u0026rsquo;m using Cloudflare D1 (their SQLite-based database) via Kysely as an ORM, with KV still handling caching, sessions, and things like magic link tokens. All dates are stored as Julian Day Numbers, which makes date math dead simple — \u0026ldquo;how many days since this chore was last done?\u0026rdquo; is just subtraction.\nThe Webapp The frontend is Vue 3 with Pinia for state management and tRPC for type-safe API calls. This means the frontend and backend are just magically in sync while sharing code. It\u0026rsquo;s styled with Bulma and a Nord color theme because I like blues and greens.\nThe webapp is where most of the UI side of things is. You add chores with a name and a frequency (in days), and the system figures out who should do what and when. There\u0026rsquo;s a scoring algorithm that prioritizes the most overdue chores while penalizing ones that were recently assigned. You can assign chores to specific people or leave them as \u0026ldquo;Anyone\u0026rdquo; for the auto-assigner to figure out.\nI\u0026rsquo;m still not happy with the meal planning. I scrape recipes using a chain of parsers — it tries site-specific scrapers first (America\u0026rsquo;s Test Kitchen, EveryPlate, Joshua Weissman, etc.), then falls back to generic Schema.org JSON-LD parsing. I had grand plans for automated meal plan generation but that\u0026rsquo;s still a stub. 
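To make the Julian-Day-Number-plus-scoring idea concrete, here is a minimal sketch. The helper names, the `Chore` shape, and the overdue/recency weights are all my own illustration, not honeydew\u0026rsquo;s actual code:

```typescript
// Convert a calendar date to a Julian Day Number (an integer day count).
// 1970-01-01 (the Unix epoch) corresponds to JDN 2440588, so the
// conversion is just "days since epoch" plus that constant.
function toJulianDay(d: Date): number {
  return Math.floor(d.getTime() / 86_400_000) + 2440588;
}

// Hypothetical chore record: all dates stored as JDNs.
interface Chore {
  name: string;
  frequencyDays: number;
  lastDoneJdn: number;
  lastAssignedJdn: number;
}

// Higher score = more urgent. Days overdue push the score up; a recent
// assignment pulls it back down. The weights here are illustrative.
function choreScore(chore: Chore, todayJdn: number): number {
  const overdue = todayJdn - chore.lastDoneJdn - chore.frequencyDays;
  const recencyPenalty = Math.max(0, 7 - (todayJdn - chore.lastAssignedJdn));
  return overdue - recencyPenalty;
}
```

Since every date is already an integer, "how many days since X" never touches a date library at all.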
With the rise of agents, I would love to figure out some sort of smart meal planner that can look at local sales in my area and place a pickup order.\nHouse projects are basically multi-step to-do lists with dependency tracking. Each task can have up to two prerequisite tasks, and the system only surfaces tasks whose dependencies are complete. This is what we use for the house renovation stuff — \u0026ldquo;install cabinet\u0026rdquo; depends on \u0026ldquo;buy cabinet\u0026rdquo; depends on \u0026ldquo;measure space\u0026rdquo;. Breaking house projects up into small tasks is not easy.\nThere\u0026rsquo;s also a clothing inventory system that I added because why not. You can track what\u0026rsquo;s in your closet, how many times you\u0026rsquo;ve worn something since its last wash, and get outfit suggestions based on the weather. That last part is still pretty rough.\nAuthentication is passwordless — you sign up with just a name and an optional household invite key, validated with a Cloudflare Turnstile captcha. Sessions use JWT tokens stored as cookies. If you need to log in from a new device, you generate a magic link (a 50-character random key with a 1-hour TTL) that you can access via the app or Telegram. I started a recovery mechanism but couldn\u0026rsquo;t be bothered to hook up an email.\nTelegram The Telegram bot was going to be the killer feature. I really thought getting a push notification with your chore for the day and being able to tap \u0026ldquo;Done\u0026rdquo; right there in the chat would move the needle.\nLet me ask you a question: how effective are the notifications from Duolingo or other apps? For some people, they work amazingly (a friend recently showed me their ~2630 day streak). For others, they actively hinder it.\nDuolingo, to their credit, has published several research papers such as this one about how to send a notification at the right time. 
Rather than trying to implement their complex algorithm, I just added a streak system.\nIt helps somewhat, but it hasn\u0026rsquo;t been the boost I was hoping for. Which I should have known: the streak in Duolingo isn\u0026rsquo;t that big of a motivation for me.\nWhy Talk About It Now? One word: AI.\nSuddenly, this goes from a hand-rolled project to something that is flexible and moldable. I can be at the park with my son and while he\u0026rsquo;s swinging, I can send a quick message to Claude \u0026ldquo;hey- can you please fix the chore assignment algorithm, I\u0026rsquo;ve been told to clean the bathroom mirrors for the last three days in a row. I\u0026rsquo;ll do this weekend but I might do something else if asked\u0026rdquo;, then go back to pushing him on the swing, checking the PR when I next have a free couple of minutes.\nWhatever your feelings about AI and absentee parenting, I will freely admit that this has been a double-edged sword. It has sucked a lot of the enjoyment out of the project. While I know the codebase well enough to be able to accurately review the PRs, it isn\u0026rsquo;t as fun as doing it myself. On the other hand, keeping NPM packages working together is tedious and I don\u0026rsquo;t want to do it. Finding the time to sit down at the computer for an hour straight is very challenging.\nI have some thoughts on how to make Claude work for me when using mobile. Things like GitHub CI pipelines and pipelines that post screenshots so I can more easily see what the new features look like. There have been a few changes where it just becomes too much for me to review on the phone screen.\nIt has become a personal tool that does whatever I want it to. Want a clothing inventory and outfit recommendation system? Go for it. No need for a separate app.\nFinal Thoughts The easier solution is to just have a monthly, quarterly, and yearly checklist. Just decide this Sunday or Saturday is going to be checklist day. 
Done.\nIt\u0026rsquo;s what some people have done for years and it works fine. Is it cool that I have a digital nag that is really obsessed with my personal situation? Maybe.\nIs it cool that I have a small ecosystem that I can use for whatever I see fit? Absolutely.\nWould I recommend building your own? Probably not. But if you\u0026rsquo;re the kind of person who reads blog posts about household management systems built on Cloudflare, you were probably going to do it anyway.\n","permalink":"https://matthewc.dev/projects/_honeydew/","summary":"I spent way too much of my life avoiding making decisions and setting priorities. My wife and I have a goal to \u0026ldquo;do one thing on the house every day\u0026rdquo;. This usually means fixing baseboards, painting, adjusting cabinet doors, installing new cabinets. It ranges from small to big tasks and looking at the spreadsheet is very overwhelming. So work tends to happen in spurts.\nAt the same time, cleaning and meal planning are two areas where I need to invest the mental effort to keep track of it and plan it out, but just haven\u0026rsquo;t made a focused effort.","title":"My Homemade Robot Nag"},{"content":"Every year, the Social Security Administration publishes data on every baby name registered in the United States. The dataset stretches all the way back to 1880, covering over 140 years of naming trends, fads, and cultural shifts captured in a simple list of names and counts.\nI was curious about names that aren\u0026rsquo;t super popular but are consistent. Most people know how to pronounce and spell the name Ellen. But you don\u0026rsquo;t meet that many Ellens. When you go to a tourist trap, you probably won\u0026rsquo;t find Ellen on a cheap license plate keychain. And when you do meet an Ellen, are they 80 or 30? 
You probably can\u0026rsquo;t tell.\nAre there names like this that:\nAren\u0026rsquo;t incredibly common, but common enough that people recognize them Don\u0026rsquo;t have a specific era tied to them One note: the data isn\u0026rsquo;t merged extensively. William and Will are treated as separate names. There are techniques for merging variants by converting them to phonemes, but I can\u0026rsquo;t be bothered.\nWhy rank? The SSA data includes raw counts: how many babies were given each name every year. But raw counts and percentages are misleading for comparing across eras.\nIn 1880, roughly 8% of all male babies were named William. Today it\u0026rsquo;s closer to 1%. William is still incredibly popular (it has never left the top 20) but over the past 150 years parents have diversified dramatically. The total pool of names in use has exploded, so every individual name\u0026rsquo;s slice of the pie shrinks regardless of how popular it actually is relative to other names.\nRank sidesteps this. It measures relative popularity: is this name #1, #10, or #100 compared to everything else that year? The charts below show the difference.\nSame name, two lenses. Left: William\u0026rsquo;s raw share of male births, which looks like a long steady decline driven by naming diversity, not a real drop in popularity. Right: William\u0026rsquo;s rank, near the top for over a century.\nThe Chart Search for any name, or pick a preset to explore different eras. Hover over a line to see details, click to pin it. The Y-axis shows rank (1 = most popular).\nThe Timeless Names Some names are fads with spikes. But others are remarkably stable across generations. If someone tells you their name is \u0026ldquo;Kyle,\u0026rdquo; you can probably guess they were born in the \u0026rsquo;90s. There are even memes about it on the internet. 
But some names give away nothing.\nThe chart below shows the names with the flattest rank trajectories: the ones where knowing the name tells you almost nothing about when the person was born.\nHere is how I have chosen to calculate the score.\nscore = std + (worstRank - bestRank) * 0.5\nThe Hidden Middle This brings up my next question. Many of the \u0026ldquo;flat\u0026rdquo; names are quite popular, which isn\u0026rsquo;t what I\u0026rsquo;m trying to find. What about the second-string names? The ones that stay in the middle band, decade after decade, never quite fading to obscurity.\nThe chart below ranks names by a combination of low standard deviation and lower average rank, steady and relatively popular within the 50–2,000 band. We filter out anything that ever peaked above rank 50, and anything that ever dropped below rank 2,000.\nscore = std * 3 + avgRank\nWhat the Data Tells Us The Jennifer Effect. Some names explode onto the scene and dominate for a decade before fading just as fast. Jennifer went from obscurity to #1 in 1970 and stayed on top for over a decade before falling off a cliff. The same pattern repeats with Jessica in the late \u0026rsquo;80s and Ashley in the \u0026rsquo;90s.\nBiblical names endure, until they don\u0026rsquo;t. Mary held the #1 spot for girls from 1880 to 1946 (67 years). James and John dominated the boys\u0026rsquo; charts for nearly a century. But since the 2000s, even these stalwarts have slipped as parents increasingly seek distinctive names.\nThe diversity explosion. In 1950, the top 10 names accounted for a huge share of all babies. Today, naming is far more distributed. There are more names in the top 200 that weren\u0026rsquo;t there a decade ago. We\u0026rsquo;re in the most diverse naming era in American history.\nPop culture leaves fingerprints. You can spot cultural moments in the data: Shirley (Temple) in the 1930s, Elvis never quite cracking the mainstream, Arya climbing after Game of Thrones. 
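Both scoring formulas from earlier reduce to a few lines over a name\u0026rsquo;s per-year rank series. A sketch (the function names are mine; lower scores are better in both):

```typescript
// Mean and population standard deviation over a rank series.
function mean(xs: number[]): number {
  return xs.reduce((a, b) => a + b, 0) / xs.length;
}

function stdDev(xs: number[]): number {
  const m = mean(xs);
  return Math.sqrt(mean(xs.map((x) => (x - m) ** 2)));
}

// Flatness: low variance, with a small penalty for total rank range.
function flatnessScore(ranks: number[]): number {
  const worst = Math.max(...ranks);
  const best = Math.min(...ranks);
  return stdDev(ranks) + (worst - best) * 0.5;
}

// Hidden middle: steady AND mid-popularity. Assumes names outside the
// 50-2,000 band were already filtered out upstream.
function hiddenMiddleScore(ranks: number[]): number {
  return stdDev(ranks) * 3 + mean(ranks);
}
```

A perfectly flat name (the same rank every year) scores 0 on flatness, and its hidden-middle score is just its average rank.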
Names are a mirror of what a generation was watching, reading, and listening to.\nMale names tend to be more stable. The most stable names are overwhelmingly male, and top male names tend to stay at the top for a very long time.\nData source: Social Security Administration. Only names with at least 5 occurrences in a given year are included in the SSA dataset.\n","permalink":"https://matthewc.dev/musings/baby-names/","summary":"Every year, the Social Security Administration publishes data on every baby name registered in the United States. The dataset stretches all the way back to 1880, covering over 140 years of naming trends, fads, and cultural shifts captured in a simple list of names and counts.\nI was curious about names that aren\u0026rsquo;t super popular but are consistent. Most people know how to pronounce and spell the name Ellen. But you don\u0026rsquo;t meet that many Ellens.","title":"How flat is your name?"},{"content":"Like many coders, I participated in Advent of Code 2025. However, this year I wanted to make it challenging. Learning Rust or Elixir was a tempting option, but why not go for something a bit more out there?\nI want to solve every problem on the GPU, not the CPU. Which means compute shaders.\nI chose Swift as the host language and Metal as the shader language. Mostly because I was using a Mac and partly because I\u0026rsquo;ve been meaning to improve my Swift skills as well.\nThe person who runs Advent of Code has asked people not to repost or copy the actual content of Advent of Code ( https://adventofcode.com/2025/about ) and walking through each individual problem is rather boring for an article.\nSo let me talk about my general thoughts and the experience. 
Much of what I could say has been said in other blogs and the fantastic series of videos from Sebastian Lague ( https://www.youtube.com/@SebastianLague/videos ) where he uses compute shaders to explore various problems.\nThe Code The basic layout I copied from problem to problem was:\nThe shader code Compile the shader Parse the input and create GPU buffers Kick off the shader, parse the result Compute a verification using the CPU to make sure it worked This is the basic code in Swift.\nimport Metal import Foundation let source = \u0026#34;\u0026#34;\u0026#34; kernel void compute_shader(device int8_t* inData [[buffer(0)]], device int* constants [[buffer(1)]], device atomic\u0026lt;uint\u0026gt;* results [[buffer(2)]], uint id [[thread_position_in_grid]]) { /* shader goes here */ } \u0026#34;\u0026#34;\u0026#34; guard let device = MTLCreateSystemDefaultDevice() else { fatalError(\u0026#34;Metal is not supported on this device\u0026#34;) } guard let commandQueue = device.makeCommandQueue() else { fatalError(\u0026#34;Failed to create commandQueue\u0026#34;) } let (inData, rowLength, startIndex) = readInFileAndGenerateData() let count = inData.count // Create the buffers that go into the GPU let bufferA = device.makeBuffer(bytes: inData, length: MemoryLayout\u0026lt;Int8\u0026gt;.stride * count, options: [])! let constants: [Int32] = [rowLength, Int32(count), Int32(startIndex)] let bufferB = device.makeBuffer(bytes: constants, length: MemoryLayout\u0026lt;Int32\u0026gt;.stride * constants.count, options: [])! var initialValue: [Int32] = Array(repeating: 0, count: count) let bufferResult = device.makeBuffer(bytes: \u0026amp;initialValue, length: MemoryLayout\u0026lt;Int32\u0026gt;.stride * initialValue.count, options: [])! // Create the shader library let library = try! 
device.makeLibrary(source: source, options: nil) // Find the function, doesn\u0026#39;t matter what it is called as long as it matches let function = library.makeFunction(name: \u0026#34;compute_shader\u0026#34;)! // Create a compute pipeline state let pipelineState = try! device.makeComputePipelineState(function: function) // Create a command buffer and encoder let commandBuffer = commandQueue.makeCommandBuffer()! let encoder = commandBuffer.makeComputeCommandEncoder()! // Set the compute pipeline and buffers encoder.setComputePipelineState(pipelineState) encoder.setBuffer(bufferA, offset: 0, index: 0) encoder.setBuffer(bufferB, offset: 0, index: 1) encoder.setBuffer(bufferResult, offset: 0, index: 2) // Calculate thread group size let workers = count * 10 let threadGroupSize = MTLSize(width: min(pipelineState.maxTotalThreadsPerThreadgroup, workers), height: 1, depth: 1) let threadGroups = MTLSize(width: (workers + threadGroupSize.width - 1) / threadGroupSize.width, height: 1, depth: 1) // Dispatch the compute kernel encoder.dispatchThreadgroups(threadGroups, threadsPerThreadgroup: threadGroupSize) encoder.endEncoding() // Commit and wait for completion commandBuffer.commit() commandBuffer.waitUntilCompleted() // Read back the results let resultPointer = bufferResult.contents().bindMemory(to: Int32.self, capacity: count) let resultBuffer = UnsafeBufferPointer(start: resultPointer, count: count) // Parse the resultBuffer and print it let actualResult = resultBuffer.filter({ $0 != 0 }) // Sometimes we allow small amounts of parsing print(\u0026#34;✅ Compute Shader completed successfully!\u0026#34;) print(\u0026#34;Total accumulated value: \\(actualResult)\u0026#34;) It is not the best Swift. I could have had Xcode compile the shader for me and had a far better experience. But I couldn\u0026rsquo;t be bothered.\nFor once, the documentation was fairly solid. 
I used these two resources: the Metal guide and the Metal Shading Language spec. Xcode was a miserable experience (and yes, I work for the company that makes Xcode; opinions are my own), so I do my best to avoid it.\nChatGPT will also fairly confidently provide incorrect information about Metal shaders; I\u0026rsquo;m guessing it produces code that would be valid in HLSL or other shader languages.\nDumb Solutions The GPU has a certain way of working. It does not support recursion. It can run massively parallel. Managing state across threads is ugly and requires atomics for correctly ordered operations. Metal, to my dismay, does not support atomic operations on single bytes (uint8_t). It supports int/bool/long/float, though long is not supported for every atomic operation.\nOne particular problem for 2025 involved a bunch of stacks of boxes on shelves in a grid. How many stacks can be picked up (at least one side is free)?\nOn a CPU you would just step through each box and check the surrounding 4 boxes, accumulating some sort of count as you went.\nOn a GPU, why not compute every box at the same time?\nEach GPU thread gets an index to help it compute differently from its cousins. You could pass in the index of the box each thread is supposed to compute, then have the thread walk the grid until it finds the nth box. But\u0026hellip; What if you just ran the thread on every cell in the grid?\nThreads run in lockstep groups: Nvidia calls them warps, AMD calls them wavefronts, and Apple calls them SIMD-groups.\nFor the M1, Apple claimed that 24,576 threads could be run simultaneously.\nSo kicking off a 156x156 grid is totally reasonable. If that grid position isn\u0026rsquo;t a box, just return early.\nOf course, each SIMD-group runs the exact same instruction stream, so when there is an if, the lane (a single thread) is marked as inactive until that particular branch is finished. 
It\u0026rsquo;s one of the reasons recursion isn\u0026rsquo;t supported.\nAnyway, running on an absolutely massive grid shows where the GPU shines. My actual code had profiling in it (which I removed for readability) and the GPU took 5ms almost every time. The CPU could chunk through a small grid in sub-millisecond times. But as soon as you started to get to larger grids, the CPU ground to a halt and the GPU stayed at a consistent 5ms.\nMonte Carlo Another one of the problems involves a plinko-board-like system. The task is to assume an infinite number of balls falling down the plinko board and calculate how many pegs were hit.\nEvery peg that can be hit, will be hit.\nMy first thought was to have a result buffer accumulating how many times each peg had been hit and the left or right path taken. Send several thousand threads each starting at the top, walking through the path, atomically adding to the result. They then just take the path that was less traveled.\nThis is a technique known as Monte Carlo, used in ray-traced rendering, as it is impractical to calculate every single path that a ray takes. The basic idea is that by using slight amounts of randomization, you can get very close to the right answer.\nHowever, that doesn\u0026rsquo;t work well for Advent of Code, as you need the exact answer, not one that is off by one or two. By massively increasing the number of threads, I could get the right answer on the smaller plinko board with just 8 layers. But moving to the real input that had 50 layers meant the likelihood of a ray getting to a particular peg dropped significantly.\nInstead what I did was borrow the approach from above and have a thread for every peg on the board. It traversed up to see if there was a peg on the left or right that a chip could bounce off of. Then it looked to see if that peg was marked as unknown, dead, or alive. If it was unknown, the thread kept running until all the found pegs were dead or alive. 
If any were alive, this peg got marked as alive. If they were all dead, this peg got marked as dead. If it ran into a peg (i.e., directly underneath a peg) without finding any other pegs, it would be dead.\nGraph Traversal As Advent of Code went on, it quickly got to some problems that GPUs tend to struggle with.\nOne particular problem gives you a directed graph with a start node and an end node, with no loops. It then asks: how many unique paths are there from start to end?\nThere are many good articles and research papers out there about the topic; I particularly liked this one by Yangzihao Wang and John Owens.\nA local stack for each thread allows it to iterate through nodes and add new nodes to explore. However, figuring out what other threads have already looked at or computed becomes tricky.\nIn this case, I did a thread per node and calculated the number of unique paths from a given node to the end node. This was calculated by adding up all the counts of its neighbors.\nSo it ended up looking similar to the plinko problem, just not on a grid. Most of the code ended up being about creating the adjacency list (rather than a matrix, which would have been inefficient since the graph was very sparse).\nConstraint Solvers The problem that gave me the most trouble was a packing problem. Given a grid of a certain size (5 by 10, for example) and a set of shapes (two 2x2 squares, 5 L-shaped pieces, and 1 T-shaped piece), can they all fit in the grid?\nThis is just begging to be a constraint solver problem, particularly after reading the article by Jeremiah Crowell about something very similar.\nThis is an exact cover problem, in that each cell is covered exactly once. So Algorithm X / Dancing Links would work here. As far as I know, there isn\u0026rsquo;t an implementation of Dancing Links on the GPU. 
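As an aside, the per-node path counting from the graph problem is the CPU verification I would write for it. A sketch (in TypeScript rather than Metal, since it is just the recurrence; all names are mine): each node\u0026rsquo;s count is the sum of its successors\u0026rsquo; counts, with the end node seeded at 1.

```typescript
// CPU sketch of unique-path counting: paths(node) = sum of paths(successor),
// with paths(end) = 1. Walks nodes in reverse topological order, which is
// valid because the graph has no loops.
function countUniquePaths(
  adjacency: Map<string, string[]>, // node -> successor nodes
  topoOrder: string[],              // topologically sorted, start first
  end: string
): Map<string, number> {
  const counts = new Map<string, number>();
  counts.set(end, 1);
  for (let i = topoOrder.length - 1; i >= 0; i--) {
    const node = topoOrder[i];
    if (node === end) continue;
    let total = 0;
    for (const next of adjacency.get(node) ?? []) {
      total += counts.get(next) ?? 0;
    }
    counts.set(node, total);
  }
  return counts;
}
```

On the GPU the same recurrence runs with one thread per node, re-checking neighbors until every count has settled.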
I found the paper GPU-accelerated Matrix Cover Algorithm for Multiple Patterning Layout Decomposition, which was sadly light on details.\nThere are some constraint solvers on the GPU, for example Turbo, which is built on CUDA.\nThe approach I went with was that each thread explores a different starting placement/shape/rotation and then uses a stack to try all the solutions it found. With a 60x60 grid, 5 shapes, and 8 possible rotations (including mirroring), the number of starting points can reach 27k easily. Once the thread finds a solution, it bails with success.\nWe just need one thread to report success; we don\u0026rsquo;t care what the actual layout is.\nSince the GPU doesn\u0026rsquo;t do dynamic memory, there is a grid-size limit built into the shader.\nDebugging The experience of debugging compute shaders is lackluster to say the least. It\u0026rsquo;s also interesting that a bug has real consequences on your computer, as the display glitches. Since the shader is compiled at runtime, you don\u0026rsquo;t get warnings until you run (though having Xcode compile it might have fixed that). There are no printfs, so I would often have a status buffer (one element per thread) for threads to write some sort of status into. This made it easier to at least tell what the threads were doing.\nConclusion Working through Advent of Code on the GPU was unique. It’s a completely different way of thinking about problems. On the CPU, you can rely on recursion, flexible data structures, and sequential logic. On the GPU, you’re forced to think in terms of massive parallelism, explicit memory management, and what happens when thousands of threads are doing exactly the same thing at the same time. You quickly learn which problems are GPU-friendly and which ones are best left on the CPU. Debugging was tedious, sometimes chaotic, and yes, occasionally my screen glitched. Having to reason about atomics and data access was actually much easier than I expected. 
If nothing else, it’s a fantastic exercise in mental flexibility.\nIn the end, I learned a lot about parallelism. And honestly, that’s more rewarding than any single Advent of Code answer.\nI\u0026rsquo;m already thinking about what else I can do on the GPU in a compute shader. I\u0026rsquo;ve been thinking about making an SDF solver for ages (using a shader to calculate an SDF that best fits a mesh). Let\u0026rsquo;s see if I keep the compute shader train going in 2026.\n","permalink":"https://matthewc.dev/musings/advent-code-2025/","summary":"Like many coders, I participated in Advent of Code 2025. However, this year I wanted to make it challenging. Learning Rust or Elixir was a tempting option, but why not go for something a bit more out there?\nI want to solve every problem on the GPU, not the CPU. Which means compute shaders.\nI chose Swift as the host language and Metal as the shader language. Mostly because I was using a Mac and partly because I\u0026rsquo;ve been meaning to improve my Swift skills as well.","title":"Advent of Code 2025 with Compute Shaders"},{"content":"If you\u0026rsquo;ve been unlucky enough to converse with someone who has recently bought a bidet, they\u0026rsquo;ve probably told you how it changed their lives like some sort of newfound religion. As someone who converted to Bidetism recently, I\u0026rsquo;m here to tell you four reasons why you shouldn\u0026rsquo;t.\nHave you ever looked at your toilet and thought it could remind you more of a nursing home? With a bidet, you can. Strap an enormous white plastic throne to your diminutive toilet. It\u0026rsquo;s got handles and buttons. It can probably help you get up and down from the toilet to save the wear and tear on your knees. Does the power ever turn off at your house, even briefly? Every time, the bidet will reset. It is only when you hit the bidet button that you realize it has forgotten the heat setting. Have you ever had Satan lick your butthole? 
Zero-degree water blasting right through your colon like an icicle knife is how I would describe it. In a house with multiple bathrooms, the toilets wear somewhat evenly. However, now there is suddenly only one toilet that can be used for certain activities. Imagine a singular port-a-potty at a busy construction site. Overuse can quickly become a problem. Have you ever wanted to befuddle and frighten a house guest? Perhaps it is my strange American sensibilities, but the idea of using someone else\u0026rsquo;s bidet seems strange to me. Like using someone else\u0026rsquo;s toothbrush. These aren\u0026rsquo;t the only reasons not to convert to Bidetism, and these are largely US-specific reasons, as many other countries have a separate bidet (like sane people). To all those US-based hold-outs, stay strong and don\u0026rsquo;t be swayed by the bidet rhetoric.\n","permalink":"https://matthewc.dev/musings/bidets/","summary":"If you\u0026rsquo;ve been unlucky enough to converse with someone who has recently bought a bidet, they\u0026rsquo;ve probably told you how it changed their lives like some sort of newfound religion. As someone who converted to Bidetism recently, I\u0026rsquo;m here to tell you four reasons why you shouldn\u0026rsquo;t.\nHave you ever looked at your toilet and thought it could remind you more of a nursing home? With a bidet, you can. Strap an enormous white plastic throne to your diminutive toilet.","title":"Why You Shouldn't Convert to Bidet"},{"content":"No matter where you go, there\u0026rsquo;s a place in the back of the back alley open at 0am. They sell piping hot covfefe.\nMade fresh every doomsday.\n","permalink":"https://matthewc.dev/blender/covfefe/","summary":"No matter where you go, there\u0026rsquo;s a place in the back of the back alley open at 0am. 
They sell piping hot covfefe.\nMade fresh every doomsday.","title":"Covfefe"},{"content":"This is a gross oversimplification, but there are roughly two types of music players: classical and jazz.\nClassical knows what\u0026rsquo;s coming next. They play as a well-oiled machine that is in tune and in sync. They can start and stop on a dime as directed. Jazz flows, bouncing to new places and riffing off the previous stanza. Sometimes beautiful and sometimes verging on horrid, jazz keeps time but not beat. Classical practices specific pieces and jazz practices generally.\nFor most of my life, I wanted to be a jazz player. To become a generalist who could jump into anything and play along. This applies to hobbies, coding, and other aspects of life. But I find myself thrown off when something unexpected happens. I prefer to go in with a plan and have an idea of what should happen, despite wanting to go with the flow. I used to view jazz as superior, as it was versatile and fast-moving. Classical has skills in planning and can deliver a consistent, polished experience. That\u0026rsquo;s not a bad thing!\nGranted, there are people out there who can do classical and jazz. Those who can plan, follow it, and then improvise when needed. But most of us lean one way or another.\nBut it\u0026rsquo;s time to accept that I lean classical, continue to work on my jazz skills, and not beat myself up for not being both.\n","permalink":"https://matthewc.dev/musings/living-without-playing-jazz/","summary":"This is a gross oversimplification, but there are roughly two types of music players: classical and jazz.\nClassical knows what\u0026rsquo;s coming next. They play as a well-oiled machine that is in tune and in sync. They can start and stop on a dime as directed. Jazz flows, bouncing to new places and riffing off the previous stanza. Sometimes beautiful and sometimes verging on horrid, jazz keeps time but not beat. 
Classical practices specific pieces and jazz practices generally.","title":"Living Without Playing Jazz"},{"content":"While I won\u0026rsquo;t go so far as to diagnose myself with ADD without consulting a doctor, those who know me know I can be distracted quite easily. So recently, I decided to do something a little different with a personal project I started a few weeks ago (link to come).\nI wrote unit tests.\nThat may seem rather uninspired, but I\u0026rsquo;ve always poo-poohed the idea of test-driven development (TDD), as it seems wild to me to write a test for something when you don\u0026rsquo;t yet know what it needs to do. It works great when there\u0026rsquo;s a well-defined spec or an existing hole that you need to fit. TDD shines when refactoring a codebase. Treat it as a black box, document and validate the behavior, and start rebuilding the inside.\nHowever, when you\u0026rsquo;re cobbling together a project from scratch, iterating and hacking away until something decent works, writing test code to throw it all away seems like a waste.\nWhile not traditional TDD or likely not a new concept, I\u0026rsquo;ve done something I\u0026rsquo;ve dubbed TLD (Test Led Development). Rather than writing a whole smattering of tests and then coding until they all pass, TLD focuses on ping-ponging between test and code. First, you write the function header/prototype and then go and write a quick test that exercises it reasonably well. Then code the function, ensure the test case passes, and split it if needed.\nThis approach had an unexpected advantage.\nI started ending a coding session after writing a test and leaving the implementation for the next time. Later, I would sit down at my desk and try to decide what I wanted to do. Often I would do surface-level tweaks and little changes on a few projects without making significant progress on anything.
When you haven\u0026rsquo;t worked on a project in a little while, it can be hard to remember what you were planning to do next. A TODO document or project tracker may help page that context back in, but I haven\u0026rsquo;t found a system that triggers that recall I\u0026rsquo;m looking for.\nThe broken unit test stuck out like a sore thumb. I would tell myself to fix the unit tests and then go back to looking at other projects. By implementing the function, I could remember more about the codebase and where I wanted to go next. Time flew by in a blissful flow as I implemented new features, and the number of unit tests climbed higher.\nFind a bug? Write a test that exercises that path. Then bang on it until all tests pass.\nIt might go against your engineering sensibilities, but for personal projects, I\u0026rsquo;d encourage you to check in code with a single failing unit test.\n","permalink":"https://matthewc.dev/musings/unit-tests/","summary":"While I won\u0026rsquo;t go so far as to diagnose myself with ADD without consulting a doctor, those who know me know I can be distracted quite easily. So recently, I decided to do something a little different with a personal project I started a few weeks ago (link to come).\nI wrote unit tests.\nThat may seem rather uninspired, but I\u0026rsquo;ve always poo-poohed the idea of test-driven development (TDD), as it seems wild to me to write a test for something when you don\u0026rsquo;t yet know what it needs to do.","title":"Fighting Distraction With Unit Tests"},{"content":"I find the 20\u0026rsquo;s to the 50\u0026rsquo;s a very fascinating time in American history. Perhaps because it is so very different from the life we have today, but still feels relatively close.
Maybe even more so because the 20\u0026rsquo;s and 30\u0026rsquo;s were so radically different from just 50 years prior.\nSo in pursuit of learning more about the history of the cinema and to understand more about what life was like by watching what they watched, I\u0026rsquo;m going to watch the top grossing/most popular movie from each year. Most popular is determined by rotten tomatoes and for a few years I\u0026rsquo;ll pick two movies.\nMany of the movies in the early years are in the public domain (archive.org and Library Of Congress are good resources). Here\u0026rsquo;s the list.\n1899 Cripple Creek Bar-Room Scene 1902 Trip to the moon 1903 Alice in wonderland 1903 The great train robbery 1904 The impossible voyage 1906 The Motorist 1908 The Dreyfus Affair 1909 A corner in wheat 1909 The Devilish Tenant 1910 Frankenstein 1911 Winsor McCay\u0026rsquo;s Moving Comics 1911 Little Nemo 1912 The Invaders 1913 Traffic in Souls 1914 Million Dollar Mystery 1914 Thou Shalt Not Kill/The Avenging Conscience 1915 The birth of a nation 1916 20,000 leagues under the sea 1917 The Poor Little Rich Girl 1918 Tarzan of the apes 1918 Mickey 1919 South 1919 Daddy Long Legs 1920 Over the hill to the poorhouse 1920 The mark of zorro 1921 The Sheik 1921 The Four Horsemen of the Apocalypse 1922 Robin Hood 1923 The Covered Wagon 1924 The Last Laugh 1924 Der Mude Tod (Destiny) 1925 The Gold Rush 1925 The Big Parade 1926 What Price Glory 1927 Wings 1927 the Jazz singer 1927 Sunrise: song of two humans 1928 The Singing fool 1928 Street Angel 1929 The broadway melody 1929 Un Chien Andalou 1930 Tom Sawyer 1930 All quiet on the western front 1931 City Lights 1931 Frankenstein 1932 The Old Dark House 1933 King Kong 1933 Duck Soup 1934 It happened one night 1935 Top Hat 1935 Mutiny on the Bounty 1936 How to become a detective 1937 Snow White and the seven dwarfs 1938 Bringing up baby 1939 Gone with the wind 1939 Wizard of Oz 1939 Mr.
Smith goes to washington 1939 Stagecoach 1939 Wuthering Heights 1940 Pinocchio 1940 Grapes of Wrath 1940 Fantasia 1941 Sergeant York 1941 Citizen Kane 1942 Bambi 1942 Casablanca 1943 Watch on the Rhine 1943 This is the army 1944 Going my way 1944 Double Indemnity 1945 The bells of St. Mary’s 1945 The Lost Weekend 1946 Song of the south 1946 It’s a wonderful life 1947 Miracle on 34th street 1947 Gentleman’s agreement 1948 Treasure of Sierra Madre 1948 The Snake Pit 1949 Samson and Delilah 1949 Third man 1950 Cinderella 1950 Sunset boulevard 1951 Quo Vadis 1951 The Day the earth stood still 1951 Streetcar named desire 1951 American in Paris 1951 African Queen 1952 The greatest show on earth 1952 Singin\u0026rsquo; in the rain 1952 High Noon 1953 From here to Eternity 1953 Peter Pan 1954 on the waterfront 1954 White Christmas 1955 rebel without a cause 1955 Lady and the tramp 1956 The Ten Commandments 1956 invasion of the body snatchers 1957 Bridge on the River Kwai 1957 paths of glory 1958 South Pacific 1958 vertigo 1959 Ben-Hur 1959 Some like it hot 1959 North by Northwest 1960 Spartacus 1960 psycho 1960 The apartment 1961 101 Dalmatians 1961 West side story 1961 breakfast at tiffany’s 1962 Lawrence of Arabia 1962 To kill a mockingbird 1963 Cleopatra 1963 the great escape 1964 Goldfinger 1964 Dr.
Strangelove 1964 My Fair Lady 1965 Sound of music 1966 the Bible 1966 Good, bad, ugly 1967 The Jungle book 1967 The Graduate 1968 2001 Space odyssey 1969 Butch Cassidy and the Sundance kid 1969 Midnight Cowboy 1970 Love story 1970 M*A*S*H 1971 Diamonds are forever 1971 Clockwork Orange 1971 Dirty Harry 1972 The Godfather 1973 The exorcist 1974 Towering Inferno 1974 Chinatown 1975 Jaws 1976 Rocky 1976 Taxi Driver 1977 Star Wars A new hope 1977 Close Encounters of the third kind 1978 Grease 1978 Halloween 1979 Moonraker 1979 Apocalypse Now 1979 Alien 1980 Star Wars Empire strikes back 1980 The Shining 1981 Raiders of lost ark 1982 ET 1982 Blade Runner 1983 Star Wars return of Jedi 1984 Indiana jones temple of doom 1984 Amadeus 1984 Terminator 1985 Back to the future 1986 Top gun 1986 Stand by me 1987 Fatal attraction 1987 Princess Bride 1988 Rain man 1988 Die Hard 1989 Indiana jones last crusade 1989 Glory 1990 Ghost 1990 Dances with Wolves 1990 Goodfellas 1991 Terminator 2 1991 Silence of the lambs 1992 Aladdin 1992 Unforgiven 1993 Jurassic Park 1994 Lion king 1994 Pulp Fiction 1995 Die hard: with a vengeance 1995 Toy Story 1996 Independence Day 1996 Fargo 1997 Titanic 1998 Armageddon 1998 Saving Private Ryan 1999 Star Wars I 1999 The Matrix 2000 Mission impossible 2 2000 Gladiator 2001 Harry Potter 1 2001 Training Day 2002 Lord of rings 2 towers 2002 The Pianist 2003 Lord of rings 3 2004 Shrek 2 2004 Kill Bill vol 2 2005 Harry Potter 4 2005 Brokeback mountain 2006 Pirates dead man’s chest 2006 Pan’s Labyrinth 2007 Pirates at world’s end 2007 There will be blood 2008 Dark knight 2009 Avatar 2010 Inception 2011 Harry potter deathly hallows part 2 2012 Avengers 2012 Skyfall 2013 12 years a slave 2014 Guardians of the galaxy 2015 Star wars 7 2015 Mad max: fury road 2016 Capt America civil war 2016 Moonlight 2017 Star Wars 8 2017 Get out 2018 Spiderman into the spiderverse 2019 Parasite 2021 Spider man: no way home 2022 Top gun maverick It\u0026rsquo;s several days of movies.
Even just one movie a week is a few years. I might start writing about watching these movies and sort of review what I think and why they\u0026rsquo;re culturally significant.\nIf I\u0026rsquo;m missing any movies, please let me know.\nPhoto by Ricky Turner on Unsplash\n","permalink":"https://matthewc.dev/projects/movie-history-tour/","summary":"I find the 20\u0026rsquo;s to the 50\u0026rsquo;s a very fascinating time in American history. Perhaps because it is so very different from the life we have today, but still feels relatively close. Maybe even more so because the 20\u0026rsquo;s and 30\u0026rsquo;s were so radically different from just 50 years prior.\nSo in pursuit of learning more about the history of the cinema and to understand more about what life was like by watching what they watched, I\u0026rsquo;m going to watch the top grossing/most popular movie from each year.","title":"Watching The Greatest Hits From The Last 120+ Years"},{"content":"Surveys by Common Sense Media and the Consumer Mobility Report both state a large majority of Americans sleep next to their phone or with their phone in hand. According to a 2019 Statista survey, 44% of participants reported feeling some anxiety when separated from their phone (not the same thing as the smartphone itself causing anxiety).\nThis isn\u0026rsquo;t a public decrying of how we are hopelessly addicted to our phones. This is to take a look at what you are doing with your phone and be consciously aware of what you are getting from it. In many ways, a smartphone is an adult pacifier.\nA pacifier is a little thing you stick in a baby\u0026rsquo;s mouth to calm them. It soothes the need to suck, even though the baby doesn\u0026rsquo;t need to eat. It is called a binky, wookie, dummy, soother, or dodie. It dates from as early as 1473, back when it was more of a teething toy, which slowly evolved into a rag with food wrapped inside (often sugar, fat, or meat and occasionally soaked in brandy).
In 1900, Christian Meinecke invented the modern version we see today with a handle, mouth guard, and rubber nipple.\nSmartphones in many ways provide the same function. Feeling uncomfortable in a social situation? Pull out the phone and text a friend who isn\u0026rsquo;t there to complain about how boring it is. The Journal of Consumer Research published a study in 2020 to investigate this effect. Interestingly, the same activities were more comforting when done on a smartphone compared to a laptop. The study mentions the reassuring presence of the smartphone. In moments of stress, they found people tend to seek out their smartphone as a refuge or soothing mechanism. It provides dopamine, connection to friends we care about, and can easily drown out the thing we don\u0026rsquo;t want to deal with at the moment.\nJane Halonen and Richard Passman studied 48 one-year-olds put in a standardized playroom. Sometimes they had their mother, their pacifier, or nothing familiar at all. The researchers then looked at how the babies played before starting distressful behaviors. They found that pacifiers allowed the baby to play longer than not having anything familiar, but not as long as their mother being nearby. To apply the analogy to the smartphone, having a soothing mechanism allows you to do things that you wouldn\u0026rsquo;t do normally. Think back to the example of the party: are there parties you wouldn\u0026rsquo;t go to without your smartphone? You know you have a fallback in your back pocket if things get awkward or no one you know is there. Perhaps your smartphone helps you to de-stress on the tube ride home from the office.\nI am not a therapist by any stretch. But I believe we all have coping mechanisms and to some degree they work, otherwise we wouldn\u0026rsquo;t have them. What is more important, in my opinion, is to be aware of the mechanism so that you can evaluate its benefits and costs.
The last few weeks, I\u0026rsquo;ve been reading a book or watching a blender tutorial on my phone as I\u0026rsquo;m falling asleep. It started as just running out of time in the day and having something I wanted to watch. It evolved into something to entertain myself while I became tired.\nPreviously I used to just lie down and I would fall asleep within 5 or 10 minutes. Wanting to get back to that, I started leaving my phone on the charger in the bathroom and found I felt somewhat lost and unsettled when just lying down. After just a few weeks, I was used to having some sort of thing to entertain myself as I fell asleep and it got me wondering. Not whether this was a good or bad thing, but rather was this a thing I wanted? Did I like the benefit it provided? Was I okay with the costs?\nUltimately, I decided that the iPhone was too interesting to fall asleep effectively but I liked reading (if I\u0026rsquo;m reading a boring book), so I\u0026rsquo;m going to get a kindle and keep up the habit of reading while falling asleep. It\u0026rsquo;s an interesting way to think about our phones and our relationship with them.\n[Photo by charlesdeluvio on Unsplash] [Photo by Zeesy Grossbaum on Unsplash] Photo by Eddy Billard on Unsplash\n","permalink":"https://matthewc.dev/musings/adult-pacifier/","summary":"Surveys by Common Sense Media and the Consumer Mobility Report both state a large majority of Americans sleep next to their phone or with their phone in hand. According to a 2019 Statista survey, 44% of participants reported feeling some anxiety when separated from their phone (not the same thing as the smartphone itself causing anxiety).\nThis isn\u0026rsquo;t a public decrying of how we are hopelessly addicted to our phones.","title":"The Adult Pacifier"},{"content":"I have lots of laptops and one desktop. While jumping around the country, it\u0026rsquo;s annoying to have to set up a dev environment on every machine.
There are solutions out there like docker, github codespaces, or renting a VM from one of the dozens of cloud providers.\nBut I already have a great desktop with everything I want already set up. I work on tons of different projects (most don\u0026rsquo;t see enough work to warrant a writeup) and it\u0026rsquo;s annoying to have to provision a different environment for each. Also show me an environment that can do Blender, KiCad, Web Dev, AI workloads (GPT-2 finetuning/Stable Diffusion), and writing all in a tidy package for a reasonable monthly price.\nThe Setup In a nutshell, I wake the server in my office remotely and then connect to it with a low-latency app called Parsec.\nIt\u0026rsquo;s geared towards gaming but it can be used for whatever (and supports tablet pen pressure for Blender, though I think you have to upgrade to Warp).\nBut to be accessible, the machine needs to be online. The solution is to use wake on magic packet. It causes me a lot of pain in my work life so I figure I should use it for something in my personal life. To send a magic packet, you basically just broadcast a specially formatted packet and the network hardware sees it and asserts a wake interrupt. ACPI or the like passes that up the chain and suddenly you\u0026rsquo;re up and running again.\nHowever, my desktop sits behind my network and I\u0026rsquo;d like to think that randos out on the internet can\u0026rsquo;t send arbitrary packets at my machine. So I need to tell another machine on my network to send the packet for me.\nIn comes homebridge. Specifically the homebridge-wol plugin by Alex Gustafsson.\nWith a bit of configuration, I just open the Home app and turn the desktop on, wait a few seconds and it shows up in Parsec.\nIt works? So far it\u0026rsquo;s been incredibly reliable.\nThere has only been one time that it didn\u0026rsquo;t work, and that was because we briefly lost power while I was out of town, so the machine was off.
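As an aside, the magic packet itself is simple enough to build by hand: six 0xFF bytes followed by the target MAC address repeated 16 times, broadcast over UDP. Here is a rough Node/TypeScript sketch of a sender (my own illustration of the format, not the homebridge-wol plugin\u0026rsquo;s actual code):

```typescript
import * as dgram from 'dgram';

// Build the 102-byte wake-on-LAN payload: 6 x 0xFF, then the MAC 16 times.
function buildMagicPacket(mac: string): Buffer {
  const macBytes = Buffer.from(mac.split(':').map((part) => parseInt(part, 16)));
  const packet = Buffer.alloc(6 + 16 * 6, 0xff); // pre-filled, so the header is all 0xFF
  for (let i = 0; i < 16; i++) {
    macBytes.copy(packet, 6 + i * 6); // overwrite each 6-byte slot with the MAC
  }
  return packet;
}

// Send it as a UDP broadcast; port 9 (discard) is the usual convention.
function sendMagicPacket(mac: string, broadcast = '192.168.1.255'): void {
  const socket = dgram.createSocket('udp4');
  socket.bind(() => {
    socket.setBroadcast(true);
    socket.send(buildMagicPacket(mac), 9, broadcast, () => socket.close());
  });
}

// sendMagicPacket('aa:bb:cc:00:11:22'); // hypothetical MAC, not my desktop
```

The NIC pattern-matches that payload even while the rest of the machine sleeps, which is why nothing smarter than a dumb broadcast is needed.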
A quick change in the UEFI will make sure that on power loss, the machine will turn back on when it\u0026rsquo;s connected to power.\nI was in LA over the weekend and in my Airbnb with a 12 Mbps connection (though it was relatively stable), I was able to connect and enjoy some programming and a bit of light gaming with a friend (Valheim if you\u0026rsquo;re really curious).\nAlternatives I think I could set up a working setup with tailscale and VS Code\u0026rsquo;s SSH remote working extension, but the advantage of Parsec is that it supports Blender and gaming. So the current setup is probably the best choice.\n","permalink":"https://matthewc.dev/musings/ode-to-parsec/","summary":"I have lots of laptops and one desktop. While jumping around the country, it\u0026rsquo;s annoying to have to set up a dev environment on every machine. There are solutions out there like docker, github codespaces, or renting a VM from one of the dozens of cloud providers.\nBut I already have a great desktop with everything I want already set up. I work on tons of different projects (most don\u0026rsquo;t see enough work to warrant a writeup) and it\u0026rsquo;s annoying to have to provision a different environment for each.","title":"I Love Parsec + HomeKit"},{"content":"Since I hate sites that make you read a whole story to get the recipes, out of the five different ice creams, here were the two highest-ranking ones. I\u0026rsquo;d recommend reading more about these recipes.\nSilk Ice Cream 1/2 cup granulated sugar 1/4 tsp xanthan gum 2 tbsp light corn syrup 1 1/3 cups + 1 tbsp of Silk Heavy Whipping Cream Alternative 1 cup - 1 tbsp of Oat Milk (you can substitute other milks, but do your math to recalculate the fat, see notes below) Combine sugar and xanthan gum Put corn syrup in medium pot and stir in oat milk. Add sugar mixture and whisk until smooth. Set pot over medium heat and cook, stirring to prevent a simmer, until sugar is dissolved (3 minutes). Remove pot from heat.
Add in cream alternative and whisk until combined. Let the base cool, transfer to airtight container and refrigerate at least 6 hours, but preferably 24 hours. It will last two weeks in the fridge and 3 months in freezer. Makes about 3 cups.\nSalt \u0026amp; Straw Coconut base 1/2 cup unsweetened shredded Coconut 1/2 cup light brown sugar (lightly packed) 1/4 cup granulated sugar 1/2 tsp xanthan gum 3/4 cup light corn syrup 2 1/2 cups unsweetened coconut cream (Aroy-D and boxed, not canned) Heat oven to 300F Sprinkle shredded coconut in even layer on sheet pan \u0026amp; bake, shaking occasionally about 5 minutes until coconut is even dark amber in color Meanwhile, stir brown and granulated sugar together with xanthan gum. Combine toasted coconut, corn syrup, and 1 cup water in small saucepan. Add in sugar and whisk until smooth. Set pan over medium heat and cook, stirring often and adjusting heat to prevent a simmer, until sugar is dissolved (3 minutes). Remove pan from heat and stir in coconut cream. Let the base cool, transfer to airtight container and refrigerate at least 6 hours, but preferably 24 hours. It will last two weeks in the fridge and 3 months in freezer. Strain before using. It will be very thick; use a rubber spatula to force it through a fine mesh strainer or sieve. Makes about 4 cups.\nHow To Use These Recipes Neither of these recipes has salt or vanilla extract. They are bases that can be used to make any flavor and I expect you to tweak the salt in it accordingly. If you\u0026rsquo;re going for vanilla, just add about 2 tsp of homemade vanilla extract or vanilla bean paste. If you\u0026rsquo;re adding in more sweetness, you\u0026rsquo;ll need more salt to pare things back. The second recipe will be slightly sweeter than the first and have a much stronger coconut taste.
The first will not really have much of an aftertaste and you can get creative mixing cashew milk into the oat milk (make sure to mind the fats).\nThe first one is slightly tweaked as the fat in whole milk and HEB original oat milk are not the same. Salt \u0026amp; Straw\u0026rsquo;s normal base has 2 2/3 cups of liquid and 116.6g of fat in total (106 + 10.6). In our vegan version we have the same amount of liquid, but we add in an extra tbsp of the heavy stuff to offset the missing 5 grams of fat.\nThere are four things you really need to mind when making ice cream:\nSugar Fat Water Salt There are calculators and resources out there to balance these things. But these factors affect texture, hardness, softness, and taste. Things can get really complicated if you add in other factors, like the fact that lactose is a sugar but isn\u0026rsquo;t as sweet gram-per-gram as granulated sugar. Salt and alcohol depress the freezing point of the mixture, which changes the curve of the ice cream temperature. This is a whole rabbit hole that I\u0026rsquo;ve peered down and sort of guessed at. Feel free to read up on some great resources to help wrap your head around it.\nThe Taste Test 6 testers. 5 ice creams. 2 store-bought brands.\nAll ice creams were cookies and creme with gluten-free oreos.\nUltimately the two store bought brands were by far better than the lowest performer (which was universally hated). The lowest was just coconut milk, corn syrup, and corn starch.\nThe two store bought ones were the HEB store brand and a brand called Bellefontaine. Reading the comments, many people noted that the store bought brands tended to be sweeter and not be as hard. All the homemade ice creams froze very hard (likely due to them having less sugar).\nWhy Make Vegan Ice Cream? I\u0026rsquo;m not vegan. I just think vegan things are really interesting. In some ways, making regular ice cream isn\u0026rsquo;t as exciting because it\u0026rsquo;s going to be good every time.
It\u0026rsquo;s like playing a video game where you can\u0026rsquo;t lose. Great for relaxing, but not very stimulating.\nWhy not challenge yourself?\nLooking Forward I want to revisit this in the future and tweak a few things. Make smaller batches. Add in a vegan milk powder. Etc.\nSo perhaps there will be a follow-up to this article.\n","permalink":"https://matthewc.dev/projects/vegan-ice-cream/","summary":"Since I hate sites that make you read a whole story to get the recipes, out of the five different ice creams, here were the two highest-ranking ones. I\u0026rsquo;d recommend reading more about these recipes.\nSilk Ice Cream 1/2 cup granulated sugar 1/4 tsp xanthan gum 2 tbsp light corn syrup 1 1/3 cups + 1 tbsp of Silk Heavy Whipping Cream Alternative 1 cup - 1 tbsp of Oat Milk (you can substitute other milks, but do your math to recalculate the fat, see notes below) Combine sugar and xanthan gum Put corn syrup in medium pot and stir in oat milk.","title":"Finding The Best Homemade Vegan Ice Cream"},{"content":"I have a friend who claims food delivery is too expensive and would just go and pick it up themselves. To me, if I\u0026rsquo;m hot and tired from working on some house thing, I love just hitting a button and hopping in the shower knowing some kind soul will deliver food to my door by the time I\u0026rsquo;m out. Ethical concerns about food delivery companies aside (I try to tip well), it\u0026rsquo;s something I value more than my friend does. On the flip side, my friend owns multiple Nintendo Switches.\nMultiple.\nI\u0026rsquo;m not here to put my friend down by saying you shouldn\u0026rsquo;t collect video game consoles, but my point is that we value things differently and we should take that into account financially.\nSome of the best financial advice I\u0026rsquo;ve read is to be selectively rich.\nDecide what things are important to you. A good rule of thumb would be things you\u0026rsquo;re already willing to spend more money on.
The next trick after identifying what things you care about is to aggressively limit the number of things you choose to be rich with.\nPerhaps it is eating out. Perhaps it is home improvements. Or collecting video game consoles.\nThe key takeaway is that you cannot be rich all the time. But choosing to be poor all the time to maximize savings isn\u0026rsquo;t a very fun way to live (if you do that, more power to you).\nChoose two things in your life that you want to spend less on and one thing that you want to spend more on. Will your overall financial outcome change? Probably not by much.\nBut if you do it right, your life will feel all that much richer.\nImage from Mathieu Stern on Unsplash\n","permalink":"https://matthewc.dev/musings/selective-rich/","summary":"I have a friend who claims food delivery is too expensive and would just go and pick it up themselves. To me, if I\u0026rsquo;m hot and tired from working on some house thing, I love just hitting a button and hopping in the shower knowing some kind soul will deliver food to my door by the time I\u0026rsquo;m out. Ethical concerns about food delivery companies aside (I try to tip well), it\u0026rsquo;s something I value more than my friend does.","title":"Selectively Rich"},{"content":"We recently moved and my wife had a request. She wanted something tall to change the visual weight of the room. I agreed with her (she has a vision and who am I to stand in her way?) and she sent me this: It\u0026rsquo;s a nice piece of furniture and solid wood. The only problem is that it\u0026rsquo;s $200 USD and backordered for 4 months. This article was written in early 2022 if you can\u0026rsquo;t tell (hopefully you\u0026rsquo;re reading this and supply chain issues are less of a worry).\nI thought, how hard could that be to make myself? Rockler sells some nice 1\u0026quot; (25mm) rods that I could buy and cut. Except they only had one 1\u0026quot; rod but plenty of 3/4\u0026quot; (19mm) rods. So 3/4\u0026quot; it was.
Looking back, I wish I had ordered/waited for the 1\u0026quot; ones as they would have looked better. Strength-wise, the 3/4\u0026quot; ones are just fine.\nI found some old wood in the garage that was nice and long but only 3/4\u0026quot; thick. I face jointed them and planed them until they were smooth enough to glue together. After clamping them up, I ripped it down the middle to form two squares, each a little under 1.5\u0026quot; (38mm) thick.\nSince my garage woodworking setup doesn\u0026rsquo;t have a drill press (yet), I got out my square and did my best to just drill with my 3/4\u0026quot; bit with a piece of tape to mark the depth. The holes weren\u0026rsquo;t perfect but they worked alright. If I were to do this again, I\u0026rsquo;d invest in a drill press.\nCutting the rods involved a little math. Using paint.net I was able to guess that the angle was around 2 degrees, so that\u0026rsquo;s what I went for. I measured the holes I drilled and then did some geometry. Out of all the high school subjects I hated, geometry has been the most useful. Though I hated the geometric proofs and have never needed them.\nSince I wanted the outside rails to be round, I rounded the edges that had the holes in them as I wouldn\u0026rsquo;t be able to round them once it was glued up. A massive 3/4\u0026quot; roundover bit did the trick but made a huge mess. I left the outside edges not round as I wanted to have two nice flat surfaces for glue up.\n[image: rounding_test.jpg] Then it was time for glue up. I cut little blocks at 2 degrees to make sure the angles were correct. Getting it straight was also a challenge and I feel like it has a slight tilt to it. Next time I\u0026rsquo;ll cut a 2 degree angle in a long piece of wood and have that as a reference edge in clamping on both sides. I was adjusting as the glue dried and ran out of time as it got harder and harder to make small adjustments.
After it dried, it was time to test it against the wall.\nThe final step was rounding the two outside edges and finishing. Since it was poplar on the outside, I wasn\u0026rsquo;t sure how well it would take the stain. I grabbed my test piece from earlier and wiped some stain on to see what it would look like. I was debating painting it, but wanted to see what it would look like.\nUltimately I didn\u0026rsquo;t love the look of the stained poplar and oak together, so I decided to go for a two-toned approach with paint.\n[image: stain_test.jpg] Painting I first sprayed it with some primer to get good adhesion.\nThen I painted several coats on the outside with some enamel paint I had laying around for a nice hard finish. I sealed the rungs with some general purpose sealer.\nOverall I think it turned out really good.\n","permalink":"https://matthewc.dev/projects/blanket-ladder/","summary":"A custom ladder that stores blankets for our family room","title":"Blanket Ladder"},{"content":"I love a todo list. It captures my goals and desires in a semi-succinct and concrete way. There are hundreds if not thousands of todo apps and methods. But to me personally, one stands out above all the others.\nA simple 3x5 notecard.\nThis is not a how-to or a help article. This is a love letter.\nA love letter to paper.\nThe simple idea of writing down your list of things to do on a physical surface with your own hand is something I love. There\u0026rsquo;s something very physical about it that typing out a list or keeping track of tasks in a Gantt chart or burndown pile doesn\u0026rsquo;t quite match. And more importantly, it takes effort on your part to get rid of a paper todo list. It isn\u0026rsquo;t something that can be easily deleted like it never existed.\nThis isn\u0026rsquo;t me just hating on electronic todo lists. There are many fantastic apps that sync across devices and offer alerts and all sorts of customization.
And I think they\u0026rsquo;re great tools that I wish I was able to use better.\nBut when you add a reminder to your list of things to do, it disappears. Not in a literal sense but a cognitive sense. The screen quickly shows something else and in my mind the list is conceptually pushed back to the ones and zeros it really is. Even with all the alerts and noises, it is easy to forget a digital todo list.\nAdditionally, the fact that you\u0026rsquo;ve used your muscles to write out your list helps you remember the list better.\nHowever, a simple 3x5 notecard has its own share of issues. It\u0026rsquo;s hard to edit tasks or break tasks apart. Culling duplicates requires effort. And commuting ruins the whole thing if you left your todo card at home. Not to mention the physical waste it generates.\nI\u0026rsquo;ll talk about my particular todo methodology in a future musing, but for today, I just want to declare my appreciation.\nPhoto by Kelly Sikkema on Unsplash\n","permalink":"https://matthewc.dev/musings/paper-todos/","summary":"I love a todo list. It captures my goals and desires in a semi-succinct and concrete way. There are hundreds if not thousands of todo apps and methods. But to me personally, one stands out above all the others.\nA simple 3x5 notecard.\nThis is not a how-to or a help article. This is a love letter.\nA love letter to paper.\nThe simple idea of writing down your list of things to do on a physical surface with your own hand is something I love.","title":"An Ode To Paper Todos"},{"content":"I came across this little GIF of someone carving a little burger boy on imgur. As far as I can tell, the poster of this GIF isn\u0026rsquo;t the original carver. Doing a quick google doesn\u0026rsquo;t show any burger boys that are similar to this. Hopefully I\u0026rsquo;m not stealing some beloved character. It\u0026rsquo;s just a thing I thought was cool.\nReference:\nFirst Attempt The lighting is way too much and I need to add some texture onto the patty.
I also need to make his legs a little shorter. The automatic weights on the armature didn\u0026rsquo;t turn out quite right. I also realized that he has a little zig zag pattern on the sides of his shoes.\nI\u0026rsquo;d like to animate a little walk cycle where the burger bits sort of flop around, which makes an armature hard to do. I\u0026rsquo;m still sort of a blender novice so I\u0026rsquo;m slowly learning. This leans heavily on the subdivision surface (subsurf) modeling method, which gives me a soft appearance.\nThe bun at the top has seeds scattered as particles, with some weight painting to make them appear just towards the top.\nSecond Attempt TBC\nQuestion You\u0026rsquo;ve made it this far. Do you think I should post the blender files for these? I\u0026rsquo;m worried about the file sizes being too large for git. The Burgerboy file is 25 MB for example (though a lot of that is the fact that I applied all the modifiers to a duplicate of all the pieces to see the effect). If there are requests, I\u0026rsquo;ll start doing that. There\u0026rsquo;s a link at the top to suggest changes (you\u0026rsquo;ll need a github account).\n","permalink":"https://matthewc.dev/blender/burger-boy/","summary":"I came across this little GIF of someone carving a little burger boy on imgur . As far as I can tell, the poster of this GIF isn\u0026rsquo;t the original carver. A quick google doesn\u0026rsquo;t show any burger boys that are similar to this. Hopefully I\u0026rsquo;m not stealing some beloved character. It\u0026rsquo;s just a thing I thought was cool.\nReference:\nFirst Attempt The lighting is way too much and I need to add some texture onto the patty.","title":"Burger Boy"},{"content":"You might have heard the future of cars is electric. 
Here\u0026rsquo;s a shortlist from mashable that includes all the car manufacturers going electric only.\nBentley - 2030\nJaguar - 2030\nGM - 2035\nVolvo - 2030\nFord - 2026 (EU only)\nVolkswagen - 2026\nToyota - 2040\nMercedes - 2035\nThis list means they are pledging to no longer produce cars that have combustion engines. Many other auto manufacturers are pledging to sell mostly electric vehicles and sell a small number of traditional gas cars.\nHowever, going forward there are a few problems.\nCharging infrastructure\nRoads without gas taxes\nLithium supply\nGas prices without scale\nPerhaps a better solution might be moving towards plug-in hybrid electric cars rather than electric only. But first, let\u0026rsquo;s talk about some of the issues with electric only.\nCharging Electric cars draw huge amounts of power. Everyone running their A/C in the summer during a heat wave can cause rolling blackouts; now imagine everyone plugging in their car. In some ways, electric cars can help, as during high loads some cars (mainly Ford) can dump energy back into the grid, acting as a grid stabilizer. Power generation can scale; we can build more power plants, have bigger batteries , and generally scale that up. What is far harder and more expensive is to scale up power delivery. We hear all the time that here in the US our infrastructure is crumbling. While I don\u0026rsquo;t have any hard data on the state of the grid as a whole, I think we can agree that it is expensive to maintain, and every american household charging their car would put a strain on it.\nRoads Roads are incredibly expensive. So expensive that I wrote a whole piece on it, though it is a bit lengthy. We as the public don\u0026rsquo;t often think about it. In most cases, cities cannot fund roads as most of the property taxes you pay go to schools, libraries, and the state. Your city might only get a few hundred dollars a year from your home or apartment complex. 
Your share of road maintenance (depending on how much you drive) could easily reach the tens of thousands. Gas taxes are one way to capture funding for infrastructure, and we rely on them to the degree that some states have fees for electric vehicles to try and capture the taxes from what you would have spent on gas. Federal gas taxes maintain interstates and state gas taxes try to do the same for local roads and highways. Some states (such as Washington, Maine, and Nevada) are experimenting with Road Use Charges (RUCs), which assess extra taxes at registration time based on the number of miles you have driven. To some, this might be more appealing than paying a several-thousand-dollar lump sum when you buy an electric vehicle, but it isn\u0026rsquo;t ideal when many forces are pushing so hard for the public to buy more electric.\nLithium Hackaday has an excellent writeup about lithium, how it is turned into batteries, and whether we have enough, but the short answer is yes and no. We definitely have enough to give everyone on the planet a car, but no, we don\u0026rsquo;t have enough that\u0026rsquo;s easily accessible. To have enough, we\u0026rsquo;d need to perfect new technologies for extraction and recycling that currently don\u0026rsquo;t exist. However, looking at oil, as demand grew, we found new techniques that increased production and economic viability. So it is entirely possible there. Critical materials such as cobalt are a large problem. Half the world\u0026rsquo;s supply comes from the DR Congo, which is an infamously exploited area of the world. When you need to certify that your battery is likely child slavery free, maybe it is time to rethink a few things. The amount of cobalt in each battery has steadily declined and new batteries such as the LFP in the Tesla Model 3 are cobalt free, but moving the world to cobalt-free is a long journey.\nGas Prices There will be a pain point in between gas and electric. 
When half the nation has electric and the other half has gas, what happens to gas stations when half their demand dries up? What happens to gasoline producers? Some might close and some might cut back production. The artificially low prices of gas that Americans have enjoyed for decades will continue to rise as it becomes less of a commodity and more of a luxury good. Obviously that is from a large macro scale and a prediction that might not hold true. Diesel is used primarily by trucks; it usually isn\u0026rsquo;t that much more expensive than gas and is fairly readily available. Additionally, I suspect that cities will be more likely to electrify before rural communities, so the gas pump in your small town probably isn\u0026rsquo;t going anywhere anytime soon.\nBut if we continue on this path, there will come a day when a traditional ICE car will run out of gas in town and there won\u0026rsquo;t be a gas pump for miles.\nThe Case For Plugins None of these things are insurmountable obstacles in the way of our march towards electrification. They can be overcome. But let me explain how a plugin hybrid (PHEV) might alleviate some of these issues.\nFor starters, the term PHEV covers a bit of a range. It can be a traditional car with a small motor and battery for an assist around town. Or it can be a full electric car with a small gas engine as a generator.\nBetween the two I prefer the latter but we\u0026rsquo;ll get to that.\nThe basic idea is that for going around town you can use the battery and on road trips you use the gas. The average american drives around 37 miles a day, which is about the average electric-only range for PHEVs.\nWhile I\u0026rsquo;m having trouble finding good data about the distribution of miles driven on a highway vs city driving, this chart from 2009 shows that trips of 31 miles or more make up around 5% of the trips that americans take. Could you take multiple trips in one day? Yes. 
But the point I\u0026rsquo;m making is that if you just upgraded every american to a PHEV, you could cut the average american\u0026rsquo;s annual gas miles from 14,263 to just 700 (5% of 14,263 is roughly 713, or just one or two roadtrips a year). You could capture 95% of the benefit of electric cars in terms of emissions for 20% of the lithium.\nYou might point out that we are using almost the same amount of power and the electrical grid will still need augmentation. I\u0026rsquo;ll counter that you\u0026rsquo;re not charging huge batteries up, and a PHEV battery can fill up on a charger in just an hour or two versus the four or five hours that a full EV takes. This reduces the number of concurrent grid users, which reduces strain.\nGas prices are still an issue, but as there is still some need, gas might stay a commodity for longer. With an electric vehicle on roadtrips, you need to stop at superchargers, which, per mile, often cost close to or more than what gas costs , though this might change in the future.\nLocal roads are still an issue since you are buying 95% less gas, but the gas you buy to drive on highways and freeways can still go towards those roads.\nSo when you\u0026rsquo;re thinking about a new car, think about a PHEV. I got a fully electric vehicle for my last car purchase and I love it, but I think next time I might look a little more closely at PHEVs.\n","permalink":"https://matthewc.dev/musings/plugin-hybrid-future/","summary":"You might have heard the future of cars is electric. Here\u0026rsquo;s a shortlist from mashable that includes all the car manufacturers going electric only.\nBentley - 2030\nJaguar - 2030\nGM - 2035\nVolvo - 2030\nFord - 2026 (EU only)\nVolkswagen - 2026\nToyota - 2040\nMercedes - 2035\nThis list means they are pledging to no longer produce cars that have combustion engines. 
Many other auto manufacturers are pledging to sell mostly electric vehicles and sell a small number of traditional gas cars.","title":"Plugin Hybrid Future"},{"content":"You might remember the craze that was Great British Bakeoff in its heyday. There was discussion around the water cooler at work about each episode, parodies on SNL, and several spinoff series. Though I personally feel that the hype has died down somewhat over the years, as it\u0026rsquo;s been going for over 10 years.\nWhile watching the most recent season, I noticed something interesting. There were three finalists and they were all incredible bakers in their own right. Great British Bakeoff (now abbreviated as GBB) prides itself on having amateur bakers as contestants. Forgive me earlier winners, but I don\u0026rsquo;t remember the bakers in earlier seasons being as good. The differences between the finalists of season 1 (2010) and season 12 (2021) are stark in terms of quality and presentation. I\u0026rsquo;ll have to take the judges\u0026rsquo; word for it that they taste good as well.\nI think it is not too far of a stretch to say that the contestants of the twelfth season are able to produce much better looking results than those of the first season.\nIs this because the show has become more popular and more people apply?\nOr is it because the show has become such a cultural icon that more people are baking and it has raised the quality of the average baker in Britain?\nIt could be a combination of both or some other factor. It\u0026rsquo;s interesting to think of a thing becoming so popular that it affects itself.\n","permalink":"https://matthewc.dev/musings/great-british-effect/","summary":"You might remember the craze that was Great British Bakeoff in its heyday. There was discussion around the water cooler at work about each episode, parodies on SNL, and several spinoff series. 
Though I personally feel that the hype has died down somewhat over the years, as it\u0026rsquo;s been going for over 10 years.\nWhile watching the most recent season, I noticed something interesting. There were three finalists and they were all incredible bakers in their own right.","title":"Great British Effect"},{"content":"I hate passwords. Not as a user, since password management is basically solved by modern browsers and password managers. What I hate is having to deal with them as a developer. Hashing, storing, authentication, etc.\nI did a small project recently using my socket.io synced vuex state and needed a system where users could easily login. A huge disclaimer: this is just what I did for my personal project, where security isn\u0026rsquo;t critical. If a login gets stolen, it\u0026rsquo;s to a silly game that my friends and I play. The techniques described shouldn\u0026rsquo;t be used in production without some refinement. If you have ideas on how to implement this in a more secure way, definitely reach out to me!\nTLDR User creates an account with just their email or can create a temporary account. Their session lasts for a long time (I think a month). If it expires or they try to login from a different browser, they get a code to their email as a one-time password. It\u0026rsquo;s a great solution for a simple site that doesn\u0026rsquo;t get much traffic.\nCraigslist and Slack also do something similar with their magic links.\nServer Setup I\u0026rsquo;m using express as a server, so I\u0026rsquo;ll put that out there as a baseline. I\u0026rsquo;m also using TypeScript, because why would you not use it? Setting up my server, I have a controller type file that I can pass in. 
So here\u0026rsquo;s my server file:\n// SERVER CODE\nimport express from \u0026#34;express\u0026#34;;\nimport bodyParser from \u0026#34;body-parser\u0026#34;;\nimport path from \u0026#34;path\u0026#34;;\nimport http from \u0026#34;http\u0026#34;;\nimport AuthController from \u0026#39;./controllers/auth\u0026#39;;\n// configure the app and folder locations\nconst app = express();\n// Reset the database every time we start the server\nconst db = GetDB();\ndb.connect();\n// Serve static content\nconst server = http.createServer(app);\nAuthController(app, db);\napp.use(express.static(client_folder));\napp.get(\u0026#34;/api/*\u0026#34;, (req, res) =\u0026gt; {\n  res.status(404).send(\u0026#34;NOT FOUND\u0026#34;);\n})\napp.post(\u0026#34;/api/*\u0026#34;, (req, res) =\u0026gt; {\n  res.status(404).send(\u0026#34;NOT FOUND\u0026#34;);\n})\napp.get(\u0026#39;*\u0026#39;, (req, res) =\u0026gt; {\n  res.sendFile(path.resolve(client_folder, \u0026#39;index.html\u0026#39;));\n});\nserver.listen(app.get(\u0026#34;port\u0026#34;), () =\u0026gt; {\n  console.log(\n    \u0026#34;App is running at http://localhost:%d in %s mode\u0026#34;,\n    app.get(\u0026#34;port\u0026#34;),\n    app.get(\u0026#34;env\u0026#34;)\n  );\n  console.log(\u0026#34;Press CTRL-C to stop\\n\u0026#34;);\n});\nIt\u0026rsquo;s trimmed down a bit but you can get the idea of where I\u0026rsquo;m going. This is me hand trimming my code down, so don\u0026rsquo;t expect to copy and paste this and be off to the races.\nYou might notice I have something called the auth controller. 
The auth controller is where the magic happens.\nRoutes To start, there are a few routes that I\u0026rsquo;ve set up:\napi/login_temporary\napi/login_magic\napi/login\napi/logout\nHere\u0026rsquo;s how they\u0026rsquo;re set up:\nimport { Express } from \u0026#34;express\u0026#34;;\nexport default function RegisterEndPoints(app: Express, db: DataBase) {\n  app.post(ApiEndpointRoot + ApiEndpoints.LOGIN_TEMP, async (req, res) =\u0026gt; {\n    // ...\n  });\n  // magic link login\n  app.get(ApiEndpointRoot + ApiEndpoints.LOGIN_MAGIC, async (req, res) =\u0026gt; {\n    // ...\n  });\n  // Attempt to login a user\n  app.post(ApiEndpointRoot + ApiEndpoints.LOGIN, async (req, res) =\u0026gt; {\n    // ...\n  });\n  // check if we\u0026#39;re logged in\n  app.use(async (req, res, next) =\u0026gt; {\n    // ...\n  });\n  app.get(ApiEndpointRoot + ApiEndpoints.LOGOUT, (req, res) =\u0026gt; {\n    // clear the login token\n    res.clearCookie(\u0026#39;token\u0026#39;);\n    res.redirect(\u0026#34;/\u0026#34;);\n  });\n}\nI\u0026rsquo;ve taken out some of the code for brevity. In this project, the client and server are in the same repo and are built together. There is a folder called common that includes the state machine that powered the game, API endpoint definitions, and common types. Makes it really handy to make sure that the server and the client won\u0026rsquo;t get out of sync from a development standpoint, since typescript catches a lot of things. 
Doesn\u0026rsquo;t make it foolproof (browser caches can be tricky for weird bugs), but it solves a lot of problems as projects get larger.\nThere are a few helper functions, mostly around reading and writing the JWT token.\nexport function DecodeJwtToken(token: string): JwtUser | null {\n  const results = (JwtDecode(token) as any);\n  if (results == null) return null;\n  const user: JwtUser = {\n    name: results.name,\n    _id: results._id,\n    temporary: results.temporary,\n  };\n  return user;\n}\nfunction GiveToken(token_user: JwtUser, res: any, message: string, temporary?: boolean) {\n  if (temporary == undefined || temporary == null) temporary = false;\n  const expireInHours = temporary ? 24 : 10000; // about a year\n  const token = JwtSign(token_user, JWT_SECRET, { expiresIn: expireInHours + \u0026#39;h\u0026#39; });\n  res.cookie(\u0026#39;token\u0026#39;, token, { maxAge: 1000 * 60 * 60 * expireInHours, secure: true });\n  if (message != \u0026#39;\u0026#39;) {\n    res.json({ token, message });\n  }\n}\nfunction GenerateMagicCode() {\n  const magic_key_length = 25;\n  const characters = \u0026#39;ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789\u0026#39;;\n  const charactersLength = characters.length;\n  let result = Array(magic_key_length).fill(\u0026#39;\u0026#39;).map((x) =\u0026gt; characters.charAt(Math.floor(Math.random() * charactersLength))).join(\u0026#39;\u0026#39;);\n  return result;\n}\nThere are three functions: one to decode a token, one to give a token, and one to generate a magic code. The token is just stored in a browser cookie named token. In the future, it would be nice to have some sort of browser-specific fingerprint encoded in the token, or some other mechanism to prevent cookies from being stolen from the browser and used. Perhaps some sort of refresh token mechanism could be employed. Right now a session will last a very long time. 
In the future, there could be a refresh dance that isn\u0026rsquo;t often transmitted (maybe in localstorage or something).\nBack to the endpoints. First the temporary login.\nexport default function RegisterEndPoints(app: Express, db: DataBase) {\n  // ...\n  app.post(ApiEndpointRoot + ApiEndpoints.LOGIN_TEMP, async (req, res) =\u0026gt; {\n    try {\n      const new_user_data: User = {\n        email: \u0026#39;\u0026#39;,\n        name: RandomName(),\n        temporary: true,\n      }\n      let new_user = await db.userAdd(new_user_data);\n      if (new_user == null) {\n        res.status(500).send(\u0026#34;Unable to create temporary user\u0026#34;);\n        return;\n      }\n      const token_user: JwtUser = {\n        _id: new_user._id,\n        name: new_user.name,\n        temporary: true,\n      };\n      GiveToken(token_user, res, \u0026#34;Created new temp account\u0026#34;, true);\n      return;\n    } catch (e) {\n      console.error(\u0026#34;LoginUserTemp error:\u0026#34; + e);\n      res.status(500).send(\u0026#34;Not implemented\u0026#34;);\n    }\n  });\n  // ...\nBasically, we generate a new user in the database, marking them as temporary. Any account that is marked as temporary and is more than 36 hours old is cleaned out of the database. We give them a token that only lasts 24 hours and there is no way to upgrade to a permanent account.\nexport default function RegisterEndPoints(app: Express, db: DataBase) {\n  // ... 
  // Attempt to login a user\n  app.post(ApiEndpointRoot + ApiEndpoints.LOGIN, async (req, res) =\u0026gt; {\n    if (req.body[\u0026#39;email\u0026#39;] == undefined) {\n      res.status(400).send(\u0026#34;Email missing\u0026#34;);\n      return;\n    }\n    const email = req.body[\u0026#39;email\u0026#39;];\n    if (req.body[\u0026#39;email\u0026#39;] == \u0026#39;\u0026#39;) {\n      res.status(400).send(\u0026#34;Email blank\u0026#34;);\n      return;\n    }\n    const valid_email = validateEmail(email);\n    if (!valid_email) {\n      res.status(400).send(\u0026#34;Email is not valid\u0026#34;);\n      return;\n    }\n    let user = await AttemptLoginOrRegister(db, email);\n    if (user == null) {\n      res.status(400).send(\u0026#34;Unable to create new account\u0026#34;);\n      return;\n    }\n    if (user == \u0026#39;email\u0026#39;) {\n      // tell the user to check their email\n      res.send(\u0026#34;Check email\u0026#34;);\n      return;\n    }\n    const token_user: JwtUser = {\n      _id: user._id,\n      name: user.name,\n      temporary: user.temporary || false,\n    };\n    GiveToken(token_user, res, \u0026#34;created user\u0026#34;);\n  });\nHere we have a post request that expects an email inside. We validate the email (you\u0026rsquo;ll need to provide this function) and then call AttemptLoginOrRegister. If the user already exists, we return back email, which is a dumb design, but it tells us the account already exists. 
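Since validateEmail is left for you to provide, here is a minimal sketch of what such a function could look like. This is my illustration, not code from the project; it only does a loose sanity check, because the emailed magic link is the real proof of ownership:

```typescript
// Hypothetical validateEmail: a loose sanity check, not full RFC 5322 parsing.
// The magic link sent to the address is what actually verifies ownership,
// so this only needs to reject obvious garbage before we touch the database.
function validateEmail(email: string): boolean {
  const trimmed = email.trim();
  // one non-space, non-@ local part, an @, and a domain containing a dot
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(trimmed);
}
```

Exotic but valid addresses (quoted local parts, dotless domains) will fail this check, which seems like an acceptable trade-off for a hobby login flow.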
Otherwise, a new user is created.\n// Attempt to login a given email; if they already exist, send them a magic code\nasync function AttemptLoginOrRegister(db: DataBase, email: string): Promise\u0026lt;DbUser | null | \u0026#39;email\u0026#39;\u0026gt; {\n  try {\n    if (email == \u0026#39;\u0026#39;) return null;\n    // Step 1: check if the user already exists, if so return email\n    const user = await db.userFind(email, null);\n    // The user exists, set their magic code and return\n    if (user != null) {\n      const magic_code = GenerateMagicCode();\n      user.magicCode = magic_code;\n      sendMagicCodeEmail(user, magic_code);\n      console.log(\u0026#34;http://localhost:3000\u0026#34;+ApiEndpointRoot+ApiEndpoints.LOGIN_MAGIC+\u0026#34;?code=\u0026#34;+magic_code+\u0026#34;\u0026amp;id=\u0026#34;+user._id);\n      await db.userUpdate(user);\n      return \u0026#39;email\u0026#39;;\n    }\n    const name_parts = email.split(\u0026#39;@\u0026#39;);\n    const name = name_parts[0];\n    // Step 2: the user doesn\u0026#39;t exist so we need to create them\n    const new_user_data: User = {\n      email,\n      name,\n    }\n    let new_user = await db.userAdd(new_user_data);\n    if (new_user == null) return null;\n    return new_user;\n  } catch (e) {\n    console.error(\u0026#34;AttemptLoginOrRegister error:\u0026#34; + e);\n    return null;\n  }\n}\nIn a nutshell, if they try to log in, we create a magic code in the database which gets sent to their email. Otherwise, if it\u0026rsquo;s a unique email, we create a new account and sign them in. By default their username is the first part of their email. However, usernames are not unique; emails are.\nHere\u0026rsquo;s the route for logging in with a magic code.\nexport default function RegisterEndPoints(app: Express, db: DataBase) {\n  // ... 
  // magic link login\n  app.get(ApiEndpointRoot + ApiEndpoints.LOGIN_MAGIC, async (req, res) =\u0026gt; {\n    if (req.query[\u0026#39;code\u0026#39;] == undefined) {\n      res.status(400).send(\u0026#34;Code missing\u0026#34;);\n      return;\n    }\n    if (req.query[\u0026#39;id\u0026#39;] == undefined) {\n      res.status(400).send(\u0026#34;id missing\u0026#34;);\n      return;\n    }\n    const id = parseInt(req.query[\u0026#39;id\u0026#39;].toString());\n    const user = await db.userFind(null, id);\n    if (user == null) {\n      res.status(400).send(\u0026#34;user not found\u0026#34;);\n      return;\n    }\n    const magic = req.query[\u0026#39;code\u0026#39;];\n    const curr_magic = user.magicCode;\n    // erase the magic code so it can only be tried once\n    if (user.magicCode != \u0026#39;\u0026#39;) {\n      user.magicCode = \u0026#39;\u0026#39;;\n      db.userUpdate(user);\n    }\n    // check if they don\u0026#39;t have a magic code or it doesn\u0026#39;t match\n    if (curr_magic == null || curr_magic == undefined || user.temporary || curr_magic == \u0026#39;\u0026#39; || magic != curr_magic) {\n      res.status(400).send(\u0026#34;Magic code doesn\u0026#39;t match\u0026#34;);\n      return;\n    }\n    const token_user: JwtUser = {\n      _id: user._id,\n      name: user.name,\n      temporary: user.temporary || false,\n    };\n    GiveToken(token_user, res, \u0026#34;\u0026#34;);\n    res.redirect(\u0026#34;/\u0026#34;);\n    return;\n  });\nOne important thing to note: we clear the magic code whenever a login attempt for that user fails. If someone tries to replay the magic code, it shouldn\u0026rsquo;t work. This also means that emailed links are no longer valid once they\u0026rsquo;re used, but that\u0026rsquo;s a compromise I\u0026rsquo;m happy with. 
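One more note on the codes themselves: GenerateMagicCode above draws from Math.random, which is not cryptographically secure, and here the code is the only credential in flight. A sketch of a drop-in replacement using Node\u0026rsquo;s built-in crypto module (keeping the 25-character alphanumeric shape of the original):

```typescript
import { randomInt } from "crypto";

// Drop-in for GenerateMagicCode, but backed by a CSPRNG.
// crypto.randomInt returns a uniform integer in [0, max), avoiding both
// the predictability of Math.random and modulo-bias tricks.
function GenerateMagicCodeSecure(): string {
  const magic_key_length = 25;
  const characters = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789";
  let result = "";
  for (let i = 0; i < magic_key_length; i++) {
    result += characters.charAt(randomInt(characters.length));
  }
  return result;
}
```

Twenty-five characters over a 62-symbol alphabet is roughly 148 bits of entropy, which is plenty for a single-use code that gets erased on the first attempt.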
We send the user an email with a link to this endpoint via sendgrid (not sponsored, just easy to use).\nThe final part is handling decoding of the token.\nexport default function RegisterEndPoints(app: Express, db: DataBase) {\n  // ...\n  // check if we\u0026#39;re logged in\n  app.use(async (req, res, next) =\u0026gt; {\n    const path = req.path;\n    if (path == \u0026#39;/favicon.ico\u0026#39; || path.startsWith(\u0026#39;/js/\u0026#39;) || path.startsWith(\u0026#39;/img/\u0026#39;) || path.startsWith(\u0026#39;/css/\u0026#39;) || path == \u0026#39;/login\u0026#39; || path.indexOf(\u0026#39;.\u0026#39;) != -1) {\n      return next();\n    }\n    try {\n      const token = (req.cookies) ? req.cookies[\u0026#39;token\u0026#39;] : req.headers.authorization?.split(\u0026#34;Bearer \u0026#34;)[1];\n      if (!token) throw new Error(\u0026#34;No Authorization Header\u0026#34;);\n      await JwtVerify(token, JWT_SECRET);\n      res.locals.token = token;\n      const results = (JwtDecode(token) as any);\n      // TODO: check if the user actually exists?\n      res.locals.user = results;\n      return next();\n    } catch (e) {\n      //console.error(\u0026#34;Auth check\u0026#34;, e);\n    }\n    // TODO: redirect to login page if we\u0026#39;re on a page that needs it\n    if (path.startsWith(\u0026#39;/api/\u0026#39;) || path == \u0026#39;/logout\u0026#39;) {\n      return next();\n    }\n    // redirect to login\n    console.error(\u0026#34;Redirecting from \u0026#34; + req.path + \u0026#34; to /login\u0026#34;);\n    return res.redirect(\u0026#39;/login\u0026#39;);\n  });\nThat\u0026rsquo;s pretty much all there is to it. Perhaps in the future I\u0026rsquo;ll look into shortening the session and doing a refresh scheme, or lean more heavily into the OTP thing.\nIf you see anything that can be improved, let me know!\n","permalink":"https://matthewc.dev/projects/passwordless-auth/","summary":"I hate passwords. 
Not as a user, since password management is basically solved by modern browsers and password managers. What I hate is having to deal with them as a developer. Hashing, storing, authentication, etc.\nI did a small project recently using my socket.io synced vuex state and needed a system where users could easily login. A huge disclaimer: this is just what I did for my personal project, where security isn\u0026rsquo;t critical.","title":"Simple Passwordless User Authorization"},{"content":"This is the tale of how I wrote a state syncing framework based on vuex and rollback netcode . It took a few years and isn\u0026rsquo;t intended to be a \u0026ldquo;copy and paste\u0026rdquo; type of thing. I\u0026rsquo;ll be including code fragments and I\u0026rsquo;ll eventually post a cleaned up repo with only the relevant pieces. Who knows, if there\u0026rsquo;s enough interest, maybe I\u0026rsquo;ll even post an NPM package.\nTL;DR This is a long article. The short version is that I wrote a system that makes it easy to create rooms on an express.js server and have a finite state machine that is synced between all clients. The client and the server can make auditable transformations to that state with enforced checks and some hidden state. It\u0026rsquo;s based on vuex and so far I think it works awesome. As far as I can find on GitHub and Reddit, no one has done it before (at least in a satisfactory manner). There\u0026rsquo;s probably a good reason for it.\nYou would likely use a system like this if you were trying to keep shared state between a number of clients. This can apply to things like a game of Jeopardy that runs on phones, a crossword app, etc. Something where you want a significant portion of the state between clients to overlap and to have multiple sources of change.\nThe Problem First, a tangent. 
A few years ago, I started on a project known as PadGames.\nThe idea was to take the things I liked about Jackbox games and incorporate them in a format where they could be played from anywhere. If you\u0026rsquo;ve never played Jackbox games, the idea is that there\u0026rsquo;s a laptop or desktop that serves as a \u0026ldquo;game board\u0026rdquo; of sorts and you have your phone as a client that connects and acts as a controller. You can draw, type in answers, etc. Anything that isn\u0026rsquo;t too latency sensitive generally works well. Syncing state is a difficult problem and even Jackbox struggles with this. My parents have often had a broken game state, refreshed their browser, and discovered that they\u0026rsquo;ve been kicked out of the game and cannot join back in.\nIn the earliest versions of PadGames, I had simpler games such as a stock market where you could buy and sell stocks. The goal was to make the most money, and prices went up and down based on what people bought and sold in the previous turn. It was largely a teaching tool for some local youth, getting them somewhat familiar with the idea of the market as well as being fun. The game ran on express, vue, and socket.io. Even though the server and the client shared a codebase, there was a tedious serialization and de-serialization layer that I had to write two or three times. Personally, the experience was painful and I eventually quit the project since the code just became spaghetti so fast and it was getting harder and harder to track down state bugs as every game was slightly different.\nLater, I worked with Luke on netgames.io . (I say worked on, but really I just helped out, since he started the project and did the hard work of creating a fantastic framework). 
His work is closed-source, so I won\u0026rsquo;t delve too much into how it worked, but the point was that it provided a clean abstraction layer that you could put UI and game logic on top of.\nTo make a long story short, I wanted to create some new teaching resources for some volunteering work and wasn\u0026rsquo;t entirely satisfied with what was out there. Something Jackbox-like but related to the material we were covering that day. Making my own games seemed like a good solution and I had done it a few times before. But syncing state was still an issue.\nSo the problem is this: create a way for the server and the clients to share a state machine, and have it sync without any code on my part. It must be robust and resilient against network interruption and latency in a multi-peer environment. Additionally, it must handle users connecting and reconnecting.\nThe Initial Attempt I poked around the web trying to find something similar to this. I wasn\u0026rsquo;t able to find anything that quite worked or was even that similar. It needed to be fast and low-latency, so a meteor-like pub-sub system seemed like it wouldn\u0026rsquo;t work as well as I hoped (the latency seemed too high, but perhaps it has gotten better over time).\nThe first attempt at this was to create a simple class that the client and server shared. Basically, it had a state object internally and methods for modifying that state that could be listened to. The approach worked at first, but reactivity was tough. I think this approach could have worked if I had hooked it up in a more focused way and added ways to provide reactivity. In the future, I might revisit this approach.\nSecond Try The solution was to use vuex on the client and the server. Thanks to the Vue v3 composition API, it is way easier to run Vuex (sort of) on the server. 
In my testing on my laptop, a single express instance could handle 50,000 Vuex stores (though that stress test isn\u0026rsquo;t with all the clients connected, just toggling state). So by creating a vuex store with a specific format, it could easily be synced.\nThere are a few rules for writing a store:\nAll mutations must be deterministic; actions can be random.\nActions on the client side will be transmitted to the server.\nActions must have the source of the action included in the payload; it is replaced by the server when the request comes in, so the actual client sends null.\nMutations on the server are synced; any mutation with Server in the name is not synced.\nAll synced stores must implement a method known as setState which accepts the state of the object and sets the state using vuex methods so reactivity is preserved.\nAll synced stores must implement a getter that returns the hash of the current state (minus the server side information).\nAll this boiled down to three basic pieces: the client plugin, the server vuex, and the store itself.\nThe Client Plugin There are a few steps to the client plugin:\nCreate the websocket\nConnect the websocket\u0026rsquo;s defined events to the plugin and some admin things\nSet up reconnect events to request a sync\nThe plugin itself also implements something akin to rollback netcode.\nStep 1-2: Creating the websocket In my main store, which keeps track of login, the jwt cookie, the current game selected, and our user information, we have an action called checkLogin. I\u0026rsquo;m using vuex-smart-module to define my vuex stores, since it\u0026rsquo;s just so fantastic. But you\u0026rsquo;re welcome to adapt the ideas to whatever method you\u0026rsquo;re using. 
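The store rules above are easier to see in a stripped-down skeleton. This is an illustrative sketch of mine, not code from the project, and it leaves Vuex out entirely so it can stand alone: a deterministic mutation, a setState that swaps in a server snapshot, and a hash getter that skips server-only fields (the real store\u0026rsquo;s hash function and vuex-smart-module wiring will differ):

```typescript
// Toy shape of a synced store, following the rules listed above.
interface GameState {
  score: number;
  players: string[];
  serverSecret?: string; // server-only, excluded from the hash
}

// Deterministic mutation: same state + same payload always gives the same result.
// Per the rules, the payload carries the source of the change; the client sends
// null and the server fills it in.
function addPlayer(s: GameState, payload: { source: string | null; name: string }): void {
  s.players.push(payload.name);
}

// Required by the sync layer: replace local state with a server snapshot.
function setState(s: GameState, snapshot: GameState): void {
  Object.assign(s, snapshot);
}

// Hash of the client-visible state (a toy djb2 over JSON; anything stable works).
function hashState(s: GameState): number {
  const { serverSecret, ...visible } = s;
  const json = JSON.stringify(visible);
  let hash = 5381;
  for (let i = 0; i < json.length; i++) {
    hash = ((hash * 33) ^ json.charCodeAt(i)) >>> 0;
  }
  return hash;
}
```

Two clients that applied the same mutations in the same order report the same hash, which is what lets the server spot a desynced client and push a full setState.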
The checkLogin action runs when the store is initialized.
import { io, Socket } from "socket.io-client";
import { Store } from "vuex";

// Globals
type WebSocket = Socket<DefaultEventsMap, DefaultEventsMap>;
let SOCKET: null | WebSocket = null;

// ... truncated ...

class MainStoreActions extends Actions<
  MainStoreState,
  MainStoreGetters,
  MainStoreMutations,
  MainStoreActions
> {
  // Called after the module is initialized
  $init(store: Store<any>): void {
    this.actions.checkLogin();
  }

  checkLogin() {
    const state = this.state;
    if (state.loggedIn) return;
    // ... JWT parsing code truncated ...

    // Connect
    if (state.loggedIn && SOCKET == null) {
      SOCKET = io({
        auth: {
          token: this.state.jwtCookie,
        }
      });
      const self = this;
      SOCKET.on(SocketEvents.SET_GAME, (item: unknown) => {
        if (typeof (item) != "string") return;
        const game = item;
        if (game == '' && self.state.currentGame != '') {
          console.log("refreshing the page to clear state", game, self.state.currentGame);
          window.location.reload();
        }
        self.mutations.setGame(game);
      });
      SOCKET.on(SocketEvents.SERVER_MUTATION, (items: any) => {
        self.mutations.server_mutation(items);
      });
      SOCKET.on('reconnect', () => {
        self.actions.requestGameSync();
      })
    }
  }

  async requestGameSync() {
    this.actions.emit(SocketEvents.GAME_SYNC);
  }

  async emit(message: string | [string, any]) {
    if (SOCKET == null) {
      console.error("Socket isn't initialized, dropping message", message);
      return;
    }
    if (typeof (message) == 'string') {
      SOCKET.emit(message);
      return;
    }
    const [type, items] = message;
    SOCKET.emit(type, items);
  }
}
There
are three important things here. First, we listen for a SET_GAME event from the server, which reloads the page if we already had a game set; we also issue a special mutation called setGame, which is leveraged later. Second, we listen for an event called SERVER_MUTATION and then, confusingly, issue a new mutation called server_mutation; this will make sense shortly. Lastly, we tell it to request a resync packet when we reconnect. By default, the server doesn’t listen for reconnects and just responds to resync requests as needed. This might be revised in the future.
Requesting a game sync packet is just emitting the GAME_SYNC packet. Emitting is an action that leverages the socket.
Next we look at the mutations. setGame isn’t anything special; we just set the local state. server_mutation is strange, as it doesn’t do anything.
class MainStoreMutations extends Mutations<MainStoreState> {
  setGame(game: string) {
    this.state.currentGame = game;
  }
  server_mutation(data: any) {
    // the plugin will grab this
    return true;
  }
}
We pass in some data to server_mutation, but don’t use it. The reason for this is that the plugin listens to that mutation.
If you’re not familiar with the basics of vuex, I’d recommend brushing up. The short version is that there are four parts to a vuex module (as of time of writing): state, getters, mutations, and actions. State is the actual state of the store, which is reactive. It is readonly and can only be modified by mutations. Getters are a way to map the state to different forms in a reactive way. Mutations are methods that accept arguments and perform transformations on the state. Actions are like mutations, but they cannot directly modify the state.
Additionally, they can be async, so you often put HTTP calls in here.
Step 3: Onto the plugin itself
I’ve broken it up into three sections.
import { ActionExtraPayload, ActionPacket, ActionPayload, ActionSource, isActionExtraPayload, isActionSource, MutationPacket, SocketEvents } from "../../common/types";
import { Store } from "vuex";
import _ from "lodash";

// Mutation packet looks like this
/*
interface {
  type: string;
  payload: any;
  resultHash: number;
}
*/

const serverMutationChain: MutationPacket[] = [];
const localMutationChain: MutationPacket[] = [];
let resync_requested = false;

export default function clientSideSocketPlugin(store: Store<any>) {
  store.subscribe(mutation => {
    // ... mutations are tracked here ...
  })
  store.subscribeAction({
    // ... actions tracked here ...
  }, { prepend: true });
}
I’ll put a huge disclaimer here that this is not polished code; this is code pumped out at 9pm in an after-work coding frenzy. The kind of frenzy you get when you see progress being made and you keep pushing to extract whatever you can.
So you can see we subscribe to mutations and actions. Additionally, we keep track of two “chains”: the local and the server. The local chain is all the mutations that have occurred on the client, and the server chain is all the mutations that we’ve received from the server.
These chains are reset when we set the game mode or receive a setState packet from the server, as we then have a known state to start from.
Let’s see how we listen to mutations.
import { ActionExtraPayload, ActionPacket, ActionPayload, ActionSource, isActionExtraPayload, isActionSource, MutationPacket, SocketEvents } from "../../common/types";
import { Store } from "vuex";
import _ from "lodash";

const serverMutationChain: MutationPacket[] = [];
const localMutationChain: MutationPacket[] = [];
let resync_requested = false;

export default function clientSideSocketPlugin(store: Store<any>) {
  store.subscribe(mutation => {
    if (mutation.type == 'setGame') {
      // Clear our chains when the game resets
      console.log("Clearing mutation chains because game reset");
      if (serverMutationChain.length == 0 && localMutationChain.length == 0) return;
      const gameName = mutation.payload;
      if (gameName != '') store.commit(gameName + "/setState");
      serverMutationChain.splice(0, serverMutationChain.length);
      localMutationChain.splice(0, localMutationChain.length);
      return;
    }
    const currentGame = store.getters.currentGame as string;
    if (currentGame == null || currentGame.length == 0) return;
    // We know we have a game
    if (mutation.type == 'server_mutation') {
      const packet = mutation.payload as MutationPacket;
      if (packet.type.endsWith("setState")) {
        console.log("Special set state packet");
        // reset both sets of mutation chains
        resync_requested = false;
        serverMutationChain.splice(0, serverMutationChain.length);
        localMutationChain.splice(0, localMutationChain.length);
      }
      serverMutationChain.push(packet);
      // Step 1: Check if we need to apply this packet, look in our local mutation chain to see if we've already done it
      let shouldApply = false;
      let outOfSync = false;
      console.log("Got server packet " + packet.type, serverMutationChain, localMutationChain);
      if (serverMutationChain.length > localMutationChain.length) shouldApply = true;
      if (!shouldApply) {
        // Scan ahead to see if we've already done this exact commit?
        const hash = store.getters[currentGame + '/stateHash'];
        console.log("Scan ahead to see if we've already applied this packet", hash, packet.resultHash);
        outOfSync = hash != packet.resultHash;
      }
      if (!outOfSync && shouldApply) {
        console.log("server mutation", mutation, JSON.stringify(packet));
        store.commit(packet.type, packet.payload);
        // TODO: look at state hash afterwards
        const hash = store.getters[currentGame + '/stateHash'];
        console.log("StateHash", hash);
        if (hash != packet.resultHash) {
          outOfSync = true;
          console.error("Local hash = " + hash, packet.resultHash);
        }
      }
      if (outOfSync && serverMutationChain.length != 0 && serverMutationChain[0].type.endsWith("setState")) {
        console.log("Replaying server commits");
        localMutationChain.splice(0, serverMutationChain.length);
        serverMutationChain.forEach((x) => {
          store.commit(x.type, x.payload);
        });
        const current_hash = store.getters[currentGame + '/stateHash'];
        const last_hash = serverMutationChain[serverMutationChain.length - 1].resultHash;
        console.log("Replayed " + current_hash + " =?= " + last_hash);
        outOfSync = current_hash != last_hash;
      }
      if (outOfSync && !resync_requested) {
        // We're out of sync, request a reset
        resync_requested = true;
        console.error("We're becoming desynced");
        store.dispatch('requestGameSync');
      }
    } else if (mutation.type.startsWith(currentGame)) {
      const hash = store.getters[currentGame + '/stateHash'];
      // we should log all other mutations
      const packet: MutationPacket = {
        resultHash: _.clone(hash),
        type: mutation.type,
        payload: _.clone(mutation.payload)
      };
      localMutationChain.push(packet);
      console.log("local mutation", packet, hash);
    }
  })
  store.subscribeAction({
    // ... actions tracked here ...
  }, { prepend: true });
}
So first we check whether this is a setGame mutation. If so, we reset the chains and bail. If we don’t have a game currently set, we also bail. Then we check if this is a server_mutation mutation. Remember that from earlier? The wonky method that didn’t do anything? Yeah, it’s back, baby.
Next we look at the mutation that the server wants us to apply. If it’s a setState packet, we clear the chains and proceed as normal. We add the mutation to our server chain, then try to figure out whether we need to apply the packet. We do that by calculating the hash of the current state and comparing it against the hash the server says it is at. It’s important, when we write our hash function, that we don’t include server-only data in the hash. We decide we’re out of sync with the server when the hashes don’t match. This is where rollback comes in: we start from the beginning of our chain (the first entry should be a setState packet), applying each packet we have from the server, trying to make it all match.
If we have gotten to the end and we are still out of sync with the server, we request a resync packet. This means the server sends a setState packet just to us so that we can clear our chains and get to work.
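Boiled down, the replay-and-check step looks something like this. This is a simplified sketch rather than the plugin code verbatim; commit and currentHash stand in for store.commit and the store's stateHash getter:

```typescript
// A server mutation plus the state hash the server observed after applying it.
interface MutationPacket {
  type: string;
  payload: unknown;
  resultHash: number;
}

// Re-apply every server mutation from the last known-good state, then check
// whether we land on the hash the server reported for its final mutation.
function replayServerChain(
  chain: MutationPacket[],
  commit: (type: string, payload: unknown) => void,
  currentHash: () => number,
): boolean {
  for (const packet of chain) commit(packet.type, packet.payload);
  const last = chain[chain.length - 1];
  // An empty chain gives us nothing to verify against, so treat it as out of sync.
  return last !== undefined && currentHash() === last.resultHash;
}
```

If this returns false, the client flips its resync flag and asks the server for a fresh setState packet.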
There needs to be some future work to add the client mutations on top of what we have from the server, but currently the latency between a mutation occurring on the client and coming back from the server is under 200ms (not great, but enough that the likelihood of overlapping states is pretty low in most games). This will likely get a much more comprehensive overhaul in the future. Perhaps even with some unit tests?
Our games are namespaced into the main vuex module, so any mutation occurring locally on the client will be prepended with the name of the game. For example, one game is called matchup, and a common mutation we see is setPlayerButton, so the event type would be matchup/setPlayerButton. If we see any mutations that start with our current game, we log them to our local chain.
Onto listening for actions.
import { ActionExtraPayload, ActionPacket, ActionPayload, ActionSource, isActionExtraPayload, isActionSource, MutationPacket, SocketEvents } from "../../common/types";
import { Store } from "vuex";
import _ from "lodash";

const serverMutationChain: MutationPacket[] = [];
const localMutationChain: MutationPacket[] = [];
let resync_requested = false;

export default function clientSideSocketPlugin(store: Store<any>) {
  store.subscribe(mutation => {
    // ... mutations are tracked here ...
  })
  store.subscribeAction({
    after: (action, state) => {
      if (action.type.indexOf("/") == -1) return;
      let tweaked_payload = _.cloneDeep(action.payload) as ActionPayload | null;
      if (isActionSource(tweaked_payload)) {
        tweaked_payload = null;
      }
      if (isActionExtraPayload(tweaked_payload)) {
        (tweaked_payload as any).source = null;
      }
      // tell the server that we've done a thing
      const packet: ActionPacket = {
        payload: tweaked_payload,
        type: action.type
      }
      store.dispatch('emit', [SocketEvents.CLIENT_ACTION, packet]);
    }
  }, { prepend: true });
}
This is much simpler in comparison. After an action has completed, we check to make sure it was namespaced (look for the /). We have two types of payloads we can send to the server: ActionSource and ActionExtraPayload. ActionSource looks like this:
interface ActionSource {
  name: string,
  _id: number,
  socket_id: string,
  isAdmin: boolean,
}
ActionExtraPayload looks like this:
interface ActionExtraPayload {
  source: ActionSource,
}
So the payload is either the source itself, or it has a member called source. Before we send it off to the server, we set these to null. This is important because, on the server, we detect which type it is and fill it in with the information from the socket JWT auth header.
Server side
Since this is already getting long, I’ll cover the server side in part 2. Since normal vuex doesn’t run on the server, I implemented a slimmed-down version of it that supports most of the same features.
The Synced Store
I’ll cover the store itself in part 3. There’s a common store that implements things like users being added and removed. All other stores extend from that store.
Where To Go From Here?
I’m cleaning up my private GitHub repo and will publish a slimmed-down version of this with a simple trivia game, or maybe the stock market game I mentioned earlier.
I think I\u0026rsquo;ll do that in part two, with some additional work working on improving the technique.\n","permalink":"https://matthewc.dev/projects/vuex-sync-p1/","summary":"This is the tale of how I wrote a state syncing framework based on vuex and rollback netcode . It took a few years and isn\u0026rsquo;t intended to be a \u0026ldquo;copy and paste\u0026rdquo; type of thing. I\u0026rsquo;ll be including code fragments and I\u0026rsquo;ll eventually post a cleaned up repo with only the relevant pieces. Who knows, if there\u0026rsquo;s enough interest, maybe I\u0026rsquo;ll even post a NPM package.\nTL;DR This is a long article.","title":"Vuex Sync Part 1"},{"content":"Welcome to my blog I\u0026rsquo;m hoping to start posting project logs here. The focus here will be:\nProgramming/Web dev Woodworking Game development Occasionally embedded/firmware type stuff Blender Maybe even baking? It\u0026rsquo;s hard to say exactly what will be here. I\u0026rsquo;ve brought over the medium articles I\u0026rsquo;ve written. I\u0026rsquo;ve included a link to the things I\u0026rsquo;ve written on Hackaday, but I don\u0026rsquo;t think I\u0026rsquo;ll bring the actual content over here.\n","permalink":"https://matthewc.dev/musings/my-first-post/","summary":"Welcome to my blog I\u0026rsquo;m hoping to start posting project logs here. The focus here will be:\nProgramming/Web dev Woodworking Game development Occasionally embedded/firmware type stuff Blender Maybe even baking? It\u0026rsquo;s hard to say exactly what will be here. I\u0026rsquo;ve brought over the medium articles I\u0026rsquo;ve written. I\u0026rsquo;ve included a link to the things I\u0026rsquo;ve written on Hackaday, but I don\u0026rsquo;t think I\u0026rsquo;ll bring the actual content over here.","title":"Welcome to my blog"},{"content":"This is an early attempt at blender from 2021. It started with a drawing that I loved. 
I\u0026rsquo;ve poked around Google Images but so far I haven\u0026rsquo;t found the original artist.\nI quickly modeled a few things, trying to create a more 3d look rather than sticking with the more stylized look.\nI\u0026rsquo;m not sure if this was ultimately the right choice. The moon was a little tricky to get right and I haven\u0026rsquo;t nailed the look of low light. I want it to look like night time without being dark. So I still have a ways to go here. I might revisit this.\nI might even go for a non-perspective projection. Get that flat look that the original artist was going for.\nMaybe even some sort of cel-shaded look where it can only be a few different colors.\n","permalink":"https://matthewc.dev/blender/penguins/","summary":"This is an early attempt at blender from 2021. It started with a drawing that I loved. I\u0026rsquo;ve poked around Google Images but so far I haven\u0026rsquo;t found the original artist.\nI quickly modeled a few things, trying to create a more 3d look rather than sticking with the more stylized look.\nI\u0026rsquo;m not sure if this was ultimately the right choice. The moon was a little tricky to get right and I haven\u0026rsquo;t nailed the look of low light.","title":"PC Penguins"},{"content":"Editor\u0026rsquo;s Note: This was originally published on Medium. The N64 stands as a unique and marvelous console even by today’s standards. Both from a technical and a cultural standpoint, it was simply stupendous. This article showcases the history of the Nintendo 64 as well as exploring it from a technical standpoint. If you’re like me, you likely have fond memories of this console. If you’re interested in the technical bits, skip ahead to the architecture.\nTo set the stage, the N64 was first released in Japan in 1996. When you think of the 1990’s you may think of the pocket-sized Gameboy Color with Pokémon Red and Blue.
But in reality, the Gameboy Color didn’t come out until 1998, two years after the N64.
Photo by Denise Jans on Unsplash The Gameboy of the time was the impressively large and heavy original Gameboy. With a grayscale screen and loaded with 4 AA batteries, it was a brick of a gaming experience. The Gameboy Pocket (a slightly smaller version of the successful Gameboy) had come out in some markets, but didn’t see the same widespread availability as the original.
Photo by Dan Counsell on Unsplash All of this is to illustrate the sheer craziness that the Nintendo 64 represented, particularly at the sub-$300 price point. The idea of a 64-bit console with 3D graphics just under six years after the 16-bit SNES seems reasonable in hindsight but was ludicrous at the time.
Let’s dive into the system itself. It has a few peculiarities. You’ve probably noticed the wonky shape of the controller (and, if you’re reading this, you likely used it). It often felt like you needed three hands to properly take advantage of all the buttons.
You’ll notice it has an analog stick. This was the first mainstream console controller with an analog stick as its primary input mechanism. The D-Pad had been the de facto standard for over a decade, and Nintendo itself had a lot of good technology and experience with making 8-way D-Pads.
An amusing story from one LucasArts developer is that only certain members of the development team were allowed to know what the controller looked like. So the controllers had to be kept in a cardboard box with holes cut in it so you could reach inside and handle one. The common joke on the development team was that the controller was a bowl of telepathic water you stuck your hand into. But of course, you had to think in Japanese.
In addition to the trident-like shape, the control stick had sharp, raised circular ridges on its face that, if you played too hard, could leave marks on your thumb or palm.
We’ll discuss the industrial design and reasoning behind the controller later, once we start discussing the technical architecture of the console itself. First, let’s talk about the history of Nintendo, as it helps explain a lot of their decisions. This section will be pretty fast.
Nintendo was founded in 1889 as a Hanafuda (花札) manufacturer. Hanafuda are a type of playing card. After Japan closed all contact with the western world in 1633, the government outlawed the playing cards that had been introduced by the Portuguese in the mid 16th century. That was a 48-card deck with four suits, and it looked fairly similar to the 52-card deck we have today. In response to the ban, cards became disguised, often with flowers.
Creative Commons from Japanexperterna.se As the government caught on, it began to outlaw the new forms of the playing cards. Card manufacturers responded by further obfuscating the cards, which became more and more elaborate as time went on. In fact, to this day, Nintendo still manufactures Hanafuda cards themed with various video game IP that they own. The point of this introduction is that Nintendo has a history of being an underdog, taking their time, and being very protective/secretive about what they do.
The turning point for Nintendo came in 1956, when they visited the USA. The world’s largest manufacturer of playing cards at the time was headquartered there. The CEO at the time (Yamauchi) was dismayed to find the largest company in their industry headquartered in a small, dingy office above a corner store. When your largest competitor in your established industry is in a tiny office, it is a good wakeup call that it might be time to expand to other markets.
Between 1963 and 1968, they experimented. Taxis, hotels, instant noodles, and vacuum cleaners were among some of the products they tried. However, despite their efforts, they found they were only good at making toys. The 1964 Olympics in Tokyo provided a much-needed economic boom.
The market for toys was tight, competitive, and low margin. Electronic toys had higher margins and less competition. Nintendo had a habit of hiring talented electrical engineers to run their assembly and production lines, and those engineers had a habit of creating creative solutions for problems on the line.
Source: http://blog.beforemario.com/, taken in the late 1960’s at the Nintendo factory
One particular engineer designed a robotic arm as a sort of plaything. It was a clever design that made use of what was on hand. Hiroshi Yamauchi, the CEO of Nintendo, came through the factory in 1966 and saw the toy for what it was. He asked the engineer to design it in full, and it became the Ultra Hand, a huge success. The engineer, Gunpei Yokoi, went on to design the Game & Watch series and supervise Donkey Kong, Mario Bros, Metroid, and the Virtual Boy, among others. It was Yokoi who said:
“The Nintendo way of adapting technology is not to look for the state of the art but to utilize mature technology that can be mass-produced cheaply.”
Another of their first real hits was the Nintendo Beam Gun, a Duck Hunt-like game. Keep in mind, Pong wasn’t even on the market yet. Nintendo bought up old bowling lanes and made indoor shooting galleries with their light guns. This proved expensive to maintain, as it required space and staff, so they decided to focus on home consoles and arcades rather than running their own spaces. The popular Mr. Game & Watch was released in 1981.
The video game market in the USA crashed in 1983. While the exact cause is somewhat of a mystery, Nintendo largely credited it to a proliferation of sub-par games that eroded consumer trust. Negotiations with Atari to distribute Nintendo’s home console, the Famicom (or the NES, as it would later be known), had fallen apart, and Nintendo wasn’t a player in the US market. This left just Sega (another Japanese company) and Nintendo as large players in the video game industry.
Nintendo decided they would not repeat the mistake of Atari and other US-based companies, and focused on each game they released carrying a seal of quality and living up to their exacting standards. This trend continued until the later years of the Nintendo Switch, when the bar for entry was lowered somewhat.
The Nintendo 64 Now let’s talk about the N64 itself. One notable point is that the N64 was going to have a disk drive attachment (known as the N64DD). The project started back in the SNES days, when Nintendo partnered with another company, Sony, to develop the disk drive. Fairly late in the project, Nintendo pulled out for unknown reasons. Sony, understandably in a huff, decided to continue the project on their own, ultimately creating the PlayStation. Nintendo also wanted to call it the Ultra 64, which you might still see in chip names (NUS, or Nintendo Ultra Sixty-four). Konami, however, held the trademark on several Ultra-branded games (Ultra Football, Ultra Tennis, etc). Thinking through the ramifications, Nintendo rebranded to N64.
Leading up to the release of the N64, Nintendo really went on the hype circuit. At the time, a company known as Silicon Graphics Inc (SGI) was known as a graphical powerhouse. For eight years (1995–2002), all the films nominated for an academy award for visual effects had their effects created on SGI systems. You might think of them as the NVIDIA of their day.
A SGI Onyx system, used for N64 development, retailed for around $100,000–250,000 in early 1995
Nintendo was marketing the full power of an SGI system in a home console form factor at a home console price. This wasn’t helped by the fact that the demos Nintendo showed off were rendered on the incredibly expensive Onyx server-class systems.
We’ve gotten used to incredible amounts of computing power being crammed into ever smaller spaces thanks to smartphones and the cloud, but to put it in perspective, this would be like Microsoft hinting that the next Xbox would have the same power as an entire Azure rack.
The Onyx systems pictured above were often what was actually used for N64 development. In fact, one game studio told a rather funny story a few years later at a gaming convention about getting a call from the FBI asking why they were buying several military-class supercomputers. Typically, such a system would be used for developing 3d models, re-topologizing them, building the code, and, because the architecture was similar enough, even running N64 simulations.
The Architecture Author’s note: This represents research from many sources on my part and there may be inaccuracies below. Feel free to drop a note with a correction and a source and I’ll fix it with a note making sure to mention you.
Image from Rodrigo Copetti
A big thanks to Rodrigo Copetti, who wrote an excellent blogpost about many of the things listed here. From here on out, it gets quite technical. Above is the main motherboard with the parts annotated for your benefit. Some of the most interesting pieces are the PIF, the Reality Co-processor, and the NEC VR4300 CPU. The design of the N64 is largely from SGI, who, according to one rumor, originally offered it to SEGA, who turned it down. Nintendo picked up the design and had a few different companies manufacture the chips. NEC, for example, manufactured the CPU on a special 0.35 µm process; the chip is a cost-reduced derivative of the more expensive MIPS R4200.
The board schematic of the N64
As noted on the diagram above, there’s an extra RAM slot, and on the console itself there is a door to access the slot. The console came with a small connector block that terminates the RAM connections.
The RAM is chained together, so if the block is removed, the system fails to enumerate all the RAM and you end up with a blank screen as the system waits for RAM that isn’t there. This is due to the rather constrained boot environment that we’ll get to later; there simply isn’t enough hardware to get past this.
an architectural layout of the Nintendo 64 (thanks to Rodrigo Copetti)
NEC VR4300 The main CPU of the Nintendo 64 is a 93.75 MHz MIPS III ISA CPU, and it was one of the highest-volume MIPS ISA based chips of the 1990’s (along with the PlayStation’s CPU). It has a five-stage pipeline with a 64-bit floating point unit as a coprocessor, but since that unit sits on the main data path inside the ALU, it can stall the integer pipeline, so it works more like the floating point units in modern processors. It had an internal 64-bit bus but only a 32-bit system bus. Most N64 games used 32-bit instructions to conserve space, as space on cartridges was expensive and 32 bits was accurate enough for most operations. It had 24 KB of L1 cache, split into 16 KB for instructions and 8 KB for data.
Even though the N64 had a UMA (unified memory architecture), the CPU didn’t have direct access to the memory. Instead, the RCP (Reality Co-processor) handled all the memory arbitration. The N64 used RDRAM, a cheaper type of RAM that offered similar performance to more expensive types of DRAM.
I mentioned that the CPU was a five-stage pipeline. For those unfamiliar with processor design, here’s a quick overview. For those who are well-versed in these things, feel free to skip to the section about Nintendo’s custom ASIC, the RCP.
Five Stage Pipelines A five-stage pipeline looks somewhat like this. These aren’t the actual stages for MIPS, but they illustrate the point: there are stages, breaking the steps involved in computing an instruction into smaller pieces.
You do this for two reasons: speed and concurrency.
By breaking it up into chunks, each part has a shorter “critical path”, which is a fancy term for the minimum amount of time it takes for all the gates to switch to a stable state. You also can execute multiple instructions at the same time: you can be fetching the operands for an add instruction while decoding a multiply instruction. The penalty you pay for these massive boosts in productivity is latency and complexity. There was a time when Intel chips went for longer and longer pipelines (in the NetBurst era, pipelines reached a staggering 31 stages). An excellent paper about optimal pipeline depths can be found here.
Adding more stages often allows for higher clock speeds, since you shorten the critical path. We’ve largely backed away from this as an industry (a modern Intel processor is 14 stages), which is partly why you’ll notice processor speeds haven’t changed that much in the past twentyish years (you could also argue that clock speeds haven’t gone up due to thermal constraints, and you’d have a very valid argument).
As a further explanation of the critical path, let’s talk about the two types of logic in digital design: combinatory and sequential. Combinatory logic has no notion of the clock. You put inputs in and, at some point later, the output comes out. Below is a picture of a full adder circuit.
It adds two one-bit numbers together along with a carry in. It will output a two-bit number (carry out being the higher order bit). If you put in A = 1, B = 0, C = 1, you’ll get Sum = 0 and Carry out = 1, because 1+0+1 = 2, or b10. This is what I meant by the critical path. In this particular circuit, the critical path is from A and B to the Carry out. This is what will take the longest to come to a stable state.
It passes through the most gates (though some silicon processes optimize the number of transistors and the switching speed of different gates, so the number of gates is not always the determining factor, but it is a good rule of thumb).
the critical path (in red)
Here’s where things get tricky. A full adder capable of adding multiple bits together is chained together, with the carry out of each stage going into the carry in of the next stage. As you can imagine, the result sort of propagates through the adder, eventually, some nanoseconds later, popping out the other side. This is why you can only clock a computer so fast: at some point, the results start being wrong if you look at them too early.
A chained adder, thanks to W. C. Lin and H. Rattanasonti
That brings us to the next type of logic, sequential. Sequential logic is just combinatory logic with flip-flops (logic that can store values; think of it like a gate that can hold one value, and every time the clock ticks it saves whatever it is currently reading and starts outputting that). The diagram below illustrates the pieces of a typical pipeline (this is not the exact MIPS III pipeline, but close). You can see the flip-flops are the large vertical green bars that store state in between pipeline stages.
There’s instruction fetching, where instructions (thanks to RISC, they’re all roughly the same size) are read from memory and then passed into the next stage. Instruction decoding is where a giant lookup table processes what to do. Think of it like a giant block of ifs in a programming language, but all the ifs are evaluated in parallel. The arguments are also fetched from the registers. The execute phase is where the operations happen. This is also where the floating point co-processor lived. Adds, multiplies, and branches are evaluated and then sent on to Memory Access, which varies from processor to processor, but you can think of it as the stage where anything that needs to use memory does so.
This means one-cycle and two-cycle instructions can both terminate at the same time, so the complexity goes down a bit. Write back is just the results being put back into the registers. A great resource is the Wikipedia article on the classical RISC pipeline.\nOne more interesting note about the Nintendo 64 CPU specifically is that it has delay slots. As you can imagine, if you get a jump or a branch instruction, you don’t know if you’ll branch or where you’ll jump until a few stages in. A more modern processor has branch prediction and lots of invalidation techniques for its speculative execution. It’ll predict whether the branch will be taken or where the jump will go. If it predicts wrong, it can invalidate the work that it did, stall the pipeline, and start fresh. The five-stage pipeline is short, and we know within a cycle where we’re going to jump to or which branch we’ll take. This does mean that we have an extra cycle where we don’t know yet. So on the N64, the instruction after a branch or jump is always executed. This position is known as a delay slot. For example,\njal dest\naddi t0,t0,1\nThe add immediate will execute after the jump. The N64 assembler (like many MIPS assemblers) has an option to insert NOPs after each branch and jump-and-link. The delay slot also offers the opportunity to use a cycle that would otherwise be wasted.\nThere’s more that could be said about processor design and pipelines, such as data hazards and loop-back lines, but perhaps I’ll cover that in a future article.\nAs mentioned before, the CPU doesn’t have direct access to memory and can’t do DMA. It didn’t have memory pre-fetch either, so this was the biggest performance bottleneck of the N64. Later analysis by Nintendo showed that the console spent 50% of its time simply waiting for memory.
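To make the delay-slot behavior above concrete, here is a toy interpreter (an invented mini-ISA for illustration, nothing like real MIPS encodings) where the instruction sitting right after a taken branch still executes:

```python
# Toy model of a branch delay slot. Instructions are tuples:
# ("addi", reg, imm) or ("beq", reg, value, target_index).
# Assumes every branch has a following instruction (its delay slot).
def run(program, regs):
    pc = 0
    while pc < len(program):
        op = program[pc]
        if op[0] == "addi":
            regs[op[1]] += op[2]
            pc += 1
        elif op[0] == "beq":
            taken = regs[op[1]] == op[2]
            # The delay-slot instruction executes unconditionally.
            slot = program[pc + 1]
            if slot[0] == "addi":
                regs[slot[1]] += slot[2]
            pc = op[3] if taken else pc + 2
    return regs

regs = run([
    ("beq", "t0", 0, 3),   # branch taken (t0 == 0)...
    ("addi", "t1", 1),     # ...but the delay slot still runs
    ("addi", "t2", 1),     # skipped by the branch
    ("addi", "t3", 1),     # branch target
], {"t0": 0, "t1": 0, "t2": 0, "t3": 0})
print(regs)  # {'t0': 0, 't1': 1, 't2': 0, 't3': 1}
```

Note that t1 still gets incremented even though the branch jumps over it; that is the cycle the assembler either wastes with a NOP or fills with useful work.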
There was DMA on the unit, but that was controlled by the RCP and was difficult for the CPU to manage.\nRCP — Reality Co-processor The CPU and RCP of the NUS (Nintendo Ultra SixtyFour)\nThis is the custom ASIC that has a lot of SGI magic in it, as well as all the glue logic that Nintendo would normally put in discrete chips. The RCP has three parts: the RSP (Reality Signal Processor), the RDP (Reality Display Processor), and the RAM controller. The RCP has a 9-bit bus to memory and can address an extra half megabyte of RAM that the CPU can’t.\nThe RCP was the secret sauce of the N64. It was a whole separate processor with tons of specialized circuitry. This was a clear advantage over the competition, as it allowed developers to offload work onto it. However, it could also be tricky. Do you do audio on the CPU or the RCP? You can adjust the graphics on the fly and put new microcode on it if you really desire (I don’t think any game did this). How do you handle that?\nModern GPUs have actual geometry processors: you can pass them points and they’ll apply matrix operations and transformations. Most of the consoles of the N64’s generation were just simple raster systems. You passed them triangle data and they rendered it on the screen with no transformations. You had to do all the perspective calculations, culling, and z-depth ordering on your CPU, eating into your precious cycles. So there was a tradeoff between framerate and the number of things on the screen at one time.\nIn terms of function, the RDP did per-pixel operations and the RSP did vertex and geometry calculations. You can think of it like vertex and pixel shaders. Sort of.\nThe RCP ran at 62.5 MHz and had logic for talking to game cartridges, driving timing across the console, and interfacing with audio, serial, video, and other peripherals. Most, if not all, I/O of the N64 is routed through here. Below is an annotated decapped chip.\nYou can see on the left is the signal processor.
It’s laid out much more like a CPU, as it’s got lots of discrete sections. It’s got a huge vector unit up on the top left and lots of logic in the core. The RSP was actually another MIPS CPU with extra opcodes for the vector operations. Additionally, the RSP could do audio work like MP3 decoding, MIDI processing, or wavetable lookup.\nThe RDP is less CPU-like and more focused on processing. It’s not laid out like a CPU; it’s laid out more like a GPU, though modern GPUs are more like thousands of tiny processors that share state. The RDP was much more focused on rasterizing than on rendering.\nYou can see here there’s separate memory for data (DMEM) and instructions (IMEM). Both are memory mapped into the CPU’s address space: DMEM at 0xA4000000 and IMEM at 0xA4001000. Registers are also memory mapped, starting at 0xA4040000.\nThe RCP has an internal bus known as the XBUS that allows the RDP and the RSP to communicate. The microcode in the RSP defines the method for transferring data to the RDP, and three methods were supported: FIFO, XBUS, and DRAM. FIFO was a queue in RDRAM that the RSP wrote into and the RDP read from. XBUS uses the bus inside the RCP and passes messages directly to the RDP via an internal queue inside the RDP. DRAM uses the RDRAM exclusively and allows the CPU to move commands over to RDP RAM.\nRSP — Reality Signal Processor The RSP is actually a completely separate RISC processor with an 8-way, 16-bit vector unit. The RSP handled geometric transforms, clipping, culling, lighting calculations, and occasionally audio. Since it’s a bespoke microcontroller, you need to boot it up.
Inside the N64 SDK, five different microcodes were offered with different levels of functionality.\ngspFast3D — the most full-featured, including shading and fog (used in Mario 64)\ngspF3DNon — same as Fast3D but without near-clipping\ngspLine2D — does not render triangles, giving a wireframe effect\ngspSprite2D — efficient for 2D sprite images\ngspTurbo3D — faster than Fast3D but reduced precision\nThese microcodes were written by Yoshitaka Yasumoto, a developer for Nintendo. Several other microcodes were developed over the course of the N64’s lifetime, but due to SGI’s earlier experiences releasing developer tools for their proprietary tech, SGI was very reluctant to release any sort of debugger or documentation for the RCP. It was eventually reverse engineered, and a few games shipped their own microcode, but the vast majority used one of the defaults. Indiana Jones and the Infernal Machine, Star Wars: Rogue Squadron, and Star Wars: Battle for Naboo all used a custom microcode to push the console hard and output at 640x480 rather than the much more common 320x240. All three of those games were produced by the same game studio, Factor 5. Over the course of the N64’s lifetime, several more microcodes were released, such as Fast3DEX (Mario Kart 64), Fast3DEX2, and Fast3DZEX (Zelda extended).\nThe main processor communicated with the RSP by putting 64-bit words into the shared memory space; the RSP read them in and executed them according to the microcode loaded. You can think of it sort of like OpenGL , where there are operations you can call. Perspective projection, clipping, and lighting are just some of them.\nRSPBOOT was a short piece of code used to boot the RSP. It was 208 bytes by default and loaded the microcode. It was loaded into IMEM by the bootstrapping process of the N64, and the microcode needs to fit in IMEM’s 4 KB (1,024 instructions).
This means you need to set the initial registers, get things into a state where code can run, and load in the next section of microcode, all within 1,024 instructions. A typical N64 game project includes this:\ninclude “codesegment.o”\ninclude “$(ROOT)/usr/lib/PR/rspboot.o”\nThis just includes a bit of data to specify a data segment and then the boot blob. The boot flow of the main processor typically looks like this:\nInitialize the N64 CPU CP0 registers\nInitialize the RCP (halt the RSP, reset the PI, blank video, stop audio). This is where rspboot.o is loaded into the RCP.\nInitialize RDRAM and the CPU caches\nLoad 1 MB of the game from ROM to RDRAM at physical address 0x00000400\nClear the RCP status\nJump to the game code\nExecute the game preamble code (which is similar to crt0.o and is linked into the game during the makerom process), which clears the BSS for the boot segment (as defined in the spec file), sets up the boot segment stack pointer, and jumps to the boot entry routine\nThe boot entry routine should call osInitialize()\nRDP — Reality Display Processor The RDP is, at its core, a rasterizer. It receives commands via the XBUS or the RBUS, which means both the main processor and the RSP can issue commands. The RDP doesn’t technically include the I/O interfaces, but it’s connected to the internal XBUS so the RSP can talk to the audio and video interfaces.\nInterestingly enough, the RDP could handle triangles or rectangles and had 9-bit bytes (thanks to that extra bit on the RAM bus), which it used to store depth. It used the I/O interface to DMA the output video image to a section of memory that the video encoder could see. That was then sent out over the VBUS to a DAC. The audio worked similarly.\nYou’ll notice that the texture memory unit is only 4 KB. That’s tiny even by the standards of when the N64 was made, and this is by far the largest technical challenge of the N64.
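To put that 4 KB in perspective, here’s a quick budget calculation: the largest square texture that fits at a given texel size. This is a rough sketch that ignores TMEM’s real alignment and banking rules:

```python
import math

TMEM_BYTES = 4096  # the RDP's entire texture memory

def max_square_texture(bits_per_texel, budget=TMEM_BYTES):
    """Side length of the largest square texture that fits the budget."""
    texels = (budget * 8) // bits_per_texel
    return math.isqrt(texels)

print(max_square_texture(32))  # 32: a 32x32 texture at 32 bits per texel
print(max_square_texture(16))  # 45: 16-bit color
print(max_square_texture(4))   # 90: 4-bit color-indexed
```

Even in the stingiest 4-bit indexed format, you top out around 90x90 pixels, which is why texture reuse mattered so much.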
To do mipmapping (where you also keep smaller copies of each texture for objects farther away), you effectively have 2 KB of texture. For context, the image below (at 798 by 599 px) is 609.9 KB as a compressed PNG. 4 KB also means about a 37 by 37 px image as an uncompressed BMP at 8 bits per color.\nFor context, that’s about this size:\n37 by 37 image\nIt’s funny reading articles from the early 2000s about the development of games for the N64 (for example, Indiana Jones and the Infernal Machine). The developers mention the 4K textures, and you start to wonder how they’re fitting 4K textures, but then you realize: no, that’s KB, not 4096x2160. Many games had to get ridiculously clever with texture reuse. Many games, such as Mario 64, elected to use simple Gouraud-shaded colors rather than textures. Since the N64 was fill-rate limited instead of geometry limited, many games also elected to represent some items as sprites instead of full geometry. You’ll notice in the scene below that the red Bob-omb characters are actually just rectangles.\nThe most the N64 could output was 24-bit color at 640 by 480, but generally games chose a more conservative 320x240, as this conserved resources.\nAudio As you saw on the original motherboard diagram, there is an audio DAC. How does a game get the data from the cartridge all the way to the DAC and out through the outputs in the back of the console? You simply encode the samples as discrete voltages, but remember, the Nyquist theorem means you need a sample rate of 2x the highest frequency you want to reproduce. So having a 16-bit sample for each of two channels means 32 bits per sample, at 14 kHz. 60 seconds of audio is over 3 MB, which is often the whole size of your cartridge. A CD could easily hold 600 MB, which means you don’t need to encode or compress it. Just play it.
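Spelling out that arithmetic (using the 14 kHz figure above; CD audio at 44.1 kHz would be correspondingly bigger):

```python
# Uncompressed audio size: seconds * sample_rate * channels * bytes/sample.
def audio_bytes(seconds, sample_rate=14_000, channels=2, bytes_per_sample=2):
    return seconds * sample_rate * channels * bytes_per_sample

one_minute = audio_bytes(60)
print(one_minute)              # 3360000 bytes for one minute of 16-bit stereo
print(one_minute / 1_000_000)  # 3.36 MB, a big chunk of a small cartridge
```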
Eventually cartridges got up to 64 MB, but it took a few years.\nThis made it tricky to port games over to the N64 from the PS1, since the music often needed serious compression or had to be stripped altogether. Unlike its predecessors and other consoles of earlier generations, the N64 doesn’t have a dedicated audio chip. The SNES and NES have dedicated chips that could be configured (producing the music known as chiptunes). On the N64, you can choose to play audio on the main CPU or on the RSP (with the right microcode loaded). In fact, this is rumored to be the reason a few popular PS1 games remained exclusive: the directors weren’t happy about having to strip out the beautiful music they had created.\nWithin these size constraints, the RSP can play ADPCM (compressed sample) data or MIDI data. Many games opted to create their own MIDI synth with custom samples. Other games generated the music at runtime.\nSomewhat mysteriously, it is music that is largely credited with the Nintendo 64’s place behind the PlayStation. The makers of many incredible games, such as Metal Gear Solid, reportedly were unwilling to port them to the Nintendo 64, as the resulting compression rendered the music unlistenable and it would need to be cut. Some PlayStation games leveraged the fact that you could swap discs while playing and continue the game. Doing this on the N64 would have been prohibitively expensive, as one cartridge was expensive enough.\nCIC and PIF There are two more important chips, and only one of them is in the N64 itself. The PIF is the Peripheral Interface. It talks to the CIC (Checking Integrated Circuit) on the cartridge and acts as the largest source of security for the N64. It has a 2 KB IPL (initial program loader) which talks to the cartridge and does region-lock and anti-piracy verification. It then loads the next IPL from the cartridge, and this is how the main CPU is booted, which then bootstraps the RSP.
In the cartridge below you can see the main cartridge memory chip on the right and the CIC on the left.\nAn excellent talk on reverse engineering the CIC was done by Mike Ryan, marshallh, and John McMaster, and it can be viewed here . The NES and the SNES both included CIC chips, though much simpler than the one used in the N64, as Nintendo improved the design each generation. In fact, the N64 CIC wasn’t cracked until 2016. Below is the decapped chip with the PIF on the right and the CIC on the left. You’ll notice they both share similar SM5 cores (in the blue boxes), which allows them to compute hashes and make sure they match as they communicate over an SPI-like bus.\nThere are ten different versions of the CIC, five for PAL and five for NTSC.\nSDK Like the Xbox 360 and several other consoles, the N64 did have a rudimentary operating system. But rather than being a system that loaded the game, the operating system was built into the game code via the SDK.\nThe main CPU had threads and the RCP had tasks (since it was structured much more like an RTOS). There’s plenty of information on an unofficial website that lays out a lot of the Nintendo documentation; it can be found here .\nEmulation Now you might be saying “wow, this all sounds really tricky to emulate.” And yes, you’re right. It is tricky. So most N64 emulators cheat. They hash the RSPBOOT block and the accompanying microcode and match it against a native C library that will process the RSP commands from the main CPU. This does mean that if there’s an obscure game out there with its own custom microcode, the emulator might not work. I believe emulators just load a best-guess microcode and hope for the best, but that varies emulator by emulator.\nIt also doesn’t have to be cycle accurate.
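The hash-and-dispatch cheat described above can be sketched in a few lines. Everything here (the blob contents, the handler functions, the choice of SHA-1) is invented for illustration; real emulators maintain much larger microcode databases and far more involved native implementations:

```python
import hashlib

# Hypothetical native (high-level emulation) handlers for known microcodes.
def hle_fast3d(commands):
    return f"fast3d: {len(commands)} commands"

def hle_turbo3d(commands):
    return f"turbo3d: {len(commands)} commands"

# Table mapping microcode hashes to native implementations.
KNOWN_MICROCODE = {
    hashlib.sha1(b"fast3d microcode blob").hexdigest(): hle_fast3d,
    hashlib.sha1(b"turbo3d microcode blob").hexdigest(): hle_turbo3d,
}

def dispatch(microcode_blob, commands, fallback=hle_fast3d):
    """Hash the uploaded microcode and pick a native handler, falling
    back to a best guess for unknown (custom) microcode."""
    digest = hashlib.sha1(microcode_blob).hexdigest()
    handler = KNOWN_MICROCODE.get(digest, fallback)
    return handler(commands)

print(dispatch(b"fast3d microcode blob", [1, 2, 3]))  # fast3d: 3 commands
print(dispatch(b"custom game microcode", [1, 2]))     # falls back to the guess
```

The fallback branch is exactly the “loads a best guess and hopes” behavior: an unrecognized custom microcode gets interpreted by whichever known handler the emulator picks.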
Thanks to the extra cost of 64-bit operations, games used them sparingly, if at all, which makes the N64 easier to emulate on 32-bit systems (which aren’t as common these days).\nThere are some emulators that focus on cycle accuracy, but I think that is largely talking about cycle accuracy of the main CPU rather than perfectly recreating the RCP. Granted, that also involves making sure the timing on the RCP is correct, which is a daunting task to say the least.\nThe Technical Impact of the N64 The impact of the N64 can still be felt today. Designers of the RCP went on to form ArtX, which was acquired by ATI, itself later bought by AMD. They designed the GPU inside the GameCube as well as some of the Radeon graphics cards.\nThe GameCube swapped out the SGI internals for an IBM PowerPC, but it kept a lot of similarities in the graphics system and memory layout. It tried to address the memory issues of the N64 by using DDR RAM (which came out in 1998, two years after the N64) as well as by giving the GPU its own memory space.\nThe Legacy of the Nintendo 64 The legacy and impact of the Nintendo 64 continue to this day. People like myself write these articles. People like you read them. I personally have fond memories of the Nintendo 64, as it was the first video game console allowed in the house growing up. I had played the NES and SNES at friends’ houses, but my mom had a fairly strict no-video-games policy. However, my dad won a Nintendo 64 at a technical conference as a prize, and she couldn’t quite bring herself to say no.\nInterestingly enough, many consider the Nintendo 64 to be the era where Nintendo lost its foothold in the market. The risks taken by Nintendo, like the analog stick, the push for 3D, and the relentless focus on quality over quantity, could have left it as just another piece of plastic.
Instead they all resulted in a lasting legacy, even though the PlayStation sold more than double the number of units (largely a function of the length of time the PlayStation was on the market, but even at its peak, the Nintendo 64 was selling fewer units per week than the PlayStation).\nGoldenEye 007 is credited as the first successful first-person shooter on a console. When Sony introduced their DualShock controller for the PlayStation just a year and four days later, the innovation could be credited to Nintendo, with Sony simply wondering what would happen if you had two analog sticks, since you have two thumbs.\nSuper Mario 64 laid the groundwork for how 3D characters move and interact with the world around them. Gabe Newell, Cliff Bleszinski, and many other developers have credited Super Mario 64 as being a huge influence on them as creators.\nFire up that emulator or dust off the console and let the nostalgia wash over you. Or perhaps try and see past the old graphics and marvel at how far we’ve come in just 20 years.\nMore Resources: http://n64.icequake.net/mirror/www.jimb.de/Projects/N64TEK.htm http://n64devkit.square7.ch/n64man/ https://www.copetti.org/projects/consoles/nintendo-64/ http://n64devkit.square7.ch/ https://www.retroreversing.com/n64-hardware-architecture/ https://en.wikipedia.org/wiki/Nintendo\\_64\\_technical\\_specifications http://n64.icequake.net/doc/n64intro/kantan/step1/index.html ","permalink":"https://matthewc.dev/musings/nintendo-64/","summary":"Editor\u0026rsquo;s Note: This was originally published on medium It stands as a unique and marvelous console even by today’s standards. Both from a technical as well as a cultural standpoint, it was simply stupendous. This article showcases the history of the Nintendo 64 and explores it from a technical standpoint. If you’re like me, you likely have fond memories of this console.
If you’re interested in the technical bits, skip ahead to the architecture.","title":"Nintendo 64: Architecture and History"},{"content":"Editor’s Note: Originally posted on medium by me\nAt least, that’s the way things are going here in the USA. Our roads, as we currently handle them, are unsustainable. As cities and states are financially stretched in different and difficult ways, the incredible cost of road maintenance and construction is getting harder and harder for policy makers and government workers to manage. At the rate we are going, there will be significantly fewer roads, more traffic, higher taxes, and more potholes than either you or I could imagine in our worst nightmares.\nWe as a country have focused so much on expansion that we didn’t pause to consider how we would maintain the roads that we built. Cities, counties, and states are going to go so deep into a hole of debt that they run the risk of never being able to tax their way out of it.\nI talked to a local city council member, Michelle, since her comments and actions inspired me to write this.\n“You spend every nickel you have on infrastructure and it still decays faster than you can pay to fix it. Eventually the debt load outstrips the ability of the tax base to provide. 100% of revenue goes to service debt of the already deteriorated roads. As a city, you can’t fix the ones you have. You also can’t build more. It’s a hole so deep you can’t tax your way out. We’ll need a fundamental shift in how we live and do transportation in the US to avoid it. And I don’t think we will be able to avoid it because no one will be able to agree it’s broken.” — Michelle, City Councilor\nHow did things get this way? It starts with how roads were built in America and why. It wasn’t the car but the popularity of the bicycle that started the push for smooth, flat roads in the early 20th century.
City centers enjoyed cobblestone, railroads, and trolleys, while rural areas were dirt, mud, gravel, or dust depending on the season. Cities at the time were smaller and denser, and there were no suburbs. So rural might mean just a mile outside of the city center.\nThe common practice in the USA at the time was that the majority of rural roads would be paid for, built, and maintained by the local landowners whose land the roads were on, not the city or state. A common type of road was the macadam road, which was built by hand-chiseling various layers of rocks to specific sizes and spreading them at precise grades. As you can imagine, this was tedious and expensive, and simply out of reach for most landowners. Laying tar to seal the top layer of the macadam to give a smoother surface (known as pitch macadam) was fairly uncommon, as it was an added cost and the existing macadam roads worked just fine for horse-drawn carriages and pedestrians.\nConstruction of a macadam road, “Boonsborough Turnpike Road” between Hagerstown and Boonsboro, Maryland, 1823. Inspired by the work of John Loudon McAdam . (Painting by Carl Rakeman — This image is in the Public Domain)\nA newly popular invention, the bicycle, rose into the public eye in the 1890s, causing those who owned a shiny, expensive bicycle to push for places to ride that didn’t hurt your butt. As evident in the picture below, there was very little padding on the seat. Given that most rural roads were gravel or dirt, only those in the city were able to enjoy bicycles as a means of transportation, since the cities had paved roads. What started as local agitation by the League of American Bicyclists for better rural roads turned into a national political movement known as the Good Roads Movement that lasted from the 1870s to the 1920s. In 1893, just 28 years after the end of the Civil War, the Office of Road Inquiry (ORI) was established to work towards new rural road development.
In particular, they cited the practice in Europe of cities and counties building, maintaining, and paying for the roads. Their budget and the road technology of the time restricted their progress and scope through the 1890s and early 1900s.\nPhoto by Haut Risque on Unsplash As you may know, Henry Ford began to sell the Model T Ford in 1908. What was crucial about the Model T was both the mass-production techniques that enabled its low cost as well as the ethos that Ford himself spread: that everyone who worked at the factory should be able to afford the cars that they built. He did this by increasing the wages he paid as well as focusing on cutting costs. This was the start of the American Car Dream(tm), the idea that every American not only needs but deserves a car, that there is no freedom like the open road, and other slogans that clever marketing would ingrain in our culture over the next hundred-plus years.\nAs more Americans had access to cars, rural voters began to lobby for paved roads. World War I put the construction of roads on the backburner, but once it was over, the Federal Highway Act of 1921 renamed the ORI to the Bureau of Public Roads (BPR) and gave it the funding needed to pave two-lane interstate highways. The actual construction was done by state highway agencies, who by the 1930s were looking for jobs for their unemployed citizens. It was only when the USA entered World War II and the real possibility of fighting within the country’s borders loomed that the military started to consider how it would move troops, tanks, and equipment from one side of the country to the other. The focus of road maintenance and construction became solely the roads that would benefit the military if such a battle were to occur. Naturally, many roads were neglected and fell into disrepair.
FDR signed legislation that sought to revitalize these neglected roads in 1944, but it didn’t get the funding it needed until 1956, with the Federal-Aid Highway Act signed by Eisenhower.\nWith the construction of new roads, there began to be concerns about how the sheer quantity of new roads affected the environment, the layout of cities, and the usage of mass transit. The US Department of Transportation was established in 1966 and the BPR was renamed the Federal Highway Administration (FHWA). Over the next few decades, the interstate system welcomed 42,800 miles to its network.\nAt the same time, the dream of suburbia began to take root. The economic boom and the return of many from overseas caused a dramatic reorientation of the American ideal home. Before World War II, Americans had moved into the cities to find jobs and housing. In the postwar years, the suburban areas around cities saw dramatic growth while city centers shrank. By 1960, almost as many Americans lived in suburban areas as in city centers, and this trend has continued since. This was driven in part by the housing crunch of the 1940s and ’50s, which had several causes: virtually no new housing had been built during WWII, the GI Bill allowed returning veterans to buy homes with government-guaranteed loans and zero money down, and then there was the accompanying baby boom.\nIn 1940, only two in every five Americans owned their own homes. By 1950, it was more than half. By the mid-1960s it was over two-thirds of Americans, and that rate has roughly held to the present day. Some historians say that we will likely never see the same kind of pent-up housing demand that existed in the late 1940s.\nPhoto by Jean-Philippe Delberghe on Unsplash The market in the 1950s and ’60s responded with the application of mass-production techniques to home construction by William Levitt.
Other developers followed Levitt’s lead, buying up cheap land outside the city and building houses quickly and cheaply. There are accounts of hundreds of acres outside Lakewood, California, just 15 minutes south of LA, with cement trucks waiting in a line over a mile long to pour foundations for mass-produced homes. These were not apartments, complexes, or duplexes. The houses that developers were putting in were detached single-family houses, as they were easy to sell and cheaper on account of the assembly-line techniques perfected for building airplanes in wartime. In fact, their homogenous nature led to many criticisms of the suburban lifestyle, including the song “ Little Boxes ” by Malvina Reynolds (famously covered by Pete Seeger). This article does not seek to address the large racial injustice done towards people of color and their families when it came to moving to these affordable homes, as most of these housing developments and many government loan programs were almost exclusively for white families. However, I encourage you to learn more about it and look into what can be done today to help better integrate our society and support underprivileged or marginalized communities.\nIn just three years, the empty farmland south of Los Angeles, California was transformed into a city of 90,000 people. With this glut of housing came roads. You need cul-de-sacs. You need access roads. You need large roads to get into the city from the suburbs. Traffic lights and stop signs. Interchanges and highways with on-ramps and off-ramps. So developers built the roads for their neighborhoods, and in some cases, cities paid for these roads to incentivize the developers to build within their city bounds. We as Americans expect to be able to drive somewhere and get to anywhere we want via our car. Can you imagine moving somewhere where your car couldn’t take you to certain parts of town? Or not being able to get to certain cities?
So of course developers and cities made sure new neighborhoods had a way of getting around, connecting them to the state’s ever-growing network of roads.\nSome folks say that America is addicted to cars, and this is largely true. Like addicts, we can’t function without them. We as a people love our cars and we love the open road.\nBut this doesn’t answer the question of who pays for the roads.\nWho pays for the roads? We’ve seen the cost and burden of roads shift from private landowners to cities and states. In response to this, a gas tax was introduced in 1932 to maintain and build roads and highways. There is the federal gas tax (18.4 cents per gallon at the time of writing) as well as a state gas tax (in Washington, 37.5 cents per gallon at the time of writing). How the money gets distributed is a bit murky and changes based on policy, but in general, federal funds are used for interstates and state funds are used for state roads. In 2007, Mary Peters, then US Secretary of Transportation, stated that 60% of federal gas tax funds were used for highways and bridges. The rest went towards other transportation projects.\nAs Michelle states, cities and counties struggle to pay for the roads they have and often cover it by building more roads.\n“Basically new development on the edge doesn’t actually cover its true costs. The tax revenue it generates is less than the cost of the services and infrastructure it consumes. But everyone assumes all new growth is good growth. Existing roads are primarily maintained using development fees from new growth. That collapses when there is no more room in a given city for new growth. Also remember that the same road issue also applies to water and sewer systems as well.” — Michelle\nFor cities needing funds for roads there are programs available, but the funding is often smaller than needed, and all the cities in a region are competing for the same small slice of it.
Often a road will be 20% federal money, 50–80% city funds, and the rest bonds or other forms of debt. As with many things political, the funds are distributed based on political needs rather than physical need.\nEven with the gas tax, it isn’t enough. According to the Department of Energy, 142.17 billion gallons of finished motor gasoline were consumed in 2019. With an estimated national population of 327,167,434 as of 2018, that comes out to roughly 434 gallons per person. I realize this is a rough estimate and doesn’t factor in actual consumption, since that number is total population and includes those who do not drive. Based on current tax rates, this means a town of 90,000 people generates $14,647,500 in state taxes and $7,187,040 in federal taxes. At $14 million and $7 million, for a combined $21 million, you might be saying things are looking pretty good. That’s a lot of road you could buy with that. Not so fast.\nPhoto by Finn Gerkens on Unsplash If you were to use that city’s gas taxes at the federal and state level exclusively on just that city, you would be able to build between one and fourteen miles of road. For most of us, 14 miles is just part of a trip to get groceries, drop off a package at the post office, pop by the pharmacy, and maybe stop somewhere for a light snack. A city of 90,000 people has much more road than just 14 miles. Flagler County in Florida has about 97,000 residents (picked because it’s about 90k residents) and over 900 miles of local road (this does not count state roads or highways) as of 2018 . That’s over 50 feet of road per person. In other counties across Florida it can go as high as 148 feet per person (Hamilton), with the general trend that more densely populated counties have fewer road miles per person.\nWhy are roads so expensive? The cost of a new road ranges from a million dollars a mile for a simple two-lane road to $21+ million for a six-lane road.
While these numbers are based on estimates done by the Department of Transportation in Florida, and they’re older, it goes to show that roads are expensive. And they always have been. Think back to the early macadam roads where the stones were hand-chiseled. It was tedious labor done by a half dozen men sitting and breaking rocks for several days while other workers raked the various layers together. You might think the cost is in the labor or the equipment; after all, there’s usually several people standing around watching one person work, right? Labor and equipment cost just an estimated 0.0023% of your typical road. The materials are what are expensive. “That doesn’t make sense,” you say. “It’s just rock and gravel and tar and stuff, right?” And you’d be right. But it is a lot of rock and gravel. The average concrete road has around a foot of steel-reinforced concrete with up to a few feet of base aggregate underneath for drainage and support. One source puts steel-reinforced concrete at around 2,500 kg/cubic meter (156 lbs/cubic foot), so a 2-lane rural road with 5-foot shoulders (a total of 34 feet wide) is about 5,304 lbs per foot of length. So a mile weighs 28,005,120 lbs, or 14,002 tons. That’s just the top wearing layer. Depending on the type of underlayment, that weight can be doubled, tripled, or more.\nAs you can imagine, material in that quantity is expensive even if it is just fractions of pennies per pound. Plus, hauling that much material to a job site isn’t cheap. For asphalt-based roads, the numbers are similar; asphalt roads are slightly cheaper to construct, though they tend to wear faster in certain climates, in addition to needing to be sealed often.\nLet’s talk maintenance. Cost estimates are hard to find, but there are relatively few things that can be done to fix roads. Asphalt requires sealant. Cracks and potholes can be patched. You can grade a road by removing the top few inches of the wearing surface and repaving the top layer.
Much of what we would put under road maintenance is just ripping up small sections of the road and redoing them completely.\nContinuing our story on the history of America’s roads, we get to the housing-related recession of 2007. In addition to hitting many of America’s families hard, it also hit city councils quite hard. As most funds for cities and counties come from sales tax, vehicle taxes, and property taxes, it can be hard to keep positive cash flow when people are being foreclosed on and all the consumption-based taxes are down. As a result, many of the cities and states across the nation elected to hold off on road construction, letting roads deteriorate. In fact, several news reports suggest that the average condition of our roads is the worst it’s been in years.\nIn addition, while the rise of electric cars is a welcome change for many, the loss of gas taxes is a concerning trend for lawmakers. In fact, some lawmakers have proposed levying a lifetime gas tax that would be the rough equivalent of all the gas tax that would have been collected had it been a gas vehicle. As you can imagine, asking for 100,000 miles’ worth of gas taxes when you buy an electric vehicle ($600–2,000 depending on your state) isn’t popular with consumers or electric vehicle advocates.\nWhat can be done? There are a few possible solutions I’ll evaluate in this article. There are, of course, other solutions out there that I haven’t considered. The key point of this article is that something must be done.\n“The maintenance of the infrastructure over its lifetime must be included in the cost. Not just building it the first time. But that’s not how it’s done. Developers pay to build (sometimes often only part, or worse city pays as an incentive) and then pawn the maintenance costs onto the city in perpetuity” — Michelle\nRoad Reduction This is as simple as it sounds. We stop building new roads and focus on the ones we have.
This is difficult, as cities fund their current roads with the increase in tax base from new roads. I hope that the one thing you’ve taken away from this article is that we cannot continually add new roads; it simply isn’t sustainable without serious changes. However, this also stops us from building new homes, which in many areas (Seattle, for example) isn’t a popular option.\nNewer Materials There are two avenues of attack here: making roads that use cheaper or fewer materials, and making roads that need less maintenance or last longer. This is largely an area of research for civil engineers and material scientists. There have been papers about solar roadways, self-healing concrete, new bio-materials for roads, and other tantalizing breakthroughs that seem just a few years away. There are two challenges with any road-related improvement: scale and testing. There are over four million miles of road in the USA, and we add around 20–40 thousand miles of road to our paved total each year. That doesn’t include roads being repaired or re-laid; that is the number of new roads built year over year. The sheer scale of deploying any sort of change in such an established process is difficult, on top of convincing a city to even try a new way of paving a road.\nHyper-Localized Cities + Public Transit There have been pushes and calls to design and build new car-less cities. Just as Americans once looked to Europe for a model of road stewardship and funding, we can look again for the future of roads. Many cities in Europe have a car-free zone, though often out of necessity: the roads were not designed with cars in mind, and widening them just isn’t possible without bulldozing hundreds of years of history. Recently, towns in Europe have begun to experiment with reducing car access not by necessity but by choice.
It’s too early to tell what the long-term results will be, but so far the results from Pontevedra, a small city of 90,000 in Spain, are promising. Overall, the majority of residents are happy with the transformation: a 90% reduction in car use throughout the historic city core. I sincerely hope that this model continues to be iterated on and that the lessons from Pontevedra are applied elsewhere. However, this doesn’t solve the issue for smaller cities, suburbs, and rural areas, as it focused on creating large pedestrian-only areas and helping motorists headed into the city leave their cars outside it.\nCredit: Council of Pontevedra\nAs mentioned for Florida counties, the trend is that more densely populated parts of the state have fewer roads per person, simply because more people are using the same roads. A small trend of moving out of the suburbs and towards the city was popular among the younger generations, but as they grow up, they too are starting to move back out towards the suburbs. So we’re likely to continue to see this expansion into less dense cities and towns.\nSelf-driving cars The dream of self-driving cars is tantalizing. Individuals would no longer need to own cars that spend most of their time sitting parked. In theory, this would lead to fewer cars being needed to transport the same number of people. However, research on car-sharing platforms, which are somewhat analogous to a self-driving system, has shown an increase in traffic and in the number of cars on the roads. In addition, even if there are fewer cars, they use just as many roads as we currently do. Self-driving cars are not the panacea that we really hope them to be.\nCorporate Maintenance Amusingly, Domino’s has gotten into the news spotlight for patching potholes in roads. Many companies rely on the roads that cities build to move goods and provide services to customers in suburbs.
While it can be sobering to think that a pizza company is fixing the roads because local cities and counties are unable to, it can also be encouraging. While a future where roads are planned, built, and paid for based on the needs of businesses might not be the best for city residents, it is a model that has been followed in several South American countries by banana companies (though not by choice). A simpler solution could be requiring developers to plan for the road construction and maintenance of all new development for the next 30 years. Developers would hate this and claim it would put them out of business, but they would adapt. Of course, if a single city did this, developers would just stop building there. It needs to be a united effort.\nPaving for Pizza\nPublic Transportation As any tourist from the States can tell you, many cities in Europe have excellent train, metro, or high-speed rail systems that are often heavily used by the locals. Bus systems can similarly be quite fantastic, but they use the road infrastructure that already exists. While the decline of passenger rail in the USA is a fascinating topic (and one I intend to write about in the future), there are some stark differences between Europe and the USA. The spread of cities into suburbs, the concentration of people, the sparsely populated regions, and the cost of new construction all throw up roadblocks to switching to more integrated public transport solutions. In addition, heavy lobbying by various automotive, airline, and petroleum industries has stymied many efforts to create high-speed rail routes.\nRoad-Specific + New Taxes (before we need them) This is the most likely and probably the least dramatic course of action. Cities and other government bodies need to rethink how they pay for roads and how the current tax base can pay for the roads they have, rather than relying on new growth.
It could be something as simple as much, much higher car registration fees combined with more careful consideration of road maintenance. One suggestion that has been discussed is a toll on every road. The roads with the most usage would also get the most funding, as revenue would be proportional to usage. However, this has several problems. Americans balk at tolls, and if implemented poorly, it would be a decidedly regressive tax that would put a strain on lower-income and middle-class families who can’t afford it. Between the frustration of tolls, the backlash against the government tracking exactly where you drive your car (which can already be done by traffic cameras), and a perceived feeling of oppression, this will likely never get off the ground.\nNew forms of transportation I work on a large corporate campus, and it’s difficult to get across quickly. There are corporate shuttles run by genuinely fantastic people who are often stretched too thin. Walking to a building across campus can take 30 to 40 minutes at a brisk pace. I vividly remember the posters and flyers distributed across campus when a popular brand of rental electric scooters advertised that they would soon be available. Coworkers viewed them with a decent amount of skepticism, particularly some of the older ones with bad knees. However, as time went on, situations arose where my coworkers and I had to rely on the scooters, and they turned out to be convenient and fast. There are problems with electric scooters, and they aren’t a good fit for many people. The point is that the idea of a scooter you can unlock with the phone in your pocket, ride off on for a couple bucks, drop off at your destination, and then swap for a different one on your way back was somewhat of an impossible dream just ten years ago (the iPhone had only gotten an app store a year or two earlier). There are likely new forms of transportation that we haven’t even dreamed up yet.
The key will be to embrace them when we see them.\nPhoto by Raul De Los Santos on Unsplash What next? That’s unfortunately, and fortunately, up to us. In the case of Michelle’s town, the focus is on growing the tax base by inviting businesses to build there, so that they can afford to fix the roads with new streams of sales tax before they get too rough.\n“85% of the roads in our town are 15–20 years old. We need about $150 million to fix the 15% of old roads that are in total disrepair. Our entire non-debt payment annual budget is about $18 million. The average lifespan of a concrete roadway is 30 years. If we can’t fix our 15%, now, what are we going to do in 10 years when the 85% needs to be replaced. Right now we’re focused on trying to increase the tax base before we fall off that cliff.” — Michelle\nNot all cities can grow their tax base rapidly enough to pay for aging infrastructure. Compound that with our current economic conditions from COVID-19, and it’s a recipe for another decade of neglected road maintenance. Now is the time to act, not the time to wait.\nYes, we are a large country, and our roads are useful and do much to enable the modern life that we all enjoy. However, looking at the stats comparing the USA to other countries, it’s quite clear that we have more roads per person than anywhere else.\nWhatever we as a country, states, cities, and citizens of the USA decide to do, the key is that we decide that it’s a problem before it becomes insurmountable.
Make no mistake, this will require a fundamental shift in how we think about and pay for transportation.\nLinks for further reading City debt increasing: https://www.governing.com/topics/finance/gov-legacy-cities-bills-debt.html State debt is increasing: https://worldpopulationreview.com/state-rankings/debt-by-state Phoenix is having rough roads: https://www.azcentral.com/story/news/local/phoenix-traffic/2019/01/10/your-phoenix-street-repaved-next-year-check-here-road-maintenance-repairs/2357054002/ Frisco is in debt: https://communityimpact.com/dallas-fort-worth/frisco/government/2020/07/09/frisco-looking-to-issue-debt-for-water-sewer-road-projects/ City considers street fees: https://www.reporternews.com/story/news/2020/04/21/council-considers-temporary-street-maintenance-fee-suspension-lower-water-rates/2998459001/ Road Costs:\nEstimates per mile https://www.fdot.gov/programmanagement/Estimates/LRE/CostPerMileModels/CPMSummary.shtm Cost per mile https://www.roadbotics.com/2019/12/18/how-much-does-it-cost-to-pave-1-mile-of-road/ Roads and Debt https://www.strongtowns.org/journal/2014/9/17/roads-and-debt Who pays for road: https://uspirg.org/reports/usp/who-pays-roads https://www.bts.gov/content/public-road-and-street-mileage-united-states-type-surfacea Road materials:\nhttp://www.sciencebuzz.org/blog/asphalt-vs-concrete-potholes-aint-half-it https://pubsindex.trb.org/view.aspx?id=540297 History of Roads:\nhttps://www.thoughtco.com/history-of-american-roads-4077442 https://en.wikipedia.org/wiki/Federal\\_Highway\\_Administration https://en.wikipedia.org/wiki/History\\_of\\_road\\_transport Gas tax:\nhttps://www.eia.gov/tools/faqs/faq.php?id=23\u0026amp;t=10 http://www.tax-rates.org/washington/excise-tax#:~:text=Washington%20Gas%20Tax.%20The%20Washington%20excise%20tax%20on,is%20ranked%20%237%20out%20of%20the%2050%20states. 
https://en.wikipedia.org/wiki/Fuel\\_taxes\\_in\\_the\\_United\\_States Suburbs:\nHow suburbs changed America http://www.pbs.org/fmc/segments/progseg9.htm Suburbia https://edsitement.neh.gov/lesson-plans/building-suburbia-highways-and-housing-postwar-america Bicycles:\nhttps://monovisions.com/vintage-early-bicycles-in-the-19th-century-1850s-1890s/ ","permalink":"https://matthewc.dev/musings/no-roads-for-old-men/","summary":"Editors Note: Originally posted on medium by me\nAt least, the way things are going here in the USA. Our roads, as we currently handle them, are unsustainable. As city and states are financially stretched in different and difficult ways, the incredible cost of road maintenance and construction is getting harder and harder for policy makers and government workers to manage. At the rate we are going there will be significantly fewer roads, more traffic, higher taxes, and more pot holes than either you or I could imagine in our worse nightmares.","title":"Roads? Where we’re going, we won’t have any roads."}]