Aut inveniam viam aut faciam: I shall either find a way or make one.
Another ranking of people on GitHub dropped, and for the odd metric of 'stars from repos where a developer has merged PRs' I rank third, as of this writing, in the United States. Like all rankings, it's mostly lies, statistics, coincidence, and a reflection of GitHub's top-heavy usage.
But it's a nice metric because that is what I've focused on for the last few years: instead of trying to create some popular new framework, I've been trying to contribute more to existing projects.
So, free advice to the new open source contributor: when you hit a bug or a limitation in some project, file an issue and volunteer to fix it if you think you can.
You'll learn a lot from working in lots of projects: how they set up tests and linter rules, what their code styles are, and so on.
Don't use LLMs to do this. Using an LLM especially to write the PR description or anything like that cheats both you and them: you're missing out on the learning and experience, and they will become wary of automated contributions. Don't be lazy.
Is this still an effective way to stay in open source and do good? I think so. So, when you hit a bug, instead of working around it, patching it locally, or switching tools, try to make a way by fixing it. Treat every bug as your responsibility, because you're an active community member.
It's been three months since the last Effect devlog and I'm still incrementally adopting Effect in Val Town.
Things are going well but not spectacularly. My approval rating is a solid 'B' right now.
I'm far from the only or most important Effect user, but I'm bummed that a majority of my annoyances from October & November of 2025 are still outstanding: a drizzle bug, a Cron bug, a vitest incompatibility, and documentation improvements are all stalled. I wrote a brief fork that implemented Cron iteration in the reverse direction, Cron.prev, and Kit finished and merged it, but that took three months to get merged and has sat unreleased for another three months. The documentation for Effect.fn.Return got written by a core team member, but that PR got closed without merging. I tried writing some docs for stream interop, which never got reviewed or merged. I did get a minor documentation improvement about generators merged, with explicit coordination from the Effect team.
None of that is a dealbreaker, and I know from privately chatting with the Effect team that they prefer PRs to be coordinated, because there's so much in flight and so much to understand. But the current scenario, in which people don't know that because it isn't written anywhere and put effort into PRs that stall, isn't very good for community vibes.
Probably some of this delay is because Effect v4 has been a major focus of the Effectful team. v4 does seem exciting: a smaller, more unified, faster module is great news. We haven't migrated yet because we use some of the deprecated APIs, like Runtime, and I try to avoid using beta releases in general in production software.
Obligatory LLM discourse: my usual AI tool, Claude Code with Opus 4.6, is decent at using Effect but stumbles in the same places that the documentation is lacking, like that Effect.fn.Return type - it doesn't like to use that. I've used the LLM for roughly half of the refactors to Effect, but recently I've been finding it slower than doing things manually along with ast-grep. There are faster LLMs, but I've found quality and speed to be strongly inversely correlated, and fast models like Minimax tend to get themselves into corners faster.
Effect joys
The Duration and DateTime modules have been really nice for describing times and limits in the application. Soon, the TC39 Temporal API will do a lot of the same things that those Effect utilities can do, but it's nice to have them a little early.
We have a lot of Drizzle queries that try to fetch one record - .limit(1) and then .pipe(Effect.map((r) => r.at(0))) - so I recently created a nice little dual method as a helper. This was not especially easy to write!
```ts
import { Effect } from "effect";
import { dual } from "effect/Function";
// NotFoundError is an error class defined elsewhere in our codebase.

/**
 * For the many, many database queries where we want to take the first
 * element and return NotFoundError if it is not found.
 *
 * @example
 * db.query(...)
 *   .pipe(takeFirst('Project not found'));
 */
export const takeFirst = dual<
  (
    that: string
  ) => <T, Error, Requirements>(
    self: Effect.Effect<T[], Error, Requirements>
  ) => Effect.Effect<NonNullable<T>, Error | NotFoundError, Requirements>,
  <T, Error, Requirements>(
    self: Effect.Effect<T[], Error, Requirements>,
    that: string
  ) => Effect.Effect<NonNullable<T>, Error | NotFoundError, Requirements>
>(2, (self, message = "Not found") => {
  return Effect.flatMap(self, (value) => {
    const first = value.at(0);
    if (first !== undefined && first !== null) {
      return Effect.succeed(first);
    }
    return Effect.fail(new NotFoundError({ message }));
  });
});
```

The more methods get ported to Effect, the more I can use Effect.gen or Effect.fn to combine them, which is nice: it feels like a tipping point in many ways. This is something I've noticed LLMs are hesitant to do: they're pretty single-minded when working on a task and will happily let two Effect.runPromise statements sit on consecutive lines when they could be combined.
The friction of Effect at boundaries is still there: Val Town uses Fastify, tRPC, React Router, etc., and we have a lot of existing code, so we aren't achieving the brilliant purity that the Effect docs insist upon. Tests still don't use @effect/vitest because of missing features and its lack of vitest 4 support, so there's a lot of unwrapping of Effects there too.
Overall: it rolls on, and I'm starting to slowly introduce Effect to the tough core of the application, the part that actually runs vals and has many complex asynchronous flows. That is going fairly well: the last push was to replace our homemade implementation of Promise.withResolvers(), which also had a Bun-inspired .peek() method to synchronously get a Promise's value if there is one. Effect's Deferred was a drop-in replacement for that problem area.
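For context, here's a minimal reconstruction - my sketch, not Val Town's actual code - of what a withResolvers-plus-peek helper like that looks like:

```typescript
// A sketch of a Promise.withResolvers()-style helper with a Bun-inspired
// .peek() that synchronously returns the resolved value, if there is one.
// (This is my reconstruction of the idea, not Val Town's real implementation.)
function withPeek<T>() {
  let resolve!: (value: T) => void;
  let reject!: (reason?: unknown) => void;
  // Holds the value once resolved, so peek() can read it synchronously.
  let settled: { value: T } | undefined;
  const promise = new Promise<T>((res, rej) => {
    resolve = (value) => {
      settled ??= { value };
      res(value);
    };
    reject = rej;
  });
  return {
    promise,
    resolve,
    reject,
    // Synchronously read the value if the promise has resolved; else undefined.
    peek: (): T | undefined => settled?.value,
  };
}
```

Effect's Deferred covers both halves of this: Deferred.succeed to resolve it and the polling accessors to inspect it without awaiting.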
What's new with Placemark and open source recently:
I've been trying to 'fix what I find' in projects - small fixes in stuff like fastify-sensible and 11ty-vento. Contributing to existing projects in small ways feels good; I think it would be nice if the average longevity of projects were higher, and small contributions are what keep me interested in my older projects, too.
Lisa Charlotte Muth just coined ROOTS: "Return Old Online Things to your own Site." She's collecting all her old content and putting it all on her site.
Good idea! I share all the sentiments in that post, and hope to do the same, and also do some manual review of my old posts to fix some bitrotted iframe embeds. Inlining everything would be really nice - I want some self-hostable local playground element for interactive code examples.
The current wave of AI discourse is what I'd call "Radical AI Centrism." The gist is:
I've been tweaking my media diet recently for three goals:
I already pay for a few big newspapers and use uBlock with my browser Helium, but I still get ads on the mobile versions of news apps like the New York Times and Bloomberg. Considering how much these subscriptions cost, I think that's pretty silly.
And it's very clear that those papers haven't been writing honest coverage of some important world events.
So, here's what I'm listening/reading/watching more often:
Some technical things that I feel like I've never really figured out, that I'm trying to figure out:
Failing fast feels right and I've implemented it in a lot of places - using envsafe or something similar is a necessity on any project I work on, for example: if an application isn't properly configured, it should fail at startup instead of limping along.
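Here's a minimal sketch of that fail-fast idea without any library - requireConfig is a hypothetical helper I'm making up for illustration, not envsafe's API; in a real app you'd pass it process.env:

```typescript
// A fail-fast configuration check: validate required keys at startup and
// throw immediately if any are missing, instead of limping along.
// (requireConfig is a hypothetical helper, not envsafe's API.)
function requireConfig(
  env: Record<string, string | undefined>,
  names: string[]
): Record<string, string> {
  const missing = names.filter((name) => !env[name]);
  if (missing.length > 0) {
    // Crash at startup with a clear message listing everything missing at once.
    throw new Error(`Missing required config: ${missing.join(", ")}`);
  }
  return Object.fromEntries(names.map((name) => [name, env[name]!]));
}
```

Collecting all missing keys into one error, rather than failing on the first, saves a few restart cycles when several variables are unset.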
But should applications tolerate failed database queries in an elegant way? What about failed external services?
I think one clear line is that an application shouldn't allow internal inconsistency. For example, if you have some function that's being called with an incorrect argument type, you update the callers instead of making the function more flexible. This probably stops holding as companies grow, because eventually you can't tell a whole team to just update all their code when an API changes.
But the line keeps moving: in particular, the last two years have shown me that it's useful to have a system that can fail partially, that every single external service will fail at some point, and that you should have a plan for those failures, whether that's tolerating them or retrying.
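The retry half of that plan can be sketched as a small helper with bounded attempts and exponential backoff - withRetry here is hypothetical, not any particular library's API:

```typescript
// A bounded-retry sketch for calls to external services: retry up to
// `attempts` times with exponential backoff, then rethrow the last error.
// (withRetry is a hypothetical helper, not a real library's API.)
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 100
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Back off between attempts: baseDelayMs, 2x, 4x, ... but don't
      // sleep after the final failure.
      if (i < attempts - 1) {
        await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** i));
      }
    }
  }
  throw lastError;
}
```

In Effect this kind of policy is what Schedule expresses declaratively, which is part of the appeal of moving more code inside the Effect boundary.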
Every application that I've worked on eventually generates several 'flavors' of log messages out of stdout and stderr, and the logs stop being useful because they're filled with 'junk' like request logs.
I've tried structured JSON logs with pino, tried tslog, Betterstack, and Axiom, and never got it. We've never had a team member who really got value out of logs. I've never really gotten value out of logs. I often wonder if servers should emit logs at all - maybe we should just do telemetry and metrics?
I've changed my mind about a lot of stuff recently:
Honestly, I've been running a little low on passion for side-projects and true deep dives lately. Lots of reasons, most of which you can probably guess if you also live in the US or work in tech.
But I'm still pretty obsessed with graph layout. Basically, I love d3-force - force-directed layout - but I think it's used everywhere and isn't the right choice for everything. And graph layout is catnip to computer scientists, so there's a ton of cool research being written about alternative algorithms.

I think this graph could be better. That's a real load-bearing could, though. Mike's work on d3 is big for a reason: he focused on and solved a lot of really hard problems that others were scared of.

But on the New York subway, I see its maps and I am transfixed. Hand-drawn charts look different from what a computer can generate. Old charts are just amazing.
There are a bunch of things about beautiful charts that I appreciate:
So I've been reading some papers.
I liked A Walk On The Wild Side: a Shape-First Methodology for Orthogonal Drawings. Some of the takeaways:
But so far my favorite is HOLA: Human-Like Orthogonal Network Layout. The results look spectacular. The gist is:
Sidenote: when reading through the ogdf documentation I saw that they use earcut, made by Mapbox! Cool to see that foundational work like that is so widely adopted, and liberal open source licensing makes it possible.
HOLA was written in 2015, so I went looking for more recent work, and found Praline which has a Java implementation by the authors.
And then A Simple Pipeline for Orthogonal Graph Drawing, which cites PRALINE and HOLA as examples and has really nice output. I was hopeful that the 'simple' in that title meant it was simple to implement, which… I'm not sure. There's a Scala implementation.
It's amazing to me that some of this really cutting-edge work sits in repositories on GitHub with 6 stars (one of them mine) when they represent so much real thought and effort. Of course the real product is the paper, and the real reward is PhDs, tenure, respect of their peers, and citations. But still!
Then the same authors as "A Simple Pipeline" - Tim Hegemann and Alexander Wolff - wrote Storylines with a Protagonist, which has an online demo that implements a lot of the nice parts of subway-map drawing!
I'm having fun following along with fancy graph-drawing algorithms! Some questions that I am looking to answer next:
- TEXT for all text stuff, and citext for case-insensitive text. There is no advantage to char or varchar; avoid them.
- bytea is good.
- A created_at column that defaults to NOW(). You'll need it eventually.
- servicename_base58-check-encoded-random-bytes is good.