I just got back from spending three days and more than $1000 to travel to and attend a conference, all for my “side-project,” Eidetica.

What can I say, I’m a tech geek with free time and I had some room in my budget.

As usual, I didn’t socialize/connect/chat very much, but that wasn’t the point this time. I wanted to see and hear in person from other projects in the space.

I had already researched most of the projects on my own, but I did find a few gems that made the trip worthwhile. Additionally, seeing all these things made me more confident in my approach and plans for Eidetica. Most of these “sync engines” are thinking too small for my liking, or they’re trying too hard to productize.

If I remember I will come back through and add links to the posted talks in the future.

These are my cleaned-up notes on some of the talks; you can see the full schedule at https://schedule.2025.syncconf.dev

DialogDB

I think most people tuned out these guys’ presentation. It was late in the day and they did some weird skit, but what they said was great. I may be biased, since their README and docs read almost identically to my own, with many of the same references. From a technical standpoint it is probably the most similar thing that I’ve seen to Eidetica.

They are using Datalog for queries, which I had never heard of before. It appears that it’s just their own internal way of handling the data and conflicts. It has nice properties, but it’s sort of an unnecessary constraint in this system and I am nearly certain I can implement what they are doing here as a custom Store implementation in Eidetica. Neat to learn about though; I’ll have to dig into it more. It does seem to have very good synergies with the decentralized architecture too.
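To make the Datalog idea concrete, here is a toy sketch of the core model: facts plus recursive rules, evaluated to a fixpoint. This is only an illustration of how Datalog derives data, not DialogDB's actual engine; the `parent`/`ancestor` relation is the standard textbook example.

```python
# Toy naive Datalog evaluation: derive ancestor facts from parent facts.
# Rules: ancestor(X,Y) :- parent(X,Y).
#        ancestor(X,Z) :- parent(X,Y), ancestor(Y,Z).

def derive(facts):
    """Apply the rules repeatedly until no new facts appear (a fixpoint)."""
    parents = {f for f in facts if f[0] == "parent"}
    ancestors = {("ancestor", x, y) for (_, x, y) in parents}
    while True:
        new = {("ancestor", x, z)
               for (_, x, y) in parents
               for (_, mid, z) in ancestors
               if y == mid} - ancestors
        if not new:
            return parents | ancestors
        ancestors |= new

facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}
result = derive(facts)
# ("ancestor", "alice", "carol") is derived transitively.
```

The appealing property for sync is that evaluation is order-independent: any set of facts, merged in any order, converges to the same derived set.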

Now, the most interesting part was their use of Prolly trees to store the Database info. I was already planning to use Prolly trees for storing objects; they enable efficient data sync in instances where there is partial data duplication with other existing objects. IPFS uses them for object storage and hashing as well.

Using a variant of Prolly trees to store the actual Data though is a great idea that I had not thought of in this context. Finding places to apply those principles outside of just object storage is something I’ll need to think about some more and add to my toolkit.
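The property that makes Prolly trees attractive here is content-defined chunking: node boundaries are chosen by hashing the entries themselves, so the same sorted data always splits the same way, and an edit only disturbs nearby chunks. A minimal sketch of the boundary idea (the mask and chunk sizes are arbitrary, and a real Prolly tree recurses this into multiple levels):

```python
import hashlib

BOUNDARY_MASK = 0x07  # ~1-in-8 chance that a given key ends a chunk

def is_boundary(key: str) -> bool:
    # The boundary decision depends only on the key's own hash,
    # not on its position, so chunking is insertion-history independent.
    digest = hashlib.sha256(key.encode()).digest()
    return digest[0] & BOUNDARY_MASK == 0

def chunk(sorted_keys):
    chunks, current = [], []
    for key in sorted_keys:
        current.append(key)
        if is_boundary(key):
            chunks.append(current)
            current = []
    if current:
        chunks.append(current)
    return chunks

a = chunk([f"key{i}" for i in range(100)])
b = chunk([f"key{i}" for i in range(100) if i != 50])  # delete one entry
# Only the chunk(s) around key50 change; all others hash identically,
# so two peers only need to exchange the few differing chunks.
```

Applying this beyond object storage, as DialogDB does for the data itself, is exactly the part I want to add to my toolkit.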

Graft

I had looked into Graft before, and I liked hearing about it more; however, it fails to meet my bar for decentralization. It’s technically interesting, and funny enough, is a backwards way of learning about how SQLite stores data on disk, but the concurrent writes story seems relatively poor.

It should work great if you have a single Writer and you want clients to efficiently read from the DB at the edge. It locally hydrates only the needed data from object storage, but my guess is that it struggles in any scenario with concurrent writes. It seems to mostly require a global write lock.

Jazz

Jazz I had also looked into before, and the talk was on turning their Sync Engine into a Database, which is something I’ve been thinking about too. (See my recent post on CRDTs as databases.)

Turns out that their customers had realized it should be closer to a Database before Jazz themselves did, and had been requesting features accordingly.

The main request seemed to be for some way of having global consistency locks. It seems like users want to, at least temporarily, turn the database into a live distributed database instead of a decentralized database. I’m not sure if their requests would be satisfied by the way I have implemented Transactions, but I’ve added this idea to my backlog to consider.

Actually, I believe that simplifies to a request for Users to be able to invalidate some data themselves, which is a mild problem. In a decentralized system, if you can just write something and say “this is a global lock,” you can in theory just reject/overwrite any data branches that disagree with that assertion. Planning along those lines is why I set up power levels in my authentication scheme.
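A hypothetical sketch of that power-level arbitration, with illustrative names only (this is not Eidetica's actual API): when branches conflict, entries authored by a higher-powered key survive and lower-powered siblings are rejected.

```python
from dataclasses import dataclass

@dataclass
class Entry:
    author: str
    power: int      # higher power wins disputes; hypothetical field
    payload: dict

def resolve(branches: list[Entry]) -> list[Entry]:
    """Keep only the branches whose author has the maximum power seen."""
    top = max(e.power for e in branches)
    return [e for e in branches if e.power == top]

admin = Entry("admin", 10, {"lock": True})
user = Entry("user", 1, {"value": 42})
surviving = resolve([admin, user])
# Only the admin's branch survives; the lower-powered write is rejected,
# which is what a temporary "global lock" assertion amounts to.
```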

For the most part though, I’ve been operating under the assumption that User data must not fail to merge, and that invalidating entries because of authentication or other reasons would be the sole responsibility of Eidetica. That lets me make certain simplifying assumptions during implementation.

Letting User data fail to validly merge is something I’d been thinking about, but I’m not going to commit to it yet because there is already a workaround: if Users need to write their own “I failed to merge” logic into a custom Store (CRDT), they can also just silently drop the data.
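That workaround can be sketched as a custom merge function; the last-writer-wins shape and the names here are illustrative, not an actual Eidetica Store. Entries failing an application-level check are simply dropped instead of surfacing a merge failure.

```python
# Each value is stored as (timestamp, data); merge is last-writer-wins,
# except that entries failing the user's validity check are silently dropped.

def merge(local: dict, remote: dict, is_valid) -> dict:
    merged = dict(local)
    for key, (ts, value) in remote.items():
        if not is_valid(value):
            continue  # "failed to merge" handled by dropping the data
        if key not in merged or ts > merged[key][0]:
            merged[key] = (ts, value)
    return merged

local = {"a": (1, "old")}
remote = {"a": (2, "new"), "b": (1, "bad-data")}
out = merge(local, remote, is_valid=lambda v: v != "bad-data")
# out == {"a": (2, "new")}: "b" never enters the merged state.
```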

Ditto

I don’t have much to say about Ditto. I had seen them before, and they’re the closest product to what I am building.

I think they were also the ones who mentioned having concepts for different types of nodes: servers/sparse clients, etc. This is also in my plans.

I do recommend them (and already have) to people whose use case is syncing data in flaky network environments.

Convex

They send a deterministic TypeScript (why? why not WASM?) blob to their server to run against the database and return the results. They also track the inputs so they can cache the full query.

Eh, this looks like it’s just query-based requests. From what I understand, it’s similar to GraphQL and is something I’d already been thinking about how to do.
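The caching idea, as I understand it, reduces to: run a deterministic query, record which keys it read, and serve the cached result until one of those keys changes. A minimal sketch of that mechanism (not Convex's actual implementation):

```python
class TrackedDB:
    def __init__(self, data):
        self.data = data
        self.version = {k: 0 for k in data}  # bump on every write

    def run(self, query):
        """Run a deterministic query, recording its read set."""
        reads = set()
        def get(key):
            reads.add(key)
            return self.data[key]
        result = query(get)
        snapshot = {k: self.version[k] for k in reads}
        return result, snapshot

    def fresh(self, snapshot):
        """Cached result is valid while no read key has been rewritten."""
        return all(self.version[k] == v for k, v in snapshot.items())

    def write(self, key, value):
        self.data[key] = value
        self.version[key] = self.version.get(key, 0) + 1

db = TrackedDB({"x": 2, "y": 3, "z": 99})
result, snapshot = db.run(lambda get: get("x") + get("y"))  # result == 5
db.write("z", 0)   # untouched key: cached result still valid
assert db.fresh(snapshot)
db.write("x", 10)  # a read key changed: cache invalidated
assert not db.fresh(snapshot)
```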

For my purposes I can’t always assume that the remote server has access to the underlying data since it might be E2EE. My plan was to update my Sync code to support a set of requests for which Entries to pull: “All children of X,” “The most recent YY Entries for Z Store,” etc. Better requests based on the data may also be something I consider later.
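Those pull requests could be expressed as tagged request types that a server can answer from metadata alone, without ever reading the (possibly E2EE) payloads. The names and index shape below are illustrative only, not Eidetica's wire format:

```python
from dataclasses import dataclass

@dataclass
class ChildrenOf:
    entry_id: str       # "All children of X"

@dataclass
class RecentEntries:
    store: str          # "The most recent n Entries for Z Store"
    n: int

def serve(request, index):
    """Answer from a metadata index; entry payloads stay opaque."""
    if isinstance(request, ChildrenOf):
        return index["children"].get(request.entry_id, [])
    if isinstance(request, RecentEntries):
        return index["by_store"].get(request.store, [])[-request.n:]

index = {
    "children": {"root": ["e1", "e2"]},
    "by_store": {"notes": ["e1", "e2", "e3"]},
}
serve(ChildrenOf("root"), index)         # ["e1", "e2"]
serve(RecentEntries("notes", 2), index)  # ["e2", "e3"]
```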

Notion Offline

Notion built their own CRDT to store their block data and allow for offline editing. Makes sense, and it’s good to know. I enjoyed their talk describing how they did it; it was probably very useful for people who had not seen how CRDTs are built.

SQLite Persistence

An entire talk about how you can store SQLite objects in a browser client. This is how I planned to implement my WASM build for Eidetica, but I’ll learn it from docs.

I enjoyed the talk, though, and learned a good amount, but I don’t have many notes.