
konserve

![Slack](https://img.shields.io/badge/slack-join_chat-brightgreen.svg) ![Clojars Project](https://img.shields.io/clojars/v/io.replikativ/konserve.svg) ![CircleCI](https://circleci.com/gh/replikativ/konserve.svg?style=shield) ![Last commit](https://img.shields.io/github/last-commit/replikativ/konserve/main.svg)

Simple durability, made flexible.

A simple document store protocol defined with synchronous and core.async semantics that allows Clojuresque collection operations on associative key-value stores, from both Clojure and ClojureScript and for different backends. Data is generally serialized with edn semantics or, if supported, as native binary blobs, and can be accessed similarly to the clojure.core functions get-in, assoc-in and update-in. In particular, update-in allows you to run a function atomically against a value and returns both the old and the new value. Each operation is run atomically and must be consistent (in fact ACID), but no further consistency is supported (Riak, CouchDB and many scalable solutions do not have transactions over keys for that reason). konserve is meant to be a building block for more sophisticated storage solutions (Datomic also builds on kv-stores). A simple append-log for fast write operations is also implemented.
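
A minimal sketch showing this with the synchronous in-memory store (constructed the same way as in the write-hook example below):

(require '[konserve.core :as k]
         '[konserve.memory :refer [new-mem-store]])

;; Synchronous in-memory store
(def store (new-mem-store (atom {}) {:sync? true}))

(k/assoc-in store [:counter] 41 {:sync? true})
(k/update-in store [:counter] inc {:sync? true})
;; => [41 42] ; the old and the new value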

Features

  • cross-platform between Clojure and ClojureScript
  • lowest-common-denominator interface for an associative data structure with edn semantics
  • thread-safety with atomicity over key operations
  • consistent error handling for core.async
  • fast serialization options (fressian, transit, …), independent of the underlying kv-store
  • very low overhead protocol, including direct binary access for high throughput
  • no additional dependencies and setup required for IndexedDB in the browser and the file backend on the JVM
  • avoids blocking IO; the filestore, for instance, will not block any thread on reading. Fully asynchronous support for writing and for other stores is in the pipeline.

Garbage Collector

Konserve has a garbage collector that can be invoked manually when the store grows too large. The function konserve.gc/sweep! takes a cut-off date used to evict old keys together with a whitelist of keys that should always be kept.
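
A minimal sketch, assuming sweep! takes the store, the whitelist and the cut-off instant in that order and returns a core.async channel by default (check the konserve.gc docstring of your version):

(require '[konserve.gc :as gc]
         '[clojure.core.async :refer [<!!]])

;; Keep :config and :current-session regardless of age and evict all other
;; keys last written before the cut-off date. The argument order is an
;; assumption; consult the konserve.gc docstring.
(<!! (gc/sweep! store #{:config :current-session} #inst "2024-01-01"))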

Error handling

For synchronous execution, normal exceptions are thrown. For asynchronous error handling we follow the semantics of go-try and <?. We maintain the superv.async library for error handling in core.async, but since there is no need to push it onto users of konserve, you only need these two macros to handle errors properly: <? takes from a channel, checks the value for an exception and rethrows it, while go-try catches exceptions and passes them along as a return value so that they do not get lost.
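
A minimal JVM-only sketch of the two macros described above (superv.async provides full-featured, cross-platform versions):

(require '[clojure.core.async :refer [go <!]])

(defmacro go-try
  "Like go, but catches exceptions and passes them along as the channel value."
  [& body]
  `(go (try ~@body
            (catch Exception e# e#))))

(defmacro <?
  "Like <!, but rethrows if the value taken from the channel is an exception.
  Must be used inside a go or go-try block."
  [ch]
  `(let [v# (<! ~ch)]
     (if (instance? Exception v#) (throw v#) v#)))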

Write Hooks

Konserve supports write hooks that are invoked after every successful write operation. This enables reactive patterns like store synchronization, change logging, or triggering side effects without wrapping the store.

(require '[konserve.core :as k]
         '[konserve.memory :refer [new-mem-store]])

(def store (new-mem-store (atom {}) {:sync? true}))

;; Register a hook to log all writes
(k/add-write-hook! store ::my-logger
  (fn [{:keys [api-op key value]}]
    (println "Write:" api-op key "->" value)))

;; Writes now trigger the hook
(k/assoc-in store [:user] {:name "Alice"} {:sync? true})
;; Prints: Write: :assoc-in :user -> {:name "Alice"}

;; Remove hook when done
(k/remove-write-hook! store ::my-logger)

The hook function receives a map with these keys:

  • :api-op - The operation (:assoc-in, :update-in, :dissoc, :bassoc, :multi-assoc, :multi-dissoc)
  • :key - The top-level key being written
  • :key-vec - Full key path (for assoc-in / update-in)
  • :value - The value written
  • :old-value - Previous value (for update operations)
  • :kvs - Map of key->value (for multi-assoc)
  • :keys - Collection of keys (for multi-dissoc)

Hooks are invoked at the API layer (in konserve.core), so they work consistently across all store backends. Stores must implement the PWriteHookStore protocol; the built-in memory store and default file store both support hooks.

Usage

Add to your dependencies: [![Clojars Project](https://img.shields.io/clojars/v/io.replikativ/konserve.svg)](https://clojars.org/io.replikativ/konserve)

Synchronous Execution

Run the following synchronous code if you are not using core.async in your context:

(ns test-db
  (:require [konserve.filestore :refer [connect-fs-store]]
            [konserve.core :as k]))

(def store (connect-fs-store "/tmp/store" :opts {:sync? true}))

(k/assoc-in store ["foo" :bar] {:foo "baz"} {:sync? true})
(k/get-in store ["foo"] nil {:sync? true})
(k/exists? store "foo" {:sync? true})

(k/assoc-in store [:bar] 42 {:sync? true})
(k/update-in store [:bar] inc {:sync? true})
(k/get-in store [:bar] nil {:sync? true})
(k/dissoc store :bar {:sync? true})

(k/append store :error-log {:type :horrible} {:sync? true})
(k/log store :error-log {:sync? true})

(let [ba (byte-array (* 10 1024 1024) (byte 42))]
  (time (k/bassoc store "banana" ba {:sync? true})))

(k/bget store "banana"
        (fn [{is :input-stream}]
          (your-read-does-all-work-here is))
        {:sync? true})

Asynchronous Execution

In a ClojureScript REPL you can evaluate the expressions one by one, each wrapped in a go block.

(ns test-db
  (:require [konserve.memory :refer [new-mem-store]]
            [clojure.core.async :refer [go <!]]))

(go (def my-db (<! (new-mem-store))))

From a Clojure REPL, run the following for the core.async variant of the code.

(ns test-db
  (:require [konserve.filestore :refer [connect-fs-store]]
            [konserve.core :as k]
            [clojure.core.async :refer [go <!]]))

(go
  (def store (<! (connect-fs-store "/tmp/store")))

  (<! (k/assoc-in store ["foo" :bar] {:foo "baz"}))
  (<! (k/get-in store ["foo"]))
  (<! (k/exists? store "foo"))

  (<! (k/assoc-in store [:bar] 42))
  (<! (k/update-in store [:bar] inc))
  (<! (k/get-in store [:bar]))
  (<! (k/dissoc store :bar))

  (<! (k/append store :error-log {:type :horrible}))
  (<! (k/log store :error-log))

  (let [ba (byte-array (* 10 1024 1024) (byte 42))]
    (time (<! (k/bassoc store "banana" ba)))))

Supported Backends

In-Memory Store

For simple purposes, a memory store wrapping an atom is implemented for both Clojure and ClojureScript.

Usage:

(ns test-db
  (:require [konserve.memory :refer [new-mem-store]]
            [konserve.core :as k]))

(def my-db (new-mem-store))

fs-store

File-system stores for Clojure and for Node.js are provided as elementary reference implementations for the two most important platforms. No setup and no additional dependencies are needed.

The file-system store currently uses fressian in Clojure and fress in ClojureScript and is quite efficient. Both implementations use the same on-disk format and can load the same store (but not concurrently). It also allows you to access values as normal file-system files, e.g. to open them with a native database like HDF5 in Java. You can decide not to fsync on every write by setting {:sync-blob? false} in the configuration if a potential, but unlikely, loss of data is not critical for you (e.g. for a session store). Note that the database will not be corrupted in this case; you can only lose some of the write operations issued shortly before a crash.

Usage:

(ns test-db
  (:require [#?(:clj  konserve.filestore
                :cljs konserve.node-filestore) :refer [connect-fs-store]]
            [konserve.core :as k]))

(def my-folder "path/to/folder")
(def my-db (connect-fs-store my-folder))
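
To relax durability as described above, the configuration map can be supplied when connecting the store. The :config keyword argument shown here is an assumption; consult the filestore documentation of your version:

;; Assumption: connect-fs-store accepts the configuration map via :config.
(def my-fast-db (connect-fs-store my-folder :config {:sync-blob? false}))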

IndexedDB

IndexedDB is provided as the reference implementation for ClojureScript backends in the browser. The IndexedDB store is restricted to the async API.

Usage:

(ns test-db
  (:require [clojure.core.async :refer [go <!]]
            [konserve.indexeddb :refer [connect-idb-store]]
            [konserve.core :as k]))

(go
  (def my-idb-store (<! (connect-idb-store "example-db")))

  ;; Regular operations
  (<! (k/assoc-in my-idb-store [:user] {:name "Alice" :age 30}))
  (<! (k/get-in my-idb-store [:user]))

  ;; Multi-key atomic operations
  (<! (k/multi-assoc my-idb-store {:user1 {:name "Alice"}
                                   :user2 {:name "Bob"}}))

  ;; Efficient bulk retrieval - returns sparse map of found keys
  (<! (k/multi-get my-idb-store [:user1 :user2 :nonexistent]))
  ;; => {:user1 {:name "Alice"} :user2 {:name "Bob"}}

  ;; Atomic bulk delete
  (<! (k/multi-dissoc my-idb-store [:user1 :user2])))

The IndexedDB implementation supports atomic multi-key operations (multi-get, multi-assoc, multi-dissoc) through IndexedDB’s native transaction model. All operations in a single call either succeed or fail together. multi-get returns a sparse map containing only found keys, which is efficient for bulk retrieval during initialization.

External Backends

We recently updated konserve not only to provide a choice between synchronous and asynchronous execution, but also to provide protocols that simplify the implementation of external backends. Unfortunately, most of the existing external backends are now outdated as a result of these changes.

Tiered Store

Konserve supports tiered storage with a frontend cache layer and backend persistence layer. The tiered store combines a fast frontend store (e.g., in-memory) with a durable backend store (e.g., filesystem) to optimize for both performance and persistence.

Usage:

(ns test-db
  (:require [konserve.tiered :refer [connect-tiered-store]]
            [konserve.memory :refer [new-mem-store]]
            [konserve.filestore :refer [connect-fs-store]]
            [clojure.core.async :refer [go <!]]))

(go
  (def frontend-store (<! (new-mem-store)))
  (def backend-store (<! (connect-fs-store "/tmp/store")))
  (def tiered-store (<! (connect-tiered-store frontend-store backend-store
                                              :write-policy :write-through
                                              :read-policy :frontend-first))))

Write policies:

  • :write-through - Write to backend, then frontend synchronously
  • :write-around - Write only to backend, invalidate frontend

Read policies:

  • :frontend-first - Check the frontend first, fall back to the backend (populates the frontend)
  • :frontend-only - Only read from frontend

The tiered store also supports synchronization between layers and multi-key operations (multi-get, multi-assoc, multi-dissoc) when both stores support them. During initialization, multi-get combined with multi-assoc enables efficient bulk sync from backend to frontend.
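
Once connected, the tiered store is used through the regular konserve API. A brief sketch continuing the example above (assuming konserve.core is also required as k):

(go
  ;; With :write-through the value is written to the backend and then
  ;; cached in the frontend.
  (<! (k/assoc-in tiered-store [:user] {:name "Alice"}))

  ;; With :frontend-first the read is served from the in-memory frontend
  ;; when present, otherwise from the file backend, populating the frontend.
  (<! (k/get-in tiered-store [:user])))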

Supported backends

Please let us know if you are interested in other backends or if you need help with implementing one.

Unofficial backends

Outdated backends

The following projects are incompatible with the latest konserve release, but describe the usage of the underlying store API and could still be helpful to implement new backends for the underlying store:

Serialization formats

Different formats for edn serialization, such as fressian, transit or a simple pr-str version, are supported and can be combined with different stores. Each store has a reasonable default. You can also extend the serialization protocol to other formats if you need to, and you can provide incognito support for records if you need them.

Tagged Literals

You can read and write custom records according to incognito.

Compression and encryption

Compression and encryption are supported by the default store implementation that is used by all current backends. They can be activated in the store configuration as follows:

{:encryptor {:type :aes
             :key "s3cr3t"}
 :compressor {:type :lz4}}

LZ4 compression is currently only supported on the JVM. AES encryption is supported on both the JVM and JS targets with the same cold-storage format, i.e. the same store can be read and written from Clojure and ClojureScript runtimes. We use AES/CBC with PKCS5/PKCS7 padding, 256-bit keys and a different salt for each written value.

Backend implementation guide

We provide a backend implementation guide.

Projects building on konserve

  • The protocol is used in production and originated as an elementary storage protocol for replikativ and datahike.
  • kampbell maps collections of entities to konserve and enforces specs.

Combined usage with other writers

konserve currently assumes that it accesses its keyspace in the store exclusively. It uses hasch to support arbitrary edn keys and hence does not normally clash with outside usage, even when the same keys are used. To support multiple konserve clients on the same store, the backend has to support locking and proper transactions on keys internally, which is the case for backends like CouchDB, Redis and Riak.

License

Copyright © 2014-2025 Christian Weilbach and contributors

Distributed under the Eclipse Public License either version 1.0 or (at your option) any later version.
