Just a Theory

By David E. Wheeler

🛠️ PGXN Tools v1.7

Today I released v1.7.0 of the pgxn-tools OCI image, which simplifies Postgres extension testing and PGXN distribution. The new version includes just a few updates and improvements:

  • Upgraded the Debian base image from Bookworm to Trixie
  • Set the PGUSER environment variable to postgres in the Dockerfile, removing the need for users to remember to do it.
  • Updated pg-build-test to set MAKEFLAGS="-j $(nproc)" to shorten build runtimes.
  • Also updated pgrx-build-test to pass -j $(nproc), for the same reason.
  • Upgraded the pgrx test extension to v0.16.1 and tested it on Postgres versions 13-16.
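
To kick the tires on the new image, something like this should do it; pg-start and pg-build-test are the commands documented for pgxn-tools, and 18 is just an example Postgres version:

docker run -it --rm -w /repo -v "$PWD:/repo" pgxn/pgxn-tools \
    sh -c 'pg-start 18 && pg-build-test'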

Just a security and quality-of-coding-life release. Ideally, existing workflows will continue to work as they always have.

Welcome dmjwk

Please welcome dmjwk into the world. This “demo JWK” (or “dumb JWK” if you like) service provides super simple Identity Provider APIs strictly for demo purposes.

Say you’ve written a service that depends on a public JSON Web Key (JWK) set to authenticate JSON Web Tokens (JWT) submitted as OAuth 2 Bearer Tokens. Your users will normally configure the service to use an internal or well-known provider, such as Auth0, Okta, or AWS. Such providers might be too heavyweight for demo purposes, however.

For my own use, I needed nothing more than a Docker Compose file with local-only services. I also wanted some control over the contents of the tokens, since my service records the sub field from the JWT in an audit trail, and something like 1a1077e6-3b87-1282-789c-f70e66dab825 (as in Vault JWTs) makes for less-than-friendly text to describe in a demo.

I created dmjwk to scratch this itch. It provides a basic Resource Owner Password Credentials Grant OAuth 2 flow to create custom JWTs, a well-known URL for the public JWK set, and a simple API that validates JWTs. None of it is real, it’s all for show, but the show’s the point.

Quick Start

The simplest way to start dmjwk is with its OCI image (there are binaries for 40 platforms, as well). It listens on port 443; since hosts commonly reserve that port, let’s map it to 4433 instead:

docker run -d -p 4433:443 --name dmjwk --volume .:/etc/dmjwk ghcr.io/theory/dmjwk

This command fires up dmjwk with a self-signed TLS certificate for localhost and creates a root cert bundle, ca.pem, in the current directory. Use it with your favorite HTTP client to make validated requests.

JWK Set

For example, to fetch the JWK set:

curl -s --cacert ca.pem https://localhost:4433/.well-known/jwks.json

By default dmjwk creates a single JWK in the set that looks something like this (JSON reformatted):

{
  "keys": [
    {
      "kty": "EC",
      "crv": "P-256",
      "x": "Ld98DHMIIanlpdOhYf-8GljNHnxHW_i6Bq0iltw9J98",
      "y": "xxyRGhCFIjdQFD-TAs-y6uf18wsPvkq8wH_FsGY1GyU"
    }
  ]
}

Configure services to use this URL, https://localhost:4433/.well-known/jwks.json, to validate JWTs created by dmjwk.

Authorization

To fetch a JWT signed by the first key in the JWK set (just the one in this example), make an application/x-www-form-urlencoded POST with the required grant_type, username, and password fields:

form='grant_type=password&username=kamala&password=a2FtYWxh'
curl -s --cacert ca.pem -d "$form" https://localhost:4433/authorization

dmjwk stores no actual usernames and passwords; it’s all for show. Provide any username you like and Base64-encode the username, without trailing equal signs, as the password.
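
For example, the password for the user kamala above can be derived like so (Base64 with any trailing = stripped):

printf '%s' kamala | base64 | tr -d '='
# a2FtYWxh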

Example successful response:

{
  "access_token": "eyJhbGciOiJFUzI1NiIsImtpZCI6IiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJrYW1hbGEiLCJleHAiOjE3NjY5NDQyNzcsImlhdCI6MTc2Njk0MDY3NywianRpIjoiZ3hhNnNib292aTg5dSJ9.04efdORHDA3GIPMnWErMPy4mXXsBfbnMJlzqZsxGVEc2cRvEWI0Mt_IqHDK4RYK_14BCEu2nTMiEPtgwC2IZ5A",
  "token_type": "Bearer",
  "expires_in": 3600,
  "scope": "read"
}

Parsing the access_token JWT from the response provides this header:

{
  "alg": "ES256",
  "kid": "",
  "typ": "JWT"
}

And this payload:

{
  "sub": "kamala",
  "exp": 1766944277,
  "iat": 1766940677,
  "jti": "gxa6sboovi89u"
}
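
To peek at those claims yourself, a rough shell sketch like this works, assuming the access_token has been saved in a variable named tok; JWT segments are base64url-encoded, so translate the URL-safe characters and restore padding before base64 -d will accept them:

# Grab the payload (second dot-separated segment) and normalize it to standard Base64.
seg=$(printf '%s' "$tok" | cut -d. -f2 | tr '_-' '/+')
# Pad to a multiple of four characters, then decode and pretty-print.
while [ $(( ${#seg} % 4 )) -ne 0 ]; do seg="$seg="; done
printf '%s' "$seg" | base64 -d | jq .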

We can further customize its contents by passing any of a few additional parameters. To specify an audience and issuer, for example:

form='grant_type=password&username=kamala&password=a2FtYWxh&iss=spacely+sprockets&aud=cogswell.cogs'
curl -s --cacert ca.pem -d "$form" https://localhost:4433/authorization

Which returns something like:

{
  "access_token": "eyJhbGciOiJFUzI1NiIsImtpZCI6IiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJzcGFjZWx5IHNwcm9ja2V0cyIsInN1YiI6ImthbWFsYSIsImF1ZCI6WyJjb2dzd2VsbC5jb2dzIl0sImV4cCI6MTc2NzAzNDIyNCwiaWF0IjoxNzY3MDMwNjI0LCJqdGkiOiIxNXZmaDhzYm41YWFxIn0.IGRdD5HGiWLOXggZhb9zPlLK40WWy8R0-HmSuIhaObD6WEwA2WXIBWg_MqtFFQISKLXrjNDHphXtEJsx6FZBOQ",
  "token_type": "Bearer",
  "expires_in": 3600,
  "scope": "read"
}

Now the JWT payload is:

{
  "iss": "spacely sprockets",
  "sub": "kamala",
  "aud": [
    "cogswell.cogs"
  ],
  "exp": 1767034206,
  "iat": 1767030606,
  "jti": "8ri9vfsg5f8mj"
}

This allows customization appropriate for your service, which might determine authorization based on the contents of the various JWT fields.

A request that fails to authenticate the username and password, e.g.:

form='grant_type=password&username=kamala&password=nope'
curl -s --cacert ca.pem -d "$form" https://localhost:4433/authorization

Will return an appropriate response:

{
  "error": "invalid_request",
  "error_description": "incorrect password"
}

Resource

For simple JWT validation, POST a JWT returned from the authorization API as a Bearer token to /resource:

form='grant_type=password&username=kamala&password=a2FtYWxh'
tok=$(curl -s --cacert ca.pem -d "$form" https://localhost:4433/authorization | jq -r .access_token)
curl -s --cacert ca.pem -H "Authorization: Bearer $tok" https://localhost:4433/resource -d 'HELLO WORLD'

The response simply returns the request body:

HELLO WORLD

A request that fails to authenticate, say with an invalid Bearer token:

curl -s --cacert ca.pem -H "Authorization: Bearer NOT" https://localhost:4433/resource -d 'HELLO WORLD'

Returns an appropriate error response:

{
  "error": "invalid_token",
  "error_description": "token is malformed: token contains an invalid number of segments"
}

That’s It

dmjwk includes a fair number of configuration options, including external certificates, custom host naming (useful with Docker Compose), and multiple key generation. If you find it useful for your demos (but not for production — DON’T DO THAT) — let me know. And if not, that’s fine, too. This is a bit of my pursuit of a thick desire, made mainly for me, but it pleases me if others find it helpful too.

🐏 Taming PostgreSQL GUC “extra” Data

New post up on the ClickHouse blog:

I wanted to optimize away parsing the key/value pairs from the pg_clickhouse.session_settings GUC for every query by pre-parsing it on assignment and storing the result in a separate variable. It took me a few tries to land on a workable and correct solution, as the GUC API requires quite specific memory allocation for extra data to work properly.

Struggling to understand, making missteps, and ultimately coming to a reasonable design and solution satisfies me so immensely that I always want to share. This piece gets down in the C coding weeds; my fellow extension coders might enjoy it.

Introducing pg_clickhouse

PostgreSQL Logo ⇔ pg_clickhouse ⇔ ClickHouse Logo

The ClickHouse blog has posted a piece by yours truly introducing pg_clickhouse, a PostgreSQL extension to run ClickHouse queries from PostgreSQL:

While clickhouse_fdw and its predecessor, postgres_fdw, provided the foundation for our FDW, we set out to modernize the code & build process, to fix bugs & address shortcomings, and to engineer it into a complete product featuring near universal pushdown for analytics queries and aggregations.

Such advances include:

  • Adopting standard PGXS build pipeline for PostgreSQL extensions
  • Adding prepared INSERT support to and adopting the latest supported release of the ClickHouse C++ library
  • Creating test cases and CI workflows to ensure it works on PostgreSQL versions 13-18 and ClickHouse versions 22-25
  • Support for TLS-based connections for both the binary protocol and the HTTP API, required for ClickHouse Cloud
  • Bool, Decimal, and JSON support
  • Transparent aggregate function pushdown, including for ordered-set aggregates like percentile_cont()
  • SEMI JOIN pushdown

I’ve spent most of the last couple months working on this project, learning a ton about ClickHouse, foreign data wrappers, C and C++, and query pushdown. Interested? Try out the Docker image:

docker run --name pg_clickhouse -e POSTGRES_PASSWORD=my_pass \
       -d ghcr.io/clickhouse/pg_clickhouse:18
docker exec -it pg_clickhouse psql -U postgres -c 'CREATE EXTENSION pg_clickhouse'
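
To confirm the extension is in place, a quick catalog query should do; only the extension name comes from the example above, the rest is standard psql:

docker exec -it pg_clickhouse psql -U postgres \
       -c "SELECT extname, extversion FROM pg_extension WHERE extname = 'pg_clickhouse'"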

Or install it from PGXN (requires C and C++ build tools, cmake, and the openssl libs, libcurl, and libuuid):

pgxn install pg_clickhouse

Or download it and build it yourself.

Let me know what you think!

Sqitch 1.6.0: Now with ClickHouse!

ClickHouse Logo, A ❤️, Sqitch Logo

Out today: Sqitch v1.6.0. This release adds a brand new engine: ClickHouse. I started a new job at ClickHouse on September 2, and my first task, as a way to get to know the database, was to add it to Sqitch. Fortuitously, ClickHouse added support for updates and deletes, which Sqitch requires, in the August release. Sqitch v1.6.0 therefore supports ClickHouse 25.8 or later.

As with the other engines Sqitch supports, this release includes a ClickHouse tutorial, the --with-clickhouse-support option in the Homebrew tap, and Sqitch ClickHouse Docker tags.

Find it in the usual places:

Thanks for using Sqitch, and do let me know if you use it to manage a ClickHouse database, or if you run into any issues or challenges.

Postgres Extensions: Use PG_MODULE_MAGIC_EXT

A quick note for PostgreSQL extension maintainers: PostgreSQL 18 introduces a new macro: PG_MODULE_MAGIC_EXT. Use it to name and version your modules. Where your module .c file likely has:

PG_MODULE_MAGIC;

Or:

#ifdef PG_MODULE_MAGIC
PG_MODULE_MAGIC;
#endif

Change it to something like:

#ifdef PG_MODULE_MAGIC_EXT
PG_MODULE_MAGIC_EXT(.name = "module_name", .version = "1.2.3");
#else
PG_MODULE_MAGIC;
#endif

Replace the name of your module and the version as appropriate. Note that PG_MODULE_MAGIC was added in Postgres 8.2; if for some reason your module still supports earlier versions, use a nested #ifdef to include it conditionally:

#ifdef PG_MODULE_MAGIC_EXT
PG_MODULE_MAGIC_EXT(.name = "module_name", .version = "1.2.3");
#else
#ifdef PG_MODULE_MAGIC
PG_MODULE_MAGIC;
#endif
#endif

If you manage the module version in your Makefile, as the PGXN Howto suggests, consider renaming the .c file to .c.in and changing the Makefile like so:

  • Replace .version = "1.2.3" with .version = "__VERSION__"

  • Add src/$(EXTENSION).c to EXTRA_CLEAN

  • Add this make target:

    src/$(EXTENSION).c: src/$(EXTENSION).c.in
    	sed -e 's,__VERSION__,$(EXTVERSION),g' $< > $@
    
  • If you use Git, add /src/*.c to .gitignore

For an example of this pattern, see semver@3526789.
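
To sanity-check the substitution outside of make, something like this works; the file name here is purely illustrative:

sed -e 's,__VERSION__,1.2.3,g' src/my_extension.c.in | grep PG_MODULE_MAGIC_EXT
# PG_MODULE_MAGIC_EXT(.name = "module_name", .version = "1.2.3");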

That’s all!

Adventures in Extension Packaging

I gave a presentation at PGConf.dev last week, Adventures in Extension Packaging. It summarizes stuff I learned in the past year in developing the PGXN Meta v2 RFC, re-packaging all of the extensions on pgt.dev, and experimenting with the CloudNativePG community’s proposal to mount extension OCI images in immutable PostgreSQL containers.

Turns out a ton of work and experimentation remains to be done.

Previous work covers the first half of the talk, including:

The rest of the talk encompasses newer work. Read on for details.

Automated Packaging Challenges

Back in December I took over maintenance of the Trunk registry, a.k.a., pgt.dev, refactoring and upgrading all 200+ extensions and adding Postgres 17 builds. This experience opened my eyes to the wide variety of extension build patterns and configurations, even when supporting a single OS (Ubuntu 22.04 “Jammy”). Some examples:

  • pglogical requires an extra make param to build on PostgreSQL 17: make -C LDFLAGS_EX="-L/usr/lib/postgresql/17/lib"
  • Some pgrx extensions require additional params, for example:
  • pljava needs a pointer to libjvm: mvn clean install -Dpljava.libjvmdefault=/usr/lib/x86_64-linux-gnu/libjvm.so
  • plrust needs files to be moved around, a shell script to be run, and to be built from a subdirectory
  • bson also needs files to be moved around and a pointer to libbson
  • timescale requires an environment variable and shell script to run before building
  • Many extensions require patching to build for various configurations and OSes, like this tweak to build pguri on Postgres 17 and this patch to get duckdb_fdw to build at all

Doubtless there’s much more. These sorts of challenges led the RPM and APT packaging systems to support explicit scripting and patches for every package. I don’t think it would be sensible to support build scripting in the meta spec.

However, the PGXN meta SDK I developed last year supports the merging of multiple META.json files, so that downstream packagers could maintain files with additional configurations, including explicit build steps or lists of packages, to support these use cases.
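
Conceptually the merge is a deep override of upstream metadata by downstream metadata. This is not the SDK’s actual interface, but jq can sketch the idea; the downstream file name is hypothetical:

# Layer downstream overrides on top of the upstream META.json (recursive object merge).
jq -s '.[0] * .[1]' META.json downstream-overrides.json > merged.json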

Furthermore, the plan to add reporting to PGXN v2 means that downstream packages could report build failures, which would appear on PGXN, where they’d encourage some maintainers, at least, to fix issues within their control.

Dependency Resolution

Dependencies present another challenge. The v2 spec supports third party dependencies — those not part of Postgres itself or the ecosystem of extensions. Ideally, an extension like pguri would define its dependence on the uriparser library like so:

{
  "dependencies": {
    "postgres": { "version": ">= 9.3" },
    "packages": {
      "build": {
        "requires": {
          "pkg:generic/uriparser": 0,
        }
      }
    }
  }
}

An intelligent build client will parse the dependencies, provided as purls, to determine the appropriate OS packages to install to satisfy them. For example, building on a Debian-based system, it would know to install liburiparser-dev to build the extension and require liburiparser1 to run it.
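
A build client might implement that mapping with something as simple as this sketch; the Debian package name comes from the example above, and everything else is illustrative:

# Map a build-time purl to the OS package that satisfies it (Debian-family sketch).
resolve_build_dep() {
  case "$1" in
    pkg:generic/uriparser) echo liburiparser-dev ;;
    *) echo "no mapping for $1" >&2; return 1 ;;
  esac
}

sudo apt-get install -y "$(resolve_build_dep pkg:generic/uriparser)"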

With the aim to support multiple OSes and versions — not to mention Postgres versions — the proposed PGXN binary registry would experience quite the combinatorial explosion to support all possible dependencies on all possible OSes and versions. While I propose to start simple (Linux and macOS, Postgres 14-18) and gradually grow, it could quickly get quite cumbersome.

So much so that I can practically hear Christoph’s and Devrim’s reactions from here:

Photo of Ronald Reagan and his team laughing uproariously with the white Impact Bold-style meme text at the top that reads, “AND THEN HE SAID…”, followed by large text at the bottom that reads, “WE’LL PACKAGE EVERY EXTENSION FOR EVERY PLATFORM!”

Photo of Christoph, Devrim, and other long-time packagers laughing at me.

Or perhaps:

Photo of two German shepherds looking at a pink laptop and appearing to laugh hysterically, with the white Impact Bold-style meme text at the top that reads, “AND THEN HE SAID…”, followed by large text at the bottom that reads, “UPSTREAM MAINTAINERS WILL FIX BUILD FAILURES!”

Photo of Christoph and Devrim laughing at me.

I hardly blame them.

A CloudNativePG Side Quest

Gabriele Bartolini blogged the proposal to deploy extensions to CloudNativePG containers without violating the immutability of the container. The introduction of the extension_control_path GUC in Postgres 18 and the ImageVolume feature in Kubernetes 1.33 enable the pattern, likely to be introduced in CloudNativePG v1.27. Here’s a sample CloudNativePG cluster manifest with the proposed extension configuration:

apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: postgresql-with-extensions
spec:
  instances: 1
  imageName: ghcr.io/cloudnative-pg/postgresql-trunk:18-devel
  postgresql:
    extensions:
      - name: vector
        image:
          reference: ghcr.io/cloudnative-pg/pgvector-18-testing
  storage:
    storageClass: standard
    size: 1Gi

The extensions object at lines 9-12 configures pgvector simply by referencing an OCI image that contains nothing but the files for the extension. To “install” the extension, the proposed patch triggers a rolling update, replicas first. For each instance, it takes the following steps:

  • Mounts each extension as a read-only ImageVolume under /extensions; in this example, /extensions/vector provides the complete contents of the image

  • Updates LD_LIBRARY_PATH to include the path to the lib directory of each extension, e.g., /extensions/vector/lib.

  • Updates the extension_control_path and dynamic_library_path GUCs to point to the share and lib directories of each extension, in this example:

    extension_control_path = '$system:/extensions/vector/share'
    dynamic_library_path   = '$libdir:/extensions/vector/lib'
    

This works! Alas, the pod restart is absolutely necessary, whether or not any extension requires it1, because:

  • Kubernetes resolves volume mounts, including ImageVolumes, at pod startup
  • The dynamic_library_path and extension_control_path GUCs require a Postgres restart
  • Each extension requires another path to be appended to both of these GUCs, as well as the LD_LIBRARY_PATH

Say we wanted to use five extensions. The extensions part of the manifest would look something like this:

extensions:
  - name: vector
    image:
      reference: ghcr.io/cloudnative-pg/pgvector-18-testing
  - name: semver
    image:
      reference: ghcr.io/example/semver:0.40.0
  - name: auto_explain
    image:
      reference: ghcr.io/example/auto_explain:18
  - name: bloom
    image:
      reference: ghcr.io/example/bloom:18
  - name: postgis
    image:
      reference: ghcr.io/example/postgis:18

To support this configuration, CNPG must configure the GUCs like so:

extension_control_path = '$system:/extensions/vector/share:/extensions/semver/share:/extensions/auto_explain/share:/extensions/bloom/share:/extensions/postgis/share'

dynamic_library_path   = '$libdir:/extensions/vector/lib:/extensions/semver/lib:/extensions/auto_explain/lib:/extensions/bloom/lib:/extensions/postgis/lib'

And also LD_LIBRARY_PATH:

LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/extensions/vector/lib:/extensions/semver/lib:/extensions/auto_explain/lib:/extensions/"

In other words, every additional extension requires another prefix to be appended to each of these configurations. Ideally we could use a single prefix for all extensions, avoiding the need to update these configs and therefore to restart Postgres. Setting aside the ImageVolume limitation2 for the moment, this pattern would require no rolling restarts and no GUC updates unless a newly-added extension requires pre-loading via shared_preload_libraries.

Getting there, however, requires a different extension file layout than PostgreSQL currently uses.

RFC: Extension Packaging and Lookup

Imagine this:

  • A single extension search path GUC
  • Each extension in its own eponymous directory
  • Pre-defined subdirectory names used inside each extension directory

The search path might look something like:

extension_search_path = '$system:/extensions:/usr/local/extensions'

Looking at one of these directories, /extensions, its contents would be extension directories:

❯ ls -1 extensions
auto_explain
bloom
postgis
semver
vector

And the contents of one these extension directories would be something like:

❯ tree extensions/semver
extensions/semver
├── doc
│   └── semver.md
├── lib
│   └── semver.so
├── semver.control
└── sql
    ├── semver--0.31.0--0.31.1.sql
    ├── semver--0.31.1--0.31.2.sql
    ├── semver--0.31.2--0.32.0.sql
    └── semver--0.5.0--0.10.0.sql

For this pattern, Postgres would look for the appropriately-named directory with a control file in each of the paths. To find the semver extension, for example, it would find /extensions/semver/semver.control.
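
In shell terms, the lookup would amount to something like this, using the example paths above:

# Walk the proposed extension_search_path until a control file turns up.
for dir in /extensions /usr/local/extensions; do
  if [ -f "$dir/semver/semver.control" ]; then
    echo "found semver in $dir/semver"
    break
  fi
done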

All the other files for the extension would live in specifically-named subdirectories: doc for documentation files, lib for shared libraries, sql for SQL deployment files, plus bin, man, html, include, locale, and any other likely resources.

With all of the files required for an extension bundled into well-defined subdirectories of a single directory, this layout lends itself to the proposed binary distribution format. Couple it with OCI distribution and it becomes a natural fit for ImageVolume deployment: simply map each extension OCI image to a subdirectory of the desired search path and you’re done. The extensions object in the CNPG Cluster manifest remains unchanged, and CNPG no longer needs to manipulate any GUCs.

Some might recognize this proposal from a previous RFC post. It not only simplifies the CloudNativePG use cases, but because it houses all of the files for an extension in a single bundle, it also vastly simplifies installation on any system:

  1. Download the extension package
  2. Validate its signature & contents
  3. Unpack its contents into a directory named for the extension in the extension search path

Simple!
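
Sketched in shell, with an entirely hypothetical package name, URL, and archive format, the three steps above might look like:

# 1. Download the extension package.
curl -fsSLO https://packages.example.org/semver-0.32.1.trunk.zip
# 2. Validate its signature & contents (tooling TBD; a checksum file stands in here).
sha256sum --check semver-0.32.1.trunk.zip.sha256
# 3. Unpack into a directory named for the extension in the extension search path.
mkdir -p /extensions/semver
unzip semver-0.32.1.trunk.zip -d /extensions/semver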

Fun With Dependencies

Many extensions depend on external libraries, and rely on the OS to find them. OS packagers follow the dependency patterns of their packaging systems: require the installation of other packages to satisfy the dependencies.

How could this pattern be generalized by the Trunk Packaging Format to work on all OSes? I see two potential approaches:

  1. List the dependencies as purls that the installing client translates into the appropriate OS packages to install
  2. Bundle dependencies in the Trunk package itself

Option 1 will work well for most use cases, but not for immutable systems like CloudNativePG. Option 2 could work for such situations. But perhaps you noticed the omission of LD_LIBRARY_PATH manipulation in the packaging and lookup discussion above. Setting aside the multitude of reasons to avoid LD_LIBRARY_PATH3, how else could the OS find shared libraries needed by an extension?

Typically, one installs shared libraries in one of a few directories known to tools like ldconfig, which must run after each install to cache their locations. But one cannot rely on ldconfig in immutable environments, because the cache of course cannot be mutated.

We could, potentially, rely on rpath, a feature of modern dynamic linkers that reads a list of known paths from the header of a binary file. In fact, most modern OSes support $ORIGIN as an rpath value4 (or @loader_path on Darwin/macOS), which refers to the same directory in which the binary file appears. Imagine this pattern:

  • The Trunk package for an extension includes dependency libraries alongside the extension module
  • The module is compiled with rpath=$ORIGIN

To test this pattern, let’s install the Postgres 18 beta and try the pattern with the pguri extension. First, remove the $libdir/ prefix (as discussed previously) and patch the extension for Postgres 17+:

perl -i -pe 's{\$libdir/}{}' pguri/uri.control pguri/*.sql
perl -i -pe 's/^(PG_CPPFLAGS.+)/$1 -Wno-int-conversion/' pguri/Makefile

Then compile it with CFLAGS to set rpath and install it with a prefix parameter:

make CFLAGS='-Wl,-rpath,\$$ORIGIN'
make install prefix=/usr/local/postgresql

With the module installed, move the liburiparser shared library from OS packaging to the lib directory under the prefix, resulting in these contents:

❯ ls -1 /usr/local/postgresql/lib
liburiparser.so.1
liburiparser.so.1.0.30
uri.so

The chrpath utility shows that the extension module, uri.so, has its RUNPATH (the modern implementation of rpath) properly configured:

❯ chrpath /usr/local/postgresql/lib/uri.so 
uri.so: RUNPATH=$ORIGIN

Will the OS be able to find the dependency? Use ldd to find out:

❯ ldd /usr/local/postgresql/lib/uri.so 
	linux-vdso.so.1
	liburiparser.so.1 => /usr/local/postgresql/lib/liburiparser.so.1
	libc.so.6 => /lib/aarch64-linux-gnu/libc.so.6
	/lib/ld-linux-aarch64.so.1

The second line of output shows that it does in fact find liburiparser.so.1 where we put it. So far so good. Just need to tell the GUCs where to find them and restart Postgres:

extension_control_path = '$system:/usr/local/postgresql/share'
dynamic_library_path   = '$libdir:/usr/local/postgresql/lib'

And then it works!

❯ psql -c "CREATE EXTENSION uri"
CREATE EXTENSION
❯ psql -c "SELECT 'https://example.com/'::uri"
         uri          
----------------------
 https://example.com/

Success! So we can adopt this pattern, yes?

A Wrinkle

Well, maybe. Try it with a second extension, http, once again building it with rpath=$ORIGIN and installing it in the custom lib directory:

perl -i -pe 's{\$libdir/}{}g' *.control
make CFLAGS='-Wl,-rpath,\$$ORIGIN'
make install prefix=/usr/local/postgresql

Make sure it took:

❯ chrpath /usr/local/postgresql/lib/http.so 
http.so: RUNPATH=$ORIGIN

Now use ldd to see what shared libraries it needs:

❯ ldd /usr/local/postgresql/lib/http.so
	linux-vdso.so.1 
	libcurl.so.4 => not found
	libc.so.6 => /lib/aarch64-linux-gnu/libc.so.6

Naturally it needs libcurl; let’s copy it from another system and try again:

❯ scp dev:libcurl.so.4 /usr/local/postgresql/lib/
❯ ldd /usr/local/postgresql/lib/http.so
	linux-vdso.so.1
	libcurl.so.4 => /usr/local/postgresql/lib/libcurl.so.4
	libc.so.6 => /lib/aarch64-linux-gnu/libc.so.6
	/lib/ld-linux-aarch64.so.1
	libnghttp2.so.14 => not found
	libidn2.so.0 => /lib/aarch64-linux-gnu/libidn2.so.0
	librtmp.so.1 => not found
	libssh.so.4 => not found
	libpsl.so.5 => not found
	libssl.so.3 => /lib/aarch64-linux-gnu/libssl.so.3
	libcrypto.so.3 => /lib/aarch64-linux-gnu/libcrypto.so.3
	libgssapi_krb5.so.2 => /lib/aarch64-linux-gnu/libgssapi_krb5.so.2
	libldap.so.2 => not found
	liblber.so.2 => not found
	libzstd.so.1 => /lib/aarch64-linux-gnu/libzstd.so.1
	libbrotlidec.so.1 => not found
	libz.so.1 => /lib/aarch64-linux-gnu/libz.so.1

Line 4 shows it found libcurl.so.4 where we put it, but the rest of the output lists a bunch of new dependencies that need to be satisfied. These did not appear before because the http.so module doesn’t depend on them; the libcurl.so library does. Let’s add libnghttp2 and try again:

❯ scp dev:libnghttp2.so.14 /usr/local/postgresql/lib/
❯ ldd /usr/local/postgresql/lib/http.so
	linux-vdso.so.1
	libcurl.so.4 => /usr/local/postgresql/lib/libcurl.so.4
	libc.so.6 => /lib/aarch64-linux-gnu/libc.so.6
	/lib/ld-linux-aarch64.so.1
	libnghttp2.so.14 => not found
	libidn2.so.0 => /lib/aarch64-linux-gnu/libidn2.so.0
	librtmp.so.1 => not found
	libssh.so.4 => not found
	libpsl.so.5 => not found
	libssl.so.3 => /lib/aarch64-linux-gnu/libssl.so.3
	libcrypto.so.3 => /lib/aarch64-linux-gnu/libcrypto.so.3
	libgssapi_krb5.so.2 => /lib/aarch64-linux-gnu/libgssapi_krb5.so.2
	libldap.so.2 => not found
	liblber.so.2 => not found
	libzstd.so.1 => /lib/aarch64-linux-gnu/libzstd.so.1
	libbrotlidec.so.1 => not found
	libz.so.1 => /lib/aarch64-linux-gnu/libz.so.1

Sadly, as line 7 shows, it still can’t find libnghttp2.so.

It turns out that rpath works only for immediate dependencies. To solve this problem, libcurl and all other shared libraries must also be compiled with rpath=$ORIGIN — which means we can’t simply copy those libraries from OS packages5. In the meantime, only direct dependencies could be bundled with an extension.
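
The underlying issue: RUNPATH is a per-object setting, and a libcurl copied from an OS package most likely carries no $ORIGIN entry of its own, so the linker does not search our lib directory when resolving libcurl’s dependencies. A quick check would show something like this (output will vary by distro):

❯ chrpath /usr/local/postgresql/lib/libcurl.so.4
# expect "no rpath or runpath tag found", or a distro-specific RUNPATH, but not $ORIGIN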

Project Status

The vision of accessible, easy-install extensions everywhere remains intact. I’m close to completing a first release of the PGXN v2 build SDK with support for meta spec v1 and v2, PGXS, and pgrx extensions. I expect the first deliverable to be a command-line client to complement and eventually replace the original CLI. It will be put to work building all the extensions currently distributed on PGXN, which will surface new issues and patterns that inform the development and completion of the v2 meta spec.

In the future, I’d also like to:

  • Finish working out Trunk format and dependency patterns
  • Develop and submit the proposed extension_search_path patch
  • Submit ImageVolume feedback to Kubernetes to allow runtime mounting
  • Start building and distributing OCI Trunk packages
  • Make the pattern available for distributed registries, so anyone can build their own Trunk releases!
  • Hack fully-dynamic extension loading into CloudNativePG

Let’s Talk

I recognize the ambition here, but feel equal to it. Perhaps not every bit will work out, but I firmly believe in setting a clear vision and executing toward it while pragmatically revisiting and revising it as experience warrants.

If you’d like to contribute to the project or employ me to continue working on it, let’s talk! Hit me up via one of the services listed on the about page.


  1. The feature does not yet support pre-loading shared libraries. Presumably a flag will be introduced to add the extension to shared_preload_libraries. ↩︎

  2. Though we should certainly request the ability to add new ImageVolume mounts without a restart. We can’t be the only ones thinking about this kind of feature, right? ↩︎

  3. In general, one should avoid LD_LIBRARY_PATH for a variety of reasons, not least of which its bluntness. For various security reasons, macOS ignores it unless SIP is disabled, and SELinux prevents its propagation to new processes. ↩︎

  4. Although not Windows, alas. ↩︎

  5. Unless packagers could be persuaded to build all libraries with rpath=$ORIGIN, which seems like a tall order. ↩︎

Auto-Release PostgreSQL Extensions on PGXN

I last wrote about auto-releasing PostgreSQL extensions on PGXN back in 2020, but I thought it worthwhile, following my Postgres Extensions Day talk last week, to return again to the basics. With the goal to get as many extensions distributed on PGXN as possible, this post provides step-by-step instructions to help the author of any extension or Postgres utility to quickly and easily publish every release.

TL;DR

  1. Create a PGXN Manager account
  2. Add a META.json file to your project
  3. Add a pgxn-tools powered CI/CD pipeline to publish on tag push
  4. Fully-document your extensions

Release your extensions on PGXN

PGXN aims to become the de facto source for all open-source PostgreSQL extensions and tools, in order to help users quickly find and learn how to use extensions to meet their needs. Currently, PGXN distributes source releases for around 400 extensions (stats on the about page), a fraction of the ca. 1200 known extensions. Anyone looking for an extension that might exist to solve some problem must rely on search engines to find potential solutions across PGXN, GitHub, GitLab, blogs, social media posts, and more. Without a single trusted source for extensions, and with the proliferation of AI Slop in search engine results, finding extensions aside from a few well-known solutions proves a challenge.

By publishing releases and full documentation — all fully indexed by its search index — PGXN aims to be that trusted source. Extension authors provide all the documentation, which PGXN formats for legibility and linking. See, for example, the pgvector docs.

If you want to make it easier for users to find your extensions, to read your documentation — not to mention provide sources for binary packaging systems — publish every release on PGXN.

Here’s how.

Create an Account

Step one: create a PGXN Manager account. The Email, Nickname, and Why fields are required. The form asks “why” as a simple filter for bad actors. Write a sentence describing what you’d like to release — ideally with a link to the source repository — and submit. We’ll get the account approved forthwith, which will send a confirmation email to your address. Follow the link in the email and you’ll be good to go.

Anatomy of a Distribution

A PostgreSQL extension source tree generally looks something like this (taken from the pair repository):

pair
├── Changes
├── doc
│   └── pair.md
├── Makefile
├── META.json
├── pair.control
├── README.md
├── sql
│   ├── pair--unpackaged--0.1.2.sql
│   └── pair.sql
└── test
    ├── expected
    │   └── base.out
    └── sql
        └── base.sql

Extension authors will recognize the standard PGXS (or pgrx) source distribution files; only the META.json file needs explaining. It is, frankly, the only file that PGXN requires in a release. It contains the metadata to describe the release, following the PGXN Meta Spec. This example contains only the required fields:

{
  "name": "pair",
  "version": "0.1.0",
  "abstract": "A key/value pair data type",
  "maintainer": "David E. Wheeler <david@justatheory.com>",
  "license": "postgresql",
  "provides": {
    "pair": {
      "file": "sql/pair.sql",
      "version": "0.1.0"
    }
  },
  "meta-spec": {
    "version": "1.0.0"
  }
}

Presumably these fields contain no surprises, but a couple of details:

  • It starts with the name of the distribution, pair, and the release version, 0.1.0.
  • The abstract provides a brief description of the extension, while the maintainer contains contact information.
  • The license stipulates the distribution license, of course, usually one of a few known licenses, but may be customized.
  • The provides object lists the extensions or tools provided, each named by an object key that points to details about the extension, including main file, version, and potentially an abstract and documentation file.
  • The meta-spec object identifies the meta spec version used for the META.json itself.

Release It!

This file with these fields is all you need to make a release. Assuming Git, package up the extension source files like so, replacing your extension name and version as appropriate:

git archive --format zip --prefix=pair-0.1.0/ -o pair-0.1.0.zip HEAD

Then navigate to the release page, authenticate, and upload the resulting .zip file.

Screenshot with a box labeled “Upload a Distribution Archive”. It contains an “Archive” label in front of a button labeled “Choose File”. Next to it is a zip file icon and  the text “pair-0.1.0.zip”. Below the box is another button labeled “Release It!”

And that’s it! Your release will appear on pgxn.org and on Mastodon within five minutes.

Let’s Automate it!

All those steps would be a pain in the ass to follow for every release. Let’s automate it using pgxn-tools! This OCI image contains the tools necessary to package and upload an extension release to PGXN. Ideally, use a CI/CD pipeline like a GitHub Workflow to publish a release on every version tag.

Set up Secrets

pgxn-tools uses your PGXN credentials to publish releases. To keep them safe, use the secrets feature of your preferred CI/CD tool. This figure shows the “Secrets and variables” configuration for a GitHub repository, with two repository secrets: PGXN_USERNAME and PGXN_PASSWORD:

Screenshot of GitHub Secrets configuration featuring two repository secrets: PGXN_USERNAME and PGXN_PASSWORD.

Create a Pipeline

Use those secrets and pgxn-tools in a CI/CD pipeline. Here, for example, is a minimal GitHub workflow to publish a release for every SemVer tag:

on:
  push:
    tags: ['v[0-9]+.[0-9]+.[0-9]+']
jobs:
  release:
    name: Release on PGXN
    runs-on: ubuntu-latest
    container: pgxn/pgxn-tools
    env:
      PGXN_USERNAME: ${{ secrets.PGXN_USERNAME }}
      PGXN_PASSWORD: ${{ secrets.PGXN_PASSWORD }}
    steps:
    - name: Check out the repo
      uses: actions/checkout@v4
    - name: Bundle the Release
      run: pgxn-bundle
    - name: Release on PGXN
      run: pgxn-release

Details:

  • Line 3 configures the workflow to run on a SemVer tag push, typically used to denote a release.
  • Line 8 configures the workflow job to run inside a pgxn-tools container.
  • Lines 10-11 set environment variables with the credentials from the secrets.
  • Line 16 bundles the release using either git archive or zip.
  • Line 18 publishes the release on PGXN.

Now publishing a new release is as simple as pushing a SemVer tag, like so:

git tag v0.1.0 -sm 'Tag v0.1.0'
git push --follow-tags

That’s it! The workflow will automatically publish the extension for every release, ensuring the latest and greatest always make it to PGXN where users and packagers will find them.

The pgxn-tools image also provides tools to easily test a PGXS or pgrx extension on supported PostgreSQL versions (going back as far as 8.2), also super useful in a CI/CD pipeline. See Test Postgres Extensions With GitHub Actions for instructions. Depending on your CI/CD tool of choice, you might take additional steps, such as publishing a release on GitHub, as previously described.

Optimizing for PGXN

But let’s dig deeper into how to optimize extensions for maximum discoverability and user visibility on PGXN.

Add More Metadata

The META.json file supports many more fields that PGXN indexes and references. These improve the chances users will find what they’re looking for. This detailed example demonstrates how a PostGIS META.json file might start to provide additional metadata:

{
   "name": "postgis",
   "abstract": "Geographic Information Systems Extensions to PostgreSQL",
   "description": "This distribution contains a module which implements GIS simple features, ties the features to R-tree indexing, and provides many spatial functions for accessing and analyzing geographic data.",
   "version": "3.5.0",
   "maintainer": [
      "Paul Ramsey <pramsey@example.com>",
      "Sandro Santilli <sandro@examle.net>"
   ],
   "license": [ "gpl_2", "gpl_3" ],
   "provides": {
      "postgis": {
         "abstract": "PostGIS geography spatial types and functions",
         "file": "extensions/postgis/postgis.control",
         "docfile": "extensions/postgis/doc/postgis.md",
         "version": "3.5.0"
      },
      "address_standardizer": {
         "abstract": "Used to parse an address into constituent elements. Generally used to support geocoding address normalization step.",
         "file": "extensions/address_standardizer/address_standardizer.control",
         "docfile": "extensions/address_standardizer/README.address_standardizer",
         "version": "3.5.0"
      }
   },
   "prereqs": {
      "runtime": {
         "requires": {
            "PostgreSQL": "12.0.0",
            "plpgsql": 0
         }
      },
      "test": {
         "recommends": {
            "pgTAP": 0
         }
      }
   },
   "resources": {
      "bugtracker": {
         "web": "https://trac.osgeo.org/postgis/"
      },
      "repository": {
         "url": "https://git.osgeo.org/gitea/postgis/postgis.git",
         "web": "https://git.osgeo.org/gitea/postgis/postgis",
         "type": "git"
      }
   },
   "generated_by": "David E. Wheeler",
   "meta-spec": {
      "version": "1.0.0",
      "url": "https://pgxn.org/meta/spec.txt"
   },
   "tags": [
      "gis",
      "spatial",
      "geometry",
      "raster",
      "geography",
      "location"
   ]
}

  • Line 4 contains a longer description of the distribution.
  • Lines 6-9 show how to list multiple maintainers as an array.
  • Line 10 demonstrates support for an array of licenses.
  • Lines 11-24 list multiple extensions included in the distribution, with abstracts and documentation files for each.
  • Lines 25-37 identify dependencies for various phases of the distribution lifecycle, including configure, build, test, runtime, and develop. Each contains an object identifying PostgreSQL or extension dependencies.
  • Lines 38-47 list resources for the distribution, including issue tracking and source code repository.
  • Lines 53-60 contain an array of tags, an arbitrary list of keywords for a distribution used both in the search index and the PGXN tag cloud.

Admittedly the PGXN Meta Spec provides a great deal of information. Perhaps the simplest way to manage it is to copy an existing META.json from another project (or above) and edit it. In general, only the version fields require updating for each release.
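
For instance, a release script might bump them all at once with jq; the field paths follow the examples above and the new version is illustrative:

jq '.version = "3.5.1" | .provides[].version = "3.5.1"' META.json > META.json.tmp \
  && mv META.json.tmp META.json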

Write Killer Docs

The most successful extensions provide ample descriptive and reference documentation, as well as examples. Most extensions feature a README, of course, which contains basic information, build and install instructions, and contact info. But as the pair tree illustrates, PGXN also supports extension-specific documentation in a variety of formats, including:

Some examples:

PGXN will also index and format additional documentation files in any of the above formats. See, for example, all the files formatted for orafce.

Exclude Files from Release

Use gitattributes to exclude files from the release. For example, distributions don’t generally include .gitignore or the contents of the .github directory. Exclude them from the archive created by git archive by assigning export-ignore to each path to exclude in the .gitattributes file, like so:

.gitignore export-ignore
.gitattributes export-ignore
.github export-ignore
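
To confirm the exclusions took, rebuild the archive and list its contents; none of the excluded paths should appear:

git archive --format zip --prefix=pair-0.1.0/ -o pair-0.1.0.zip HEAD
unzip -l pair-0.1.0.zip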

What’s It All For?

PGXN aims to be the trusted system of record for open-source PostgreSQL extensions. Of course that requires that it contain all (or nearly all) of said extensions. Hence this post.

Please help make it so by adding your extensions, both to help users find the extensions they need, and to improve the discoverability of your extensions. Over time, we aim to feed downstream extension distribution systems, such as Yum, APT, CloudNativePG, OCI, and more.

Let’s make extensions available everywhere to everyone.

Mini Summit 5 Transcript: Improving the PostgreSQL Extensions Experience in Kubernetes with CloudNativePG

Orange card with large black text reading “Extension Management in CNPG”. Smaller text below reads “Gabriele Bartolini (EDB)” and the date, “05.07.2025”.

The final PostgreSQL Extension Mini-Summit took place on May 7. Gabriele Bartolini gave an overview of PostgreSQL extension management in CloudNativePG (CNPG). This talk brings together the topics of several previous Mini-Summits — notably Peter Eisentraut on implementing an extension search path — to look at the limitations of extension support in CloudNativePG and the possibilities enabled by the extension search path feature and the Kubernetes 1.33 ImageVolume feature. Check it out:

Or read on for the full transcript with thanks to Floor Drees for putting it together.

Introduction

Floor Drees.

On May 7 we hosted the last of five (5) virtual Mini-Summits that led up to the big one at the Postgres Development Conference (PGConf.Dev), taking place next week in Montreal, Canada. Gabriele Bartolini, CloudNativePG maintainer, PostgreSQL Contributor, and VP Cloud Native at EDB, joined to talk about improving the Postgres extensions experience in Kubernetes with CloudNativePG.

The organizers:

The stream and the closed captions available for the recording are supported by PGConf.dev and their gold level sponsors, Google, AWS, Huawei, Microsoft, and EDB.

Improving the Postgres extensions experience in Kubernetes with CloudNativePG

Gabriele Bartolini.

Hi everyone. Thanks for this opportunity, and thank you Floor and David for inviting me today.

I normally start every presentation with a question, and this is actually the question that has been hitting me and the other maintainers of CloudNativePG — and some are in this call — from the first day. We know that extensions are important in Kubernetes, in Postgres, and we’ve always been asking how can we deploy extensions, without breaking the immutability of the container.

So today I will be telling basically our story, and hopefully providing good insights in the future about how with CloudNativePG we are trying to improve the experience of Postgres extensions when running databases, including issues.

I’ve been using Postgres for 25 years. I’m one of the co-founders of 2ndQuadrant, which was bought by EDB in 2020. And because of my contributions, I’ve been recognized as a Postgres contributor and I’m really grateful for that. And I’m also a “Data on Kubernetes ambassador”; my role is to promote the usage of stateful workloads in Kubernetes. I’m also a DevOps evangelist. I always say this: DevOps is the reason why I encountered Kubernetes, and it will also be the reason why I move away one day from Kubernetes. It’s about culture and I’ll explain this later.

In the past I’ve been working with Barman; I’m one of the creators of Barman. And since 2022, I’m one of the maintainers of CloudNativePG. I want to thank my company, EDB, for being the major contributor in Postgres history in terms of source code. And right now we are also the creators of CloudNativePG. And as we’ll see, the company donated the IP to the CNCF. So it’s something that is quite rare, and I’m really grateful for that.

What I plan to cover tonight is first, set the context and talk about immutable application containers, which have been kind of a dogma for us from day one. Then, how we are handling right now extensions in Kubernetes with CNPG. This is quite similar to the way other operators deal with it. Then the future and key takeaways.

First, we’re talking about Kubernetes. If you’re not familiar, it’s an orchestration system for containers. It’s not just an executor of containers, but it’s a complex system that also manages infrastructure. When it manages infrastructure, it also manages cloud native applications that are also called workloads. When we’re thinking about Postgres in Kubernetes, the database is a workload like the others. That, I think, is the most important mind shift among Postgres users that I have faced myself, that I’ve always treated Postgres differently from the rest. Here in Kubernetes, it’s just another workload.

Then of course, it’s not like any other workload, and that’s where operators come into play, and I think the work that we are doing even tonight is in the direction to improve how databases are run in Kubernetes in general, and for everyone.

It was open sourced in 2014, and it’s owned by the CNCF, and it’s actually the first project that graduated, and graduated is the most advanced stage in the graduation process of the CNCF, which starts with sandbox, then incubation, and then graduation.

CloudNativePG is an operator for Postgres. It’s production-ready — what we say is level five. Level five is kind of a utopic, unbounded level, the highest one as defined by the operator development framework. It’s used by all these players including Tembo, IBM Cloud Paks, Google Cloud, Azure, Akamai, and so on. CNPG is a CNCF project since January. It’s distributed under Apache License 2.0 and the IP — the Intellectual Property — is owned by the community and protected by the CNCF. It therefore is a vendor-neutral and openly governed project. This is kind of a guarantee that it will always be free. This is also, in my opinion, a differentiation between CloudNativePG and the rest.

The project was originally created by EDB, but specifically at that time, by 2ndQuadrant. And, as I always like to recall, it was Simon Riggs that put me in charge of the initiative. I’ll always be grateful to Simon, not only for that, but for everything he has done for me and the team.

CNPG can be installed in several ways. As you can see, it’s very popular in terms of stars. There’s more than 4,000 commits. And what’s impressive is the number of downloads in three years, which is 78 million, which means that it’s used the way we wanted it to be used: with CI/CD pipelines.

This is the CNCF landscape; these are the CNCF projects. As you can see, there are only five projects in the CNCF in the database area, and CloudNativePG is the only one for Postgres. Our aim for 2025 and 2026 is to become incubating. If you’re using CNPG and you want to help with the process, get in touch with me and Floor.

I think, to understand why we’ve done all this process that led to the patch you’ve seen in Postgres 18, it’s important to understand what cloud native has meant to us since we started in 2019. We’ve got our own definition, but I think it still applies. For us, cloud native is three things. It’s people that work following DevOps culture. For example, there are some capabilities that come from DevOps that apply to the cloud native world. I selected some of them, like in user infrastructure, infrastructure abstraction, version control. These three form the infrastructure-as-code principle, together with the declarative configuration.

A shift left on security. You’ll see with CloudNativePG, we rarely mention security because it’s pretty much everywhere. It’s part of the process. Then continuous delivery.

The second item is immutable application containers, which kind of led the immutable way of thinking about extensions. And then the third one is that these application containers must be orchestrated via an infrastructure-as-code by an orchestrator, and the standard right now is Kubernetes.

For us it’s these three things, and without any of them, you cannot achieve cloud native.

So what are these immutable application containers? To explain immutability I’d like to talk about immutable infrastructure, which is probably what the majority of people that have historically worked with Postgres are used to. I’m primarily referring to traditional environments like VMs and bare metal where the main ways we deploy Postgres is through packages, maybe even managed by configuration managers, but still, packages are the main artifacts. The infrastructure is seen as a long-term kind of project. Changes happen over time and are incremental updates, updates on an existing infrastructure. So if you want to know the history of the infrastructure over time, you need to check all the changes that have applied. In case of failure of a system, systems are healed. So that’s the pets concept that comes from DevOps.

On the other hand, immutable infrastructure relies on OCI container images. OCI is a standard, the Open Container Initiative and it’s part of the Linux Foundation as well. Immutable infrastructure is founded on continuous delivery, which is the foundation of GitOps practices. In an immutable infrastructure, releasing a new version of an application is not updating the system’s application, it is building a new image and publishing it on a public registry and then deploying it. Changes in the system happen in an atomic way: the new version of a container is pulled from the registry and the existing image is almost instantaneously replaced by the new one. This is true for stateless applications and we’ll see, in the case of stateful applications like Postgres, is not that instantaneous because we need to perform a switchover or restart — in any case, generate a downtime.

When it comes to Kubernetes, the choice was kind of obvious to go towards that immutable infrastructure. So no incremental updates, and in the case of stateful workloads where you cannot change the content of the container, you can use data volumes or persistent volumes. These containers are not changed. If you want to change even a single file or a binary in a container image, you need to create a new one. This is very important for security and change management policies in general.

But what I really like about this way of managing our infrastructure is that, at any time, Kubernetes knows exactly what software is running in your infrastructure. All of this is versioned in an SCM, like Git or whatever. This is something that in the mutable world is less easy to obtain. Again, for security, this is the foundational thing because this is how you can control CVEs, the vulnerabilities in your system. This is a very basic representation of how you build, contain — let’s say the lifecycle of a container image. You create a Dockerfile, you put it in Git, for example, then there’s an action or a pipeline that creates the container image, maybe even run some tests and then pushes it to the container registry.

I walked you through the concepts of mutable and immutable containers; what are these immutable application containers? If you go back and read what we were writing before CloudNativePG was famous or was even used, we were always putting in immutable application containers as one of the principles we could not lose.

For an immutable application container, it means that there’s only a single application running; that’s why it’s called “application”. If you have been using Docker, you are more familiar with system containers: you run a Debian system, you just connect and then you start treating it like a VM. Application containers are not like that. And then they are immutable — read-only — so you cannot even make any change or perform updates of packages. But in CloudNativePG, because we are managing databases, we need to put the database files in separate persistent volumes. Persistent volumes are standard resources provided by Kubernetes. This is where we put PGDATA and, if you want, a separate volume for WAL files with different storage specifications and even an optional number of table spaces.

CloudNativePG orchestrates what we call “operand images”. These are very important to understand. They contain the Postgres binaries and they’re orchestrated via what we call the “instance manager”. The instance manager is just the process that runs and controls Postgres; it’s the PID 1 — or the entry point — of the container.

There’s no other process at work, like SSHD or other applications. There’s just the instance manager that then controls everything else. And this is the project of the operand images. It’s an open source project, and every week we rebuild the Postgres containers. We recently made some changes to the flavors of these images and I’ll talk about it shortly.

We mentioned the database, we mentioned the binaries, but what about extensions? This is the problem. Postgres extensions in Kubernetes with CloudNativePG is the next section, and it’s kind of a drama. I’m not hiding this. The way we are managing extensions in Kubernetes right now, in my opinion, is not enough. It works, but it’s got several limitations — mostly limitations in terms of usage.

For example, we cannot place them in the data files or in persistent volumes because these volumes are not read-only in any way. In any case, they cannot be strictly immutable. So we discarded this option to have persistent volume where you could kind of deploy extensions and maybe you can even download on the fly or use the package manager to download them or these kind of operations. We discarded this from the start and we embraced the operand image solution. Essentially what we did was placing these extensions in the same operand image that contains the Postgres binaries. This is a typical approach of also the other operators. If you think about also Zalando we call it “the Spilo way”. Spilo contained all the software that would run with the Zalando operator.

Our approach was a bit different, in that we wanted lighter images, so we created a few flavors of images, and also selected some extensions that we placed in the images. But in general, we recommended to build custom images. We provided instructions and we’ve also provided the requirements to build container images. But as you can see, the complexity of the operational layer is quite high, it’s not reasonable to ask any user or any customer to build their own images.

This is how they look now, although this is changing as I was saying:

A stack of boxes with “Debian base image” at the top, then “PostgreSQL”, then “Barman Cloud”, and finally  three “Extension” boxes at the bottom.

You’ve got a base image, for example, the Debian base image. You deploy the Postgres binaries. Then — even right now though it’s changing — CloudNativePG requires Barman Cloud to be installed. And then we install the extensions that we think are needed. For example, I think we distribute pgAudit, if I recall correctly, pgvector and pg_failover_slots. Every layer you add, of course, the image is heavier and we still rely on packages for most extensions.

The problem is, you’ve got a cluster that is already running and you want, for example, to test an extension that’s just come out, or you want to deploy it in production. If that extension is not part of the images that we build, you have to build your own image. Because of the possible combinations of extensions that exist, it’s impossible to build all of these combinations. You could build, for example, a system that allows you to select what extensions you want and then build the image, but in our way of thinking, this was not the right approach. And then you’ve got system dependencies; if an extension brings a vulnerability, it affects the whole image and requires more updates — not just of the cluster, but also new builds of the image.

We wanted to do something else, but we immediately faced some limitations in the technologies: one in Postgres, the other in Kubernetes. In Postgres, extensions need to be placed in a single folder; it’s not possible to define multiple locations. But thanks to the work that Peter and his team have done, we now have extension_control_path in version 18.

Kubernetes, until about ten days ago, did not allow mounting OCI artifacts as read-only volumes. There’s a new feature, now part of Kubernetes 1.33, that allows us to do it.

This is the patch that I was talking about, by Peter Eisentraut. I’m really happy that CloudNativePG is mentioned as one of the use cases. And it also mentions the work that David, Marco, Niccolò, and I — primarily Marco and Niccolò from CloudNativePG — have done.

This is the patch that introduced VolumeSource in Kubernetes 1.33.

The idea is that with Postgres 18 we can now set, in the configuration, where Postgres looks for extensions in the file system. And then, if there are libraries, we can also use the existing dynamic_library_path GUC.
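
For instance, a minimal postgresql.conf sketch of those two settings might look like the following — the mount locations are purely illustrative and assume an extension image mounted under /extensions/pgvector:

extension_control_path = '/extensions/pgvector/share:$system'
dynamic_library_path = '/extensions/pgvector/lib:$libdir'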

So, you remember, this is where we come from [image above]; the good thing is we have the opportunity to build Postgres images that are minimal, that only contain Postgres.

Three stacks of boxes. On the left, “Debian base image” on top of “PostgreSQL”. On the right, “Debian base image” on top of “Barman Cloud”. On the lower right, a single box for an extension.

Instead of recreating them every week — because it’s very likely that some dependency has a CVE, so we recreate them for everyone, forcing everyone to update their Postgres systems — we can now release them maybe once a month, pretty much following the Postgres patch release cadence, and release more frequently if there are CVEs.

The other good thing is that we’re now working to remove CloudNativePG’s dependency on Barman Cloud. CloudNativePG has a new plugin interface, and with 1.26 — which is expected in the next weeks — we’re suggesting people start moving new workloads to the Barman Cloud plugin solution. What happens is that Barman Cloud will live in a sidecar image, distributed separately, so its lifecycle is independent from the rest. But the biggest advantage is that any Postgres extension can be distributed this way — right now we’ve got packages, and the idea is that they are also distributed as images.

If we start thinking about this approach: if I write an extension for Postgres, until now I’ve been building only packages for Debian or for RPM systems. If I start thinking about also building container images, they could be immediately used by CloudNativePG’s new way of managing extensions. That’s my ultimate goal, let’s put it that way.

This is how things will change at run time without breaking immutability.

A box labeled “PostgreSQL Pod” with four separate boxes inside, labeled “Container Postgres”, “Sidecar Barman Cloud”, “Volume Extension 1”, and “Volume Extension 2”.

There will be no more need to think about all the possible combinations of extensions. There will be the Postgres pod, running for example a primary or a standby, with the container for Postgres; if you’re using Barman Cloud, the sidecar container managed by the Barman Cloud plugin; and then, for every extension you have, a separate image volume that is read-only, very light, containing only the files distributed in the extension’s container image, and that’s all.
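
As a rough sketch of what such a mount could look like at the pod level — assuming the Kubernetes 1.33 image volume feature, with names and paths that are illustrative only, not what CloudNativePG actually generates:

apiVersion: v1
kind: Pod
metadata:
  name: postgresql-example
spec:
  containers:
    - name: postgres
      image: ghcr.io/cloudnative-pg/postgresql-trunk:18-devel
      volumeMounts:
        # Read-only image volume containing only the extension files.
        - name: pgvector
          mountPath: /extensions/pgvector
          readOnly: true
  volumes:
    - name: pgvector
      image:
        reference: ghcr.io/cloudnative-pg/pgvector-18-testing:latest
        pullPolicy: IfNotPresent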

Once you’ve got these, we can then coordinate the settings for extension_control_path and dynamic_library_path. What we did was start a fail-fast pilot project within EDB to test the work that Peter was doing on extension_control_path. For that we used the Postgres Trunk Containers project, a very interesting project we have at CloudNativePG: every day it rebuilds the latest snapshot of the master branch of Postgres, so that we can catch, at an early stage, problems with the new version of Postgres in CloudNativePG. There’s also an action that builds container images for a specific Commitfest patch, for example. So we used that.

Niccolò wrote a pilot patch, an exploratory patch, for the operator to define an extensions stanza inside the cluster resource. He also built some bare container images for a few extensions. We made sure to include a very simple one and the most complex one, which is PostGIS. The patch is still a draft, and the idea is to have it in the next version of CloudNativePG, 1.27. This is how it works:

apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: postgresql-with-extensions
spec:
  instances: 1
  imageName: ghcr.io/cloudnative-pg/postgresql-trunk:18-devel
  postgresql:
    extensions:
      - name: pgvector
        image:
          reference: ghcr.io/cloudnative-pg/pgvector-18-testing:latest
  storage:
    storageClass: standard
    size: 1Gi

We have the extensions section in the cluster definition. We name the extension; theoretically we could also define the version; and we point to the image. What’s missing in this pilot patch is support for image catalogs, but that’s something we can worry about later.

What happens under the hood is that when you update or add a new extension in the cluster definition, a rolling update is initiated. So there’s a short downtime, but the container image is loaded in the replicas first, and then in the primary. An image volume is mounted for each extension in, let’s say, an /extensions/$name_of_extension folder, and CNPG updates these two parameters. It’s quite clean, quite neat. It works, but most of the work needs to happen here. So that’s been my call: to treat container images as first-class artifacts. If this changes, we have a new way to distribute extensions.

Just to approach the conclusion: if you want to know more about the whole story, I wrote this blog article that recaps everything. The key takeaway for me — and then we can go deeper on the patch if you want, and also address the questions — but what is important for me? Having been in the Postgres community for a long time, I think this is a good moment for us to challenge the status quo of the extension distribution ecosystem.

I think we have an opportunity now to define a standard — and, just to be clear, I’m focusing primarily on CNPG, but this applies in general, even to other operators. I’m sure this will benefit everyone, and overall it will reduce the waste we collectively create when distributing these extensions in Kubernetes. If this becomes a standard way to distribute extensions, the benefits will be much better operational work for everyone, and primarily easier testing and validation of extensions. Right now, if you maintain an extension, it’s very easy to build container images for it if you’re on GitHub — GitHub already provides the whole infrastructure for you to easily build container images.

So if we find a standard way to define a GitHub action to build Postgres extensions, then, if you’re an extension developer, you can just use it, and you get a registry in your project that continuously or periodically publishes the extension. Any user can just reference that image URL and then, without having to build images, they’re just one rolling update away from testing a patch, or testing the upgrade paths.
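
As a purely hypothetical sketch of what such a shared workflow could look like — the trigger, registry, and action versions here are illustrative assumptions, not an agreed standard:

name: build-extension-image
on:
  push:
    tags: ["v*"]
permissions:
  contents: read
  packages: write
jobs:
  image:
    runs-on: ubuntu-latest
    steps:
      # Check out the extension sources.
      - uses: actions/checkout@v4
      # Log in to the registry that will host the extension image.
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      # Build the image from the repository Dockerfile and push it.
      - uses: docker/build-push-action@v6
        with:
          context: .
          push: true
          tags: ghcr.io/${{ github.repository }}:${{ github.ref_name }}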

I think there are some unknown unknowns that kind of scare me, in general, about upgrades — upgrades of extensions. This is, in my opinion, one of the biggest issues. It’s not that they’re not solved, but they require more attention and more testing if you’re using them in an immutable world. All of this, in my opinion, will be much, much better with the approach we’ve proposed. Images will be lighter, and a lighter image is also safer and more secure: less prone to CVEs, less prone to require frequent updates, and it also reduces bandwidth usage for an organization in general. And as I was saying before, any extension project can be fully independent, with its own way to build and publish images.

One last point. I keep hearing from many sides that all of the stuff we are proposing right now seems like a kind of limitation of Kubernetes. The way I see it, it’s not actually a limitation; it’s that these problems have never been addressed before. The biggest mistake we can make is to focus on the specific problem of managing extensions without analyzing the benefits that the entire stack brings to an organization. Kubernetes brings a lot of benefits in terms of security, velocity, change management, and operations that any organization must consider right now. To any Postgres DBA, any Postgres user, my advice is: if you haven’t done it yet, start taking Kubernetes seriously.

Discussion

Floor: I do think that David, you wanted to talk maybe a little bit about the mutable volume pattern?

David: Well, if people are interested, in your early slide where you were looking at alternatives, one you were thinking of was putting extensions on a mutable volume and you decided not to do that. But at Tembo we did do that and I did a bunch of work trying to improve it and try to minimize image size and all that in the last couple months. Tembo Cloud is shutting down now, so I had to stop before I finished it, but I made quite a bit of progress. I’m happy to kind of talk through the ideas there. But I think that this approach is a better long term solution, fundamentally.

Gabriele: I would like Marco and Niccolò, if you want, to talk about the actual work you’ve done. Meanwhile, Peter asks, “why does an installation of an extension require a small downtime?” The reason is that at the moment, with the image volume patch, if you add a new image volume, it requires the pod to restart. Nico or Marco, Jonathan, correct me on that if needed.

Nico or Marco or Jonathan: It provides a rolling update of the cluster right now.

Gabriele: So that’s the reason. That’s the only drawback, but the benefits in my opinion, are…

David: My understanding is that, to add a new extension, it’s mounted in a different place. And because every single extension is its own mount, you have to add it to both of those GUCs, and at least one of them requires a restart.

Gabriele: But then, for example, we’ve had this conversation at EDB: we’re planning to have flavors of predefined extensions. You can choose a flavor and we distribute those extensions — I dunno, for AI we place some AI kind of extensions in the same image — so it would be different.

But otherwise I’m considering the most extreme case of one extension, one container image, which in my opinion, for the open source world, is the way it will hopefully happen. Because this way — think about it, I haven’t mentioned this — if I write an extension, I can then build the image and then run automated tests using Kubernetes to assess my extension on GitHub. If those tests fail, my commit will never be merged to main. This is trunk development, continuous delivery. This is, in my opinion, a far better way of delivering and developing software. This is, again, the reason why we ended up in Kubernetes. It’s not because it’s a technology we like, or a toy or something; it’s because it solves bigger problems than database problems.

Even when we talk about databases, there’s still work that needs to be done and improved. I’m really happy that more people who know Postgres are joining CloudNativePG nowadays and elevating the discussions more and more to the database level. Before, it was primarily at the Kubernetes level, but now we see people who know Postgres better than me get into CloudNativePG and propose new ideas, which is great. Which is the way it needs to be, in my opinion.

But I remember, Tembo approached us because we actually talked a lot with them. Jonathan, Marco, I’m sure that you recall, when they were evaluating different operators and they chose CloudNativePG. I remember we had these discussions where they asked us to break immutability and we said, “no way”. That’s why I think Tembo had to do the solution you described, because we didn’t want to do it upstream.

I think, to be honest and to be fair, if image volumes had not been added, we would’ve probably gone down that path, because the current way of managing extensions, as I was saying, is not scalable. Because we always want to improve, I think we need to be critical of what we do. So, I don’t know, Niccolò, Marco, I would like you, if you want, to explain briefly.

[A bit of chatter, opened this Dockerfile.]

FROM ghcr.io/cloudnative-pg/postgresql-trunk:18-devel AS builder

USER 0

COPY . /tmp/pgvector

RUN set -eux; \
	mkdir -p /opt/extension && \
	apt-get update && \
	apt-get install -y --no-install-recommends build-essential clang-16 llvm-16-dev && \
	cd /tmp/pgvector && \
	make clean && \
	make OPTFLAGS="" && \
	make install datadir=/opt/extension/share/ pkglibdir=/opt/extension/lib/

FROM scratch

COPY --from=builder /opt/extension/lib/* /lib/
COPY --from=builder /opt/extension/share/extension/* /share/

Niccolò: I forked, for example, pgvector. That’s what we can do, basically, for every simple extension that we can just build. This one is a bit more complicated because we have to build against a trunk version of Postgres 18, so we have to compile pgvector from source, and then in a scratch layer we just archive the libraries and every other artifact that was previously built. But ideally, whenever PG 18 comes out as a stable version of Postgres, we just need to apt install pgvector and grab the files from the path. Where it gets a bit more tricky is in the case of PostGIS, or TimescaleDB, or any extension whose library requires third-party libraries. For example, PostGIS has a strong dependency on the geometry libraries, so you need to import them as well inside the mount volume. I can link you an example of the PostGIS one.
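
As a rough illustration of what Niccolò describes, the scratch stage of such an image might also copy the third-party shared libraries the extension links against — the library names and paths below are hypothetical, not taken from the actual PostGIS image:

# Copy third-party shared libraries required by the extension into the
# same volume as the extension files (illustrative Debian paths).
COPY --from=builder /usr/lib/x86_64-linux-gnu/libgeos_c.so* /lib/
COPY --from=builder /usr/lib/x86_64-linux-gnu/libproj.so* /lib/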

Gabriele: I think it’s important — we’ve got, I think, Peter here, and David as well — if we could get standard ways in Postgres to generate Dockerfiles for extensions, that would be great. And as I said, these extensions can be used by any operator, not only CNPG.

David: That’s what my POC does. It’s a patch against PGXS that would build a trunk image.

Gabriele: This is the work that Niccolò had to do to make PostGIS work in the pilot project: he had to copy everything.

Niccolò: I think we can make it a little bit smoother and dynamically figure out everything from the extension’s libraries, so we don’t have to hand-code everything like this, but this is just a proof of concept that it can work.

David: So you installed all those shared libraries that were from packages.

Niccolò: Yeah, they’re being copied in the same MountVolume where the actual extensions are copied as well. And then the pilot patch is able to set up the library path inside the pod so that it makes the libraries available to the system because of course, these libraries are only part of the MountVolume. They’re not injected inside the system libraries of the pod, so we have to set up the library path to make them available to Postgres. That’s how we’re able to use them.

David: So they end up in PKGLIBDIR but they still work.

Niccolò: Yeah.

Gabriele: I mean, there’s better ideas, better ways. As Niccolò also said, it was a concept.

David: Probably a lot of these shared libraries could be shared with other extensions. So you might actually want other OCI images that just have some of the libraries that are shared between them.

Gabriele: Yeah, absolutely. So we could work on a special kind of extension, or even metadata, so that we can place, you know…

So, yeah, that’s it.

Jonathan: I think it’s important to invite everyone to try and test this, especially the Postgres trunk containers, when they want to try something new like this one, just because we always need people testing. When more people review and test, it’s amazing. Because every time we release something, we’ll probably miss something — some extension, like PostGIS missing one of the libraries that wasn’t included in the path. Even if we try to find a way to include it, it will not be there. So testing, please! Test all the time!

Gabriele: Well, we’ve got this action now, and they’re failing. I mean, it’s a bit embarrassing. [Cross talk.] We already have a patch to fix it.

But I mean, this is a great project, as I mentioned before, because it allows us to test the current version of Postgres, but also, if you want to build from a Commitfest patch or if you’ve got your own Postgres repository with sources, you can compile and get the images using this project.

Floor: Gabriele, did you want to talk about SBOMs?

Gabriele: I forgot to mention Software Bills of Materials. They’re very important — they’re kind of a basic requirement now for any container image. There’s the possibility to add them to these container images too. This is very important, again, for change management, for security, and for the supply chain in general. And signatures too — we’ve got signatures for packages as well. There’s also attestation of provenance.

Floor: Very good, thanks everyone!

CBOR Tag for JSON Number Strings

For a side project, I’m converting JSON inputs to CBOR, or Concise Binary Object Representation, defined by RFC 8949, in order to store a more compact representation in the database. This Go app uses the encoding/json package’s UseNumber decoding option to preserve numbers as strings, rather than float64s. Alas, CBOR has no support for such a feature, so such values cannot survive a round-trip to CBOR and back, as demonstrated by this example using the github.com/fxamacker/cbor package (playground):

// Decode JSON number using json.Number.
input := bytes.NewReader([]byte(`{"temp": 98.6}`))
dec := json.NewDecoder(input)
dec.UseNumber()
var val map[string]any
if err := dec.Decode(&val); err != nil {
	log.Fatalf("Err: %v", err)
}

// Encode as CBOR.
data, err := cbor.Marshal(val)
if err != nil {
	log.Fatalf("Err: %v", err)
}

// Decode back into Go.
var newVal map[string]any
if err := cbor.Unmarshal(data, &newVal); err != nil {
	log.Fatalf("Err: %v", err)
}

// Encode as JSON.
output, err := json.Marshal(newVal)
if err != nil {
	log.Fatalf("Err: %v", err)
}

fmt.Printf("%s\n", output)

The output:

{"temp":"98.6"}

Note that the input contains the number 98.6, but once the value has been transformed to CBOR and back it becomes the string "98.6".

I wanted to preserve JSON numbers treated as strings. Fortunately, CBOR uses numeric tags to identify data types, and includes a registry maintained by IANA. I proposed a new tag for JSON numbers as strings and, through a few iterations, the CBOR group graciously accepted the formal description of semantics and assigned tag 284 in the registry.

Now any system that handles JSON numbers as strings can use this tag to preserve the numeric representation in JSON output.
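
For illustration, in CBOR diagnostic notation a value using the tag is simply the tag number wrapping the decimal string — for the example above:

284("98.6")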

Here’s how to use the tag customization features of github.com/fxamacker/cbor to transparently round-trip json.Number values (playground):

// Create tag 284 for JSON Number as string.
tags := cbor.NewTagSet()
tags.Add(
    cbor.TagOptions{
        EncTag: cbor.EncTagRequired,
        DecTag: cbor.DecTagRequired,
    },
    reflect.TypeOf(json.Number("")),
    284,
)

// Create a custom CBOR encoder and decoder:
em, _ := cbor.EncOptions{}.EncModeWithTags(tags)
dm, _ := cbor.DecOptions{
    DefaultMapType: reflect.TypeOf(map[string]any(nil)),
}.DecModeWithTags(tags)

// Decode JSON number using json.Number.
input := bytes.NewReader([]byte(`{"temp": 98.6}`))
dec := json.NewDecoder(input)
dec.UseNumber()
var val map[string]any
if err := dec.Decode(&val); err != nil {
    log.Fatalf("Err: %v", err)
}

// Encode as CBOR.
data, err := em.Marshal(val)
if err != nil {
    log.Fatalf("Err: %v", err)
}

// Decode back into Go.
var newVal map[string]any
if err := dm.Unmarshal(data, &newVal); err != nil {
    log.Fatalf("Err: %v", err)
}

// Encode as JSON.
output, err := json.Marshal(newVal)
if err != nil {
    log.Fatalf("Err: %v", err)
}

fmt.Printf("%s\n", output)

The main difference from the previous example comes at the top: the code creates a CBOR encoder (em) and decoder (dm) with tag 284 assigned to json.Number values, then uses them, rather than the cbor package defaults, to Marshal and Unmarshal the values. The result:

{"temp":98.6}

Et voilà! json.Number values are once again preserved.

I believe these custom CBOR encoder and decoder configurations bring full round-trip compatibility to any regular JSON value decoded by encoding/json. The other important config for that compatibility is the DefaultMapType decoding option, which ensures decoded maps use string keys rather than the CBOR-default any keys.

2025 GSOC: Mankirat Singh — ABI Compliance Reporting

I’m pleased to welcome Mankirat Singh to the Postgres community as a 2025 Google Summer of Code contributor. Mankirat will be developing an ABI compliance checker and reporting system to help identify and prevent unintentional ABI changes in future minor Postgres releases. This follows on the heels of the addition of ABI and API guidance in Postgres 18, as well as the ABI-breaking Postgres 17.1 release. What timing!

Please follow Mankirat’s blog as he develops the project this summer, under the mentorship of myself and Pavlo Golub. It should also soon be on Planet PostgreSQL. We’ve also set up the #gsoc2025-abi-compliance-checker channel on the community Slack for ad-hoc discussion. Join us!

Mini Summit 5: Extension Management in CNPG

Orange card with large black text reading “Extension Management in CNPG”. Smaller text below reads “Gabriele Bartolini (EDB)” and the date, “05.07.2025”.

The last Extension Ecosystem Mini-Summit is upon us. How did that happen?

Join us for a virtual conference session featuring Gabriele Bartolini, who will be discussing Extension Management in CNPG. I’m psyched for this one, as the Postgres community has contributed quite a lot to improving extension management in CloudNativePG in the past year, some of which we covered previously. If you miss it, the video, slides, and transcript will appear here soon.

Though it may take a week or two to get the transcript done, considering that PGConf.dev is next week, featuring the Extension Ecosystem Summit on Tuesday, 13 May in Montréal, Canada. Hope to see you there; be sure to say “hi!”

Mini Summit 4 Transcript: The User POV

Orange card with large black text reading “The User POV”. Smaller text above reads “04.23.2025” and below reads “Celeste Horgan (Aiven), Sonia Valeja (Percona), & Alexey Palazhchenko (FerretDB)”

On April 23, we hosted the fourth of five virtual Mini-Summits leading up to the big one at the Postgres Development Conference (PGConf.dev), taking place May 13-16 in Montréal, Canada. Celeste Horgan, Developer Educator at Aiven, Sonia Valeja, PostgreSQL DBA at Percona, and Alexey Palazhchenko, CTO of FerretDB, joined for a panel discussion moderated by Floor Drees.

And now, the transcript of “The User POV” panel, by Floor Drees.

Introduction

My name is Floor, I’m one of the organizers of these Extension Ecosystem Mini-Summits. Other organizers are also here:

The stream and the closed captions available for the recording are supported by PGConf.Dev and their gold level sponsors, Google, AWS, Huawei, Microsoft, and EDB.

Next, and last in this series, on May 7 we’re gonna have Gabriele Bartolini talk to us about Extension Management in CloudNativePG. Definitely make sure you head over to the Meetup page, if you haven’t already, and RSVP for that one!

The User POV

Floor: For the penultimate edition of this series, we’re inviting a couple of Postgres extension and tooling users to talk about how they pick and choose the projects they want to use, how they do their due diligence, and their experience with running extensions.

But I just wanted to set the context for the meeting today. We thought that, being in the depths of it all as an extension developer, you kind of lose the perspective of what it’s like to use extensions and other auxiliary tooling. You lose that user’s point of view. But users, maybe coming from other ecosystems, are used to a different, probably smoother experience. I’m coming from the Rails and Ruby community, so RubyGems is my one-stop shop for extending functionality.

That’s definitely a completely different experience from when I started using Postgres extensions. That’s not to say that those ecosystems — NPM and PIP and WordPress — don’t have their own issues, but we can certainly learn from some of the differences between the ecosystems. Ultimately, what we want to cover today is the experience of using extensions in 2025, and what our users’ wishes are for the future.

Celeste: Hello my name is Celeste, I am on the developer relations team at Aiven. I only really started using Postgres as a part of my job here at Aiven, but have been a much longer contributor to similar-sized ecosystems. I was really heavily involved in the Kubernetes ecosystem for quite a while. Kubernetes is an extensible-by-design piece of software, but it’s many, many generations of software development later than some of the concepts that Postgres pioneered. Thank you for having me, Floor!

Sonia: Hello everybody! I started working with PostgreSQL in the year 2012, and since then it has been quite a journey. Postgres has been my primary database, and along with learning PostgreSQL, I learned other databases alongside it. I learned Oracle, I learned SQLServer, but only from the perspective — which is important — of migrating from X database to PostgreSQL, as in Oracle to PostgreSQL migration, SQLServer to PostgreSQL migration. I learned about the other databases, and I’m fortunate to have worked as a PostgreSQL developer, PL/pgSQL developer, PostgreSQL DBA, onsite coordinator, offsite coordinator, sometimes a trainer. So, in and out, it has been like I’m breathing PostgreSQL since then.

Alexey: Thanks for having me! I first worked with Postgres in 2005. Fast forward to today and I am doing FerretDB, which is the open source MongoDB replacement built on top of PostgreSQL and also on top of the DocumentDB extension recently open-sourced by Microsoft. We provide this extension to our users, but also we consume this extension as users of that extension. Somewhere in between, between 2005 and now, I also worked at Percona. At Percona I worked on monitoring software and worked with pg_stat_statements and pg_stat_monitor, which is made by Percona and I have pretty much a lot of experience with Postgres extensions.

Floor: And you’re cheating a little on this panel, seeing as you are not only a user but also a provider. I definitely have some questions for you!

And y’all talked a little about your sort of experience with extensibility of other software or technology, and comparing that to the Postgres experience. Can you all talk about what the main differences are that you have observed with other ecosystems?

Celeste: I think as somebody who’s a bit of a newer Postgres user and I guess comes from a different community, the biggest thing that weirded me out, when I started working with Postgres, is that there’s no way to install an extension except to install it against your live database.

If you compare that to something like Kubernetes, which again has a rather robust extensibility ecosystem, both on the networking side of things, but also other aspects of it, the inherent software architecture makes it so that you have to plan out what you’re going to do, and then you apply a plan. In theory you can’t apply a plan or add extensions to Kubernetes that won’t work or will somehow break the system. Again, in theory, in practice things are more interesting.

But with Postgres and with databases in general, you’re always working with the live dataset, or at some point you have to work with the live dataset. So there’s no real way to test.

Sonia: Most of the other databases I have worked with — apart from PostgreSQL — are licensed, so Oracle and SQLServer. When it comes to PostgreSQL, it’s open source, so you do your own thing: you do the installation, you check everything out — the code is open source, you can see it, and things like that. But when it comes to other databases, since they’re licensed, they’re managed by the specific vendor, so you do not have the rights to do anything else. The thing that’s common is that you do a POC in both databases before you actually implement anything in the production environment.

Alexey: Floor, you mentioned RubyGems, and I was thinking that there actually is something similar between PostgreSQL extensions and RubyGems, in the sense that RubyGems quite often extend built-in Ruby classes, and Postgres extensions can do the same. There is no separation between public and private inside PostgreSQL; it’s all just C symbols, with no special mark saying “don’t touch this API, we are going to change it, it’s an internal detail.” Nothing like that. They try not to break compatibility needlessly, but on the other hand, you have to check all versions of your extension against all the versions of PostgreSQL. In that sense it’s quite similar, unlike some other languages where there’s better separation between public and internal — if not at the compiler level, at least at the documentation level or something like that.

Celeste: That’s not necessarily a criticism of Postgres. I think it’s just that those were the tools available to Postgres as a community when Postgres was being developed. There are some advantages to that too, because, for lack of a better word, the lack of checks and balances lets some Postgres extensions do very, very interesting things that would maybe not be possible under a more restricted framework.

Floor: The main difference I see between those two is that I know to go to RubyGems as my place to get my plugins — or my gems, in that case. Whereas with Postgres, they can live pretty much anywhere, right? There’s different directories and there’s different places where you can get your stuff and maybe there’s something that is in a private repo somewhere because that’s what another team at your company is working on. It’s a bit of a mess, you know? It’s really difficult to navigate, where maybe other things are lot less difficult to navigate because there’s just the single place.

I wanna talk a little bit about when you’re looking for an extension to do a certain thing for you. What do you consider when you’re looking for an extension or when you’re comparing some of its tooling? I wrote down a couple of things that you might be looking at, or what I might be looking at: maybe it’s docs and tutorials, maybe it’s “has it seen a recent release?” Has it seen frequent releases? Is there only one company that is offering this extension? Or is it multiple companies supporting this extension? Is it a community-built tool? Is it already in use by other teams in your company? So it’s something that has been tested out with your system, with your stack, and you feel like it’s something that you can easily adopt.

So what are some of the things for you that you definitely look at when you’re looking to adopt new tooling?

Celeste: I think the main thing you wanna look for when you’re looking at really any open source project, whether it’s an extension or not, is both proof points within the project, but also social proof. Proof points within the project are things that you mentioned, like is there documentation? Does this seem to be actively maintained? Is the commit log in GitHub moving? How many open issues are there? Are those open issues being closed over time? Those are project health indicators. For example, if you look at the CHAOSS Project, Dawn Foster has done a ton of work around monitoring project health there.

But I think the other half of this — and this was actually something we worked on a lot at the Cloud Native Computing Foundation when I was there, and that work continues, and it makes a bit more sense in some cases than others — is social proof. So, are there other companies using it? Can you point to case studies? Can you point to case studies of something being in production? Can you point to people giving conference talks where they mention something being in use?

This becomes really important when you start thinking about things being enterprise-grade, and when you start thinking about the idea of enterprise-grade open source. Everybody on this panel works for a company that does enterprise-grade open source database software, and you have to ask yourself what that means. A lot of what it means is that other enterprises are using it, because that means it has reached a certain level of reliability.

Sonia: I would like to add some things. What I look at is how difficult or how easy it is to install, configure, and upgrade the extension, and whether it needs a restart of the database service or not. Why do I look at the restart aspect? Because when I install, configure, or upgrade it — whatever activity I perform with it — if it requires a restart, that means it cannot be configured online, so I need to involve other folks, since an application is connecting to the database. When I restart, it needs a maintenance window for a small time — however long the database service goes offline. So whether it requires a restart or not is very important for me to understand.

Apart from that, the documentation should of course be easy to understand. That is one of the aspects when you install and configure: it should not be so difficult that I need to re-read everything every time I use it, or that I need to create another script just to use it. I look at those aspects as well.

Apart from that, I also look at how I can monitor the activities of the extension — whether what it is doing is visible in the logs. It should not break my existing setup, basically. So: how stable and durable it is, and whether I am able to monitor whatever the extension is doing.

From the durability perspective, even if I’m not able to monitor it via the logs, it should be durable enough that it does not break anything else that is up and running.

One more thing: I will definitely perform a POC in some lower environment or test environment before putting it into production.

Floor: How do you figure out though, how easy something is to sort of set up and configure? Are you looking for that information from a README or some documentation? Because I’ve definitely seen some very poorly documented stuff out there…

Sonia: Yeah, documentation is one aspect. Apart from that, when you do the POC, you’ll actually be using it, so through the POC itself you’ll be able to understand how easy it is to install, configure, and use.

Alexey: For me as a user, I would say the most important thing is whether the extension is packaged and easy to install, and whether it’s packaged the same way PostgreSQL itself is packaged. For example, if I get PostgreSQL from my Ubuntu distribution and the extension is not in the same Ubuntu repository, it might as well not exist for me, because there is no way I’m going to compile it myself. It’s like hundreds of flags, and it being C — okay, I can make it 1% faster, but then it’ll be insecure and will bring PostgreSQL down, or worse. So there are a lot of problems like that.

If it’s not packaged, then I would probably just do something myself — not as good, not as stable, but something I can build and support — rather than use some third-party extension that is not packaged properly. And “properly”, for me, is a high bar. If it’s in some third-party network of extensions, that might be okay, I will take a look. But if it’s in the Ubuntu or Debian repository, that would of course be much better.

Floor: I think that’s the build versus buy — or not necessarily buy if it’s open source. Not to say that open source is free. But that’s the discussion, right? When do you decide to spend the time to build something over adopting something? And so for you, that’s mainly down to packaging?

Alexey: For me that’s the most important one, because for the features we generally need at my current job and previous jobs, there are enough hooks in PostgreSQL itself to build what we want ourselves. Sometimes we need to parse logs, sometimes we need to parse some low-level counters, but that’s doable, and we can do it in a different language, in a way we can maintain ourselves. With PostgreSQL extensions that typically means C, and if there’s some problem, we would have a bigger problem finding someone to maintain it and fix it fast.

Floor: Alright. When you build it yourself, would you then also open-source it yourself and take on the burden of maintenance?

Alexey: I mean, that really depends on the job. At Percona we open-sourced pg_stat_monitor, but there the implicit goal of making the extension open source was to make it a superset of pg_stat_statements. At FerretDB, of course, DocumentDB is open source — we contribute to it, but I couldn’t say that’s easier. If it were written in our perfect language, Go, it would be much, much easier. Unfortunately, it’s not. So we have to deal with it, with packaging and whatnot.

Floor: I guess it’s also like build versus buy versus fork because there’s definitely different forks available for a similar tooling that is just optimized for a little bit of a different use case. But again, that’s then another project out there that needs to be maintained.

Alexey: But at the same time, if you fork something and don’t want to contribute back, you just don’t have the problem of maintaining it for someone else; you just maintain it for yourself. Of course, if someone upstream wants to pull your changes, they will be able to. And then they look at you like you’re a bad part of the community because you don’t contribute back — but that depends on the size of the company, whether you have the resources, and all that.

Celeste: But now you’re touching on something that I feel very strongly about when it comes to open source. Why open source anything to begin with? If we can all just maintain close forks of everything that we need, why is Postgres open source to begin with and why does it continue to be open source and why are we having this discussion 30 or 40 years into the lifespan of Postgres at this point?

The fact of the matter is that Postgres being open source is the reason we’re still here today. Postgres is a 30-plus-year-old database at this point. Yes, it’s extremely well architected, and it continues to be applicable to modern data use cases. But really, the fundamental fact of the matter is that it is free, and being free means two things. One, it’s a very smart move for businesses to build a business on top of a particular piece of software. But two — and I would argue this is actually the more important point when it comes to open source and its long-term viability — because it is free, it is A) proliferative: it has proliferated across the software industry; and B) extremely valuable for professionals to learn Postgres or Kubernetes or Linux, because they know they’re going to encounter it sometime in their career.

So when it comes to extensions, why open source an extension? You could simply close source an extension. It’s the same reason: if you use open source extensions, you can then hire for people who have potentially encountered those extensions before.

I work for a managed service provider that deploys quite a few Postgreses for quite a few clients. I obviously have a bit of a stake in the build versus buy versus fork debate that is entirely financial and entirely linked to my wellbeing. Regardless, it still makes sense for a company like Aiven to invest in open source technologies, but it makes a lot more sense for us to hire Postgres experts who can then manage those extensions and manage the installation of those extensions and manage whether your database works or not against certain extensions, than it is for literally every company out there on the planet to hire a Postgres professional. There’s still a use case for open-sourcing these things. That is a much larger discussion though, and I don’t wanna derail this panel. [Laughs.]

Floor: I mean, if Alexey is game, you got yourself a conversation.

Alexey: First of all, I completely agree with you, and of course I built my whole career on open source. But there’s also the other side. Let’s say you build an open source extension that is very specific, very niche, and solves your particular problem. And there are like 20 other people who have the same problem, and all 20 come to your GitHub and ask questions about it. And they do it for free — you just waste your time supporting them, essentially. And you are a small company, you are just three people, and you open-sourced this extension just for fun. And of those three people, two work full time just to support it.

Celeste: Oh yeah, no, I didn’t say the economics of this worked out for the people doing the open-sourcing, just to be perfectly clear. I think there’s a much larger question around the sustainability of open source communities in general. Postgres, the overall project, and, say, the main Kubernetes project are outliers in terms of the amount of support, manpower, and energy they get. Whereas most things that get open-sourced are — I think Tidelift had a survey: the average number of maintainers for any given open source project is one. That is a much larger debate, though. Realistically it makes a lot of sense, particularly for larger companies, to use open source software, Postgres included, because it accelerates their time to innovation. They don’t need to worry about developing a database, for example. And if they’re using Postgres and they decide they want time series data, they don’t need to worry about migrating to a time series database when they can just use Timescale.

However, “are they contributing back to those projects?” becomes a really big question. I think the next question that Floor would like to lead us to — and I’m just going to take the reins here, Floor —

Floor: Are you taking my job??

Celeste: Hardly, hardly, I could never! My understanding of why we’re having this series of conversations around the sustainability of the Postgres extensions ecosystem is that there’s a governance question there as well. For the end user, the ideal state for any Postgres extension is that it’s blessed and vetted by the central project. But as soon as you start doing that, you start realizing how limited the resources are, even in a massive project like Postgres. And then you start asking: where should those people come from? And then you start thinking: there are companies like Microsoft out there hiring a lot of open source contributors, and that’s great, but… What about the governments? What about the universities? What about the smaller companies? The real issue is the manpower, and there’s only so far you can go as a result of that. There are always sustainability issues around all open source, including Postgres extensions, that come down to the sustainability of open source as a whole and whether or not this is a reasonable way of developing software. Sorry to get deep. [Laughs.]

Floor: Yeah, I think these are discussions that we’re definitely having a lot in the open source community, and in the hallway at a lot of conferences.

We’re gonna open it up to audience questions too in a minute. So if people want to continue talking about the drama that is open source and sustainable open source, we can definitely continue this discussion.

Maybe going back a little bit, Alexey, can we talk a little bit about — because you’re also a provider — what your definition of “done” is, or what you want to offer your users at minimum when you do decide to open-source or make available some of your stuff.

Alexey: As an open source company, what we do is just publish our code on GitHub and that’s it. It’s open source, that’s done. Knock yourself out, and if you want some support, you just pay us and then we will provide it. That’s how we make money. Well, of course not — it’s more complicated than that, and I sometimes wish it were that simple, to some degree. There are still a lot of users who just come and ask questions for free, and you want to support them because you want to increase adoption and all that.

It’s the same with extensions. As I just described the situation — of course, not to provoke a discussion, but — let’s say you build a PostgreSQL extension. You need some hooks in the core that ideally would be stable and don’t change between versions, which, as we discussed, is a bit of a problem: PostgreSQL has no separation between private and public APIs. Then, how do you install it? You need to package it the same way your current PostgreSQL version is packaged. There is no easy way, for example, to extend the version of PostgreSQL that’s part of a Docker image; you just build your own container.

Celeste: I’ll segue into the point that I think I was supposed to make when we were talking about the extensions ecosystem, as opposed to a rant about the sustainability of open source, which I am unfortunately always down to give. Here’s the thing with extension ecosystems: for the end user, it is significantly more beneficial if those extensions are somehow centrally controlled. If you think about something like RubyGems or the Python package installer, or even Docker to a certain extent, those are all ways of centralizing. Though with some of the exploits that have gone on with NPM recently, there are obviously still problems there.

As I mentioned, there are always staffing problems when it comes to open source. Assigning somebody to approve every single extension under the sun isn’t really sustainable from a human perspective. The way we handle this in the Kubernetes community — particularly for the container network interfaces, of which there are many, many, many — is we effectively manage it with governance. We have a page in the documentation on the website that says: here are all the container network interfaces that have chosen to list themselves with us. The listings are alphabetical, so there is no order of precedence.

The community does not take responsibility for this code because we simply cannot. In being a container network interface, it means that they implement certain functionalities, like an interface in the programming sense. We just left it at that. That was the solution that the Kubernetes community came to. I don’t know if that’s the solution that the Postgres community will eventually come to, but community governance is a huge part of the solution to that problem, in my opinion.

Alexey: I think one big difference between NPM, and the NodeJS ecosystem in general, and, for example, Postgres extensions, is that NPM became so popular — and there are so many packages — mostly because NodeJS itself is quite small. The core of NodeJS is really, really small; there is no standard library, and a lot of functionality is external. So I would say that as long as your core — PostgreSQL or Ruby or Kubernetes — is large enough, the number of extensions will be limited just by that, because many people will not use any extensions; they will just use the core. That, just by itself, could solve the problem of vetting and name-squatting. I would say PostgreSQL more or less solves this problem to some degree.

Floor: Before we open up for some questions from participants: Sonia, in a previous call, shared a little bit of a horror story with us, about wanting to use a certain extension and not being able to. I think this is something other people can resonate with, having been through similar things. Let’s hear that story, and then, of course, Celeste, Alexey, if you have similar stories, do share before we open up for questions from the rest of the peeps joining here.

Sonia: So there was this requirement to transfer data from one database to another database, specifically with PostgreSQL. I wanted to transfer data from the production environment to some other environment, or internally between the non-production environments. I created this extension called dblink — I’m talking about way back, 2012, 2013 or so, when I started working with PostgreSQL — and I used that extension. When you configure that extension, you need to give the credentials in a human-readable format, and at times they also get stored in the logs or somewhere.

I mean, even if they’re not stored in the logs, what the security team or the audit team said was that since it uses credentials in a human-readable format, this is not good. And if somebody has access to the X database, they also get access to the Y database or the Y cluster. And what if it goes to the production environment and somebody can just steal the data without us even knowing it? It won’t get logged that somebody accessed my production database via a non-production database. So that was not good, and it was not acceptable to the auditors.

I love that extension even today, because without doing any scripting or anything, you just access one database from another database and get whatever you want. As a developer, it might be very easy for me to use that thing, but it’s also easy for another person who is trying to snoop into your production database or other data. So we were asked not to use that extension, at least not to connect to the production environment.

I was working on a taxation project. It was financially critical data, and they did not want any risk of anybody reaching that data, because it was the numbers, the financial figures, and it was critical. So that’s the reason we were asked to refrain from using it on that particular project. But on other projects, which were not that critical, I somehow managed to convince them to use it. [Laughs.]

Floor: So sometimes you choose it for convenience and it’s an acceptable risk, and then there might be restrictions from other teams as well. Thanks for sharing that. If anyone wants to unmute and ask questions or share their own horror stories, you’re now very welcome to.

Yurii: There was a really good point about extensions being available as part of your operating system environment, for example Ubuntu packages or Red Hat packages. This is where we still have a lot of difficulty, in general, in this ecosystem. Obviously PGDG is doing an amazing job capturing a fraction of those extensions. But because it is a complicated job, oftentimes unpaid, people are trying to make the best of it. On the one hand, it does serve as a filter, in that only the best of the best extensions, the ones people really use, get through that filter and become part of the PGDG distribution. But it also creates an impediment. For example, PGDG is not always able to update them as the releases come out. Oftentimes people do need the latest and best releases available, not when the packagers have time.

The other problem is: how do extensions become popular if they’re not there in the first place? It creates that kind of problem where you’re stuck with what you have. And there’s a problem with discovery: how do I find them? And how do I trust this build? Or can I even get those builds for my operating system?

Obviously there are some efforts that try to mitigate that by building a Docker container where you run them with just copies of those files. But obviously there’s demand for a native deployment method. That is, if I deploy my Postgres this way — say using RPM in my Red Hat-based distro, or Debian-based — I want everything else to fall into that. I don’t want a new system.

I think we still have a lot of work to do on that end. I’ve been putting some effort in on our end to figure out how we can save packagers’ time and decrease the amount of work that needs to be done. Can we go, essentially, from “here’s the URL for the extension, figure it out”? Like 80% of them we can just figure out and package automatically, and repackage when new versions come out, and only assign people to the remaining 20% that don’t build according to a certain convention, so they need some attention.

This way we can get more extensions out and extract more value from them. By using them, we’re helping the authors gain a wider audience and effectively creating value for everybody in the community. Otherwise, they’re left feeling, “I can’t really promote this as well as I would’ve loved to, like in other ecosystems” — RubyGems was mentioned today, and NPM, etc. There it’s easy to get your stuff out there, whereas in the Postgres community, it is not easy to get your stuff out there. And there are so many risks associated with that — we are oftentimes working with production data, right?

We need to make sure there is less friction on every side. We need to get these extensions considered. That’s at least one of the points I wanted to mention. I think there’s a lot to be done, and I really hope that the conference next month in Montréal will be a great place to get the best minds together again and hash out some of the ideas we’ve been discussing over the past number of months.

Floor: David, do you wanna ask your question of where people go to learn more about extensions and find their extensions?

David: This is something that I tried to solve a while ago with a modicum of success — a bit. My question is: where do you all go to learn more about extensions? To find out what extensions are available, or whether there is an extension that does X, Y, Z? How do you find out if there is, and then evaluate it? Where do you go?

Alexey: I generally just search, I guess. I don’t go to any one place. Quite often I learn from some blog post, or sometimes from GitHub itself.

Celeste: If you think about that project-level activity proof, and then the social proof, I think that Postgres actually has a really unique advantage compared to a lot of other open source projects because it’s been going for so long and because there is a very entrenched community. It’s very easy to find social proof for basically anything Postgres-related that you might want.

If you do a search for, like, “I want a Postgres extension that does X”, you’re going to get comparatively better Google search results because there’s years and years and years of search results in some cases. However, that does come with the equal and opposite problem of when you have maintenance issues, because things have been going for years and years, and you don’t know whether things have been maintained or not.

I’m thinking about this from an open source management perspective, and as somebody who is not necessarily involved in the open source development of Postgres. I think there is a case to be made for some amount of community vetting of some extensions, publicizing that vetting, and having a small subset with some sort of seal of approval — it’s not gonna nuke your database. To a certain extent, Postgres already does that, because it ships with a set of extensions by default. In shipping those extensions, the upstream Postgres community is effectively blessing them, saying we will ship Postgres with them because we are pretty confident they are not going to nuke your database.

When I was at the CNCF, I supported a whole bunch of different open source projects — I was everybody’s documentation girl. So I’m trying to throw things at them, and then hopefully you can talk about them in Montréal and maybe something useful will come of it. Another thing you can use is an alpha/beta/experimental sort of scheme: you define some set of criteria for something being alpha or experimental, some set of criteria that, if met, lets it call itself beta, and some set of criteria for something being “production ready” for an extensions ecosystem. Then you can have people submit applications, and it’s less of a mad rush.

I guess if I had any advice — not that Postgres needs my Charlton advice — it would be to think about how you wanna manage this from a community governance perspective, or else you will find yourself in utter mayhem. There’s a reason that the Kubernetes container network interface page specifies that things have to be listed in alphabetical order. It’s because there was mayhem until we decided to list things in alphabetical order. It seems completely silly, but it is real. [Laughs.]

Alexey: So my next project is going to start with “aa”.

Sonia: Yeah, what Celeste said. I will research it online, normally, and I will find something; if I get lots of options for doing X thing — a lot of extensions — I will go and search the documentation on postgresql.org and then try to figure out which one to start my POC with.

Celeste: Let me flip the question for you, Sonia. In an ideal world, if you were to try to find an extension to use for a particular task, how would you find that extension?

Sonia: Normally I will research it, Google it most of the time, and then try to find out —

Celeste: But pretend you don’t have to Google it. Pretend that maybe there’s a website or a resource. What would your ideal way of doing that be? If you had some way that would give you more of a guarantee that it was trustworthy, or would make it easier to find, or something. Would it be a tool like RubyGems? Would it be a page on the Postgres website’s documentation?

Sonia: Page! The PostgreSQL website documentation. The Postgres documentation is like a Bible for me, so I keep researching on that. In fact, previously when you Googled anything, you used to get postgresql.org, the website, as the first link. Nowadays you don’t get it as the first link, but then I will scroll down the page, try to figure out where postgresql.org is, and go there. That’s the first thing. And since I’ve been in the field for a very long time, I know, okay, this website is authentic, I can go and check out the blogs — who else has used it, what their experience was, things like that.

Jay Miller: I have to ask this only because I am new to thinking about Postgres outside of how I interact with it from a web developer’s perspective. Usually I use some ORM, I use some module. I’m a Python developer, so I use Python, and then from there, I don’t think about my database ever again.

Now I want to think about it more. I want to have a very strong relationship with it. And we live in a world where you have to say that one of the answers is going to be AI. One of the answers is I search for something, I get some AI response, and here’s like the…

David in comments: SLOP.

Jay: Exactly, this is the problem. If I don’t know what I should do and I get a response, when the response could have just been, “use this extension, it does everything you need to do and it makes your life so much easier.” Instead, I wind up spending days, if not weeks, going in and fighting against the system itself. Sonia, you mentioned having that experience. The idea or the ability to discern when to go from some very kludgey PostgreSQL function that makes your life miserable to “oh, there’s an extension for this already! I’m just going to use that.” How do you expose that to people who are not dumb — they’re not vibe coding, they just finally have a reason to actively think about what their database is doing behind the scenes?

Sonia: If I understood your question correctly, you wanted to explore what kind of activities a specific extension is doing.

Jay: I would just love the like, “hey, you’re trying to do a thing, this has already been solved in this extension over here, so you don’t have to think about it.” Or “you’re trying to do something brand new, no one’s thought about this before, or people have thought about it before and talked about how much of a pain it is. Maybe you should create an extension that does this. And here’s the steps to do that.” Where is the proper documentation around coming to that decision, or the community support for it?

Sonia: That’s a great question to discuss inside the community, to be honest. Like, how do we go about that?

David: Come to Montréal and help us figure it out.

Jay: I was afraid of that answer. I’ll see you in New York, or hopefully Chicago on Friday.

Floor: Fair enough, but definitely a wonderful question that we should note down for the discussion.

Sonia: One thing I want to add, which this just reminded me of: there was one podcast I was listening to, with Robert Haas. The podcast is organized by one of the Microsoft folks. It revolved around how to commit inside PostgreSQL, or how to read what is written inside PostgreSQL, and the ecosystem around that. The questions were related to that, and that could also help. And of course, when you go to the conference we are discussing at the moment, you’ll definitely find good answers there. But listening to that podcast will help give you the answers to an extent.

Floor: I think that’s Talking Postgres with Claire Giordano, or if it was the previous version, it was the “Path to Citus Con”, because that was what it was called before.

David: The summit that’s in Montréal on May 13th is an unconference session. We have a limited amount of time, so we want to collect topic ideas and ad hoc votes for ideas of things to discuss. Last year I used a website with Post-Its. This year I’m just trying a spreadsheet. I posted a link to the Google Sheet, which anybody in the world can access and pollute — I mean, put in great ideas — and star the ideas they’re really interested in talking about. And I’d really appreciate people contributing to that. Good topics came up today! Thank you.

Floor: Thanks everyone for joining us. Thank you to our panelists specifically, for sharing their experiences.

Update Your Control Files

Reviews of the extension search path patch, now committed and slated for PostgreSQL 18, revealed a few issues with extension configuration. Based on the ensuing discussion, and even though PostgreSQL 18 will include workarounds, it’s best to make adjustments to the extensions you maintain, the better to serve existing PostgreSQL versions and to hew closer to best practices.

Thus, a couple of recommendations for extension maintainers.

  1. Remove the $libdir/ prefix from the module_pathname directive in the control file. The $libdir/ prefix requires extension modules to live in pkglibdir (see pg_config) and in no other directory included in dynamic_library_path, which limits where users can install them. PostgreSQL 18 will ignore the prefix, and the docs will no longer recommend it.

  2. Remove the directory parameter from the control file and the MODULEDIR directive from the Makefile. Honestly, few people used these directives, which installed extension files in subdirectories or even completely different absolute directories. In some cases they may have been useful for testing or extension organization, but the introduction of the extension search path obviates those use cases. See the sketch after this list for what both changes might look like.
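
For illustration, here’s roughly what both changes might look like for a hypothetical extension named my_ext — the name, version, and file contents here are invented for this sketch, not taken from any real project:

# my_ext.control
comment         = 'Example extension, for illustration only'
default_version = '1.0.0'
relocatable     = true
# Was: module_pathname = '$libdir/my_ext'
module_pathname = 'my_ext'
# Was: directory = 'my_ext' — now simply omitted.

And the corresponding PGXS Makefile, with the MODULEDIR assignment deleted:

# Makefile
EXTENSION = my_ext
MODULES   = my_ext
DATA      = my_ext--1.0.0.sql
# Was: MODULEDIR = my_ext — now simply omitted, so PGXS uses its default.
PG_CONFIG ?= pg_config
PGXS := $(shell $(PG_CONFIG) --pgxs)
include $(PGXS)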

These changes will future-proof your extensions and make them better ecosystem citizens. Plus, they clean out some otherwise funky configurations that just aren’t necessary. Make the changes today — and while you’re at it, test your extensions with PostgreSQL 18 pre-releases!

Look, I’ll go first.

Mini Summit 4: The User POV

Orange card with large black text reading “The User POV”. Smaller text above reads “04.23.2025” and below reads “Celeste Horgan (Aiven), Sonia Valeja (Percona), & Alexey Palazhchenko (FerretDB)”

And we’re back.

Join us this Wednesday, April 23, at noon America/New_York (16:00 UTC) for Extension Mini Summit #4, where our panel consisting of Celeste Horgan (Aiven), Sonia Valeja (Percona), and Alexey Palazhchenko (FerretDB) will discuss “The User POV”. This session will be a terrific opportunity for those of us who develop extensions to get an earful from the people who use them, in both anger and joy. Bang on the Meetup to register for this live video session.

Fix Postgres strchrnul Compile Error on macOS 15.4

Just a quick note to users of pgenv and anyone else who compiles Postgres on macOS. In macOS 15.4, Apple introduced a new API, strchrnul, which is common on other platforms. As a result, attempting to compile Postgres on 15.4 and later leads to this error:

snprintf.c:414:27: error: 'strchrnul' is only available on macOS 15.4 or newer [-Werror,-Wunguarded-availability-new]
  414 |                         const char *next_pct = strchrnul(format + 1, '%');
      |                                                ^~~~~~~~~
snprintf.c:366:14: note: 'strchrnul' has been marked as being introduced in macOS 15.4 here, but the deployment target is macOS 15.0.0
  366 | extern char *strchrnul(const char *s, int c);
      |              ^
snprintf.c:414:27: note: enclose 'strchrnul' in a __builtin_available check to silence this warning

Tom Lane chased down and committed the fix, which will be in the next releases of Postgres 13-17. It should also go away once macOS 16.0 comes out. But in the meantime, set MACOSX_DEPLOYMENT_TARGET to the current OS release to avoid the error:

export MACOSX_DEPLOYMENT_TARGET="$(sw_vers -productVersion)"

If you use pgenv, you can add it to your configuration. It will need to be added to all the version-specific configs, too — unless they don’t exist and you also set:

PGENV_WRITE_CONFIGURATION_FILE_AUTOMATICALLY=no
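
Alternatively, for a one-off build you can set the variable inline when invoking pgenv. A minimal sketch — the Postgres version number here is just an example:

# Pin the deployment target to the running OS for this build only.
MACOSX_DEPLOYMENT_TARGET="$(sw_vers -productVersion)" pgenv build 17.4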