Khrys’presso of Monday 30 March 2026

30 March 2026 at 05:42

As every Monday, a glance in the rear-view mirror to catch up on the news you may have missed last week.


All the links listed below should, in principle, be freely accessible. If that is not the case, consider enabling your favourite JavaScript blocker or switching to “reader mode” (Firefox) ;-)

Brave New World

 

AI special

Middle East war(s) special

Women around the world special

RIP

France special

RIP

  • Lionel Jospin: the left of another era (politis.fr)

    Some will see in it an easy symbol. The passing, on 23 March, of Lionel Jospin, the man who embodied “the plural left”, comes the day after municipal elections that rather validated the thesis of the irreconcilable lefts.

    Image

  • Lionel Jospin, a feminist legacy written into law (lesnouvellesnews.fr)

    Parity, abortion rights, the fight against sexist and sexual violence, feminisation of language, workplace equality, paternity leave… Lionel Jospin wrote into law several major feminist reforms that still structure equality policies today.

Women in France special

Media and power special

Irresponsible pains-in-the-neck managing things appallingly (and neoliberal-style) special

Rollback of rights and freedoms, police violence, rise of the far right… special

Resistance special

Resistance tools special

GAFAM and co. special

The other reads of the week

The comics/graphics/photos of the week

The videos/podcasts of the week

The cool things of the week

Image

Find the previous web reviews in the Framablog’s Libre Veille category.

The articles, comments and other images that make up these “Khrys’presso” reflect my views only (Khrys).

Thibault Martin: I realized that You don't care

29 March 2026 at 16:00

Quite a few of us maintain our own websites and publish our thoughts. We play in hard mode:

  • We need to build our website before even publishing our first post.
  • We don’t benefit from the network effect of bigger platforms to get eyeballs on our writing.
  • LLMs aggressively scrape the web and can serve our thoughts or expertise to their users without them visiting our websites.

And on top of that, you don’t care.

And I don’t expect you to care. Like the rest of us, you are flooded with information constantly. You’re fed so many words that you read the equivalent of whole books every day. How entitled would I be to expect you to care about my words when you have to filter through every story you’re bombarded with?

So why do we keep the small web alive?

I can’t speak for others, but I know why I maintain my website and why I publish my thoughts there. By increasing order of importance:

  1. I keep my web development skills reasonably up to date.
  2. I can shape my website to adapt to my content, and not the other way around.
  3. I have freedom of tone and vocabulary. I don’t have to censor words like "suicide" or "sex".
  4. I write long form posts that help me shape my thoughts, develop ideas, and receive feedback from my peers and readers.

If you can afford to, I can only encourage you to write and publish your thoughts on your own platform, as long as you don’t expect others to care in return.

Gedit Technology: gedit 50.0 released

28 March 2026 at 10:00

gedit 50.0 has been released! Here are the highlights since version 49.0 from January. (Some sections are a bit technical).

No Large Language Model AI tools

The gedit project now disallows the use of LLMs for contributions.

The rationales:

Programming can be seen as a discipline between art and engineering. Both art and engineering require practice. It's the action of doing - modifying the code - that permits a deep understanding of it, to ensure correctness and quality.

When generating source code with an LLM tool, the real sources are the inputs given to it: the training dataset, plus the human commands.

Adding generated content to the version control system (e.g., Git) is usually frowned upon. Moreover, we aim for reproducible results (following the best practices of reproducible builds, and of reproducible science more generally). Modifying generated content after the fact is also bad practice.

Releasing earlier, releasing more often

To follow the release early, release often mantra more closely, gedit aims for a faster release cadence in 2026, with smaller deltas between versions. Time will tell how it goes.

The website is now responsive

Since last time, we’ve put some effort into the website. Readers on small-screen devices should have a more pleasant experience.

libgedit-amtk becomes "The Good Morning Toolkit"

Amtk originally stands for "Actions, Menus and Toolbars Kit". There was a desire to expand it to include other GTK extras that are useful for gedit needs.

A more appropriate name would be libgedit-gtk-extras. But renaming the module - not to mention the project namespace - is more work. So we've chosen to simply continue with the name Amtk, just changing its scope and definition. And - while at it - sprinkle a bit of fun :-)

So there are now four libgedit-* modules:

  • libgedit-gfls, aka "libgedit-glib-extras", currently for "File Loading and Saving";
  • libgedit-amtk, aka "libgedit-gtk-extras" - it extends GTK for gedit needs, with the exception of GtkTextView;
  • libgedit-gtksourceview - it extends GtkTextView and is a fork of GtkSourceView, to evolve the library for gedit needs;
  • libgedit-tepl - the Text Editor Product Line library, it provides a high-level API, including an application framework for creating new text editors more easily.

Note that all of these are still under construction.

Some code overhaul

Work continues steadily inside libgedit-gfls and libgedit-gtksourceview to streamline document loading.

You might think this is a problem that was solved many years ago, but that's actually not the case for gedit. Many improvements are still possible.

Another area of interest is the completion framework (part of libgedit-gtksourceview), where changes are still needed to make it fully functional under Wayland. The popup windows are sometimes misplaced. So between gedit 49.0 and 50.0 some progress has been made on this. The Word Completion gedit plugin works fine under Wayland, while the LaTeX completion with Enter TeX is still buggy since it uses more features from the completion system.

Thibault Martin: I realized that I created too much friction to publish

28 March 2026 at 10:00

I love writing on my blog. I love taking a complex topic, breaking it down, understanding how things work, and writing about how things clicked for me. It serves a double purpose:

  1. I can organize my thoughts, ensure I understood the topic fully, and explain it to others.
  2. It helps my future self: if I forgot about the topic, I can read about what made it click for me.

But as of writing, the last time I published something on my blog was 5 months ago.

The blogging process

My blog posts tend to be lengthy. My writing and publishing process is the following.

  1. Take a nontrivial topic, something I didn't know about or didn't know how to do.
  2. Understand it, break it down, and get a clear picture of how things work.
  3. Write an outline for the post with the key points.
  4. Ask my smarter friends if the outline makes sense.
  5. Flesh out the outline into a proper blog post, with all the details, code snippets, and screenshots.
  6. Ask my smarter friends to review the post again.
  7. Get an illustrator to create a banner for the post, which also serves as an Open Graph preview image.
  8. Publish the post.

That is a lot of work. I have many posts stuck between steps 3 and 5, because they take quite a bit of time. Asking an illustrator to create a banner for the post also adds friction: obviously I need to pay the illustrator, but I also need to wait for him to be done with the illustration.

Not everything has to be a blog post

Sometimes I have quick thoughts that I want to jot down and share with the rest of the world, and I want to be able to find them again later. There are two people I follow who write a lot, often in short format.

  1. John Gruber on his blog Daring Fireball.
  2. Simon Willison, on his Weblog.

Both of them have very short format notes. Willison even blogged about what he thinks people should write about.

Reducing friction and just posting

I don't think friction should be avoided at all costs. Take emails for example: there's a delay between when you send a message and your peer receives it, or the other way around. That friction encourages longer form messages, which gives more time to organize thoughts.

I also welcome the friction I have created for my own posts: I go through a proper review process and publish higher-quality posts.

But there's also room for spontaneity. So I've updated my website to let me publish two smaller formats:

  • TILs. Those are short posts about something I've learned and found interesting.
  • Thoughts. Those are shorter posts I jot down in less than 20 minutes to develop simple thoughts.

Sebastian Wick: Three Little Rust Crates

27 March 2026 at 00:15

I published three Rust crates:

  • name-to-handle-at: Safe, low-level Rust bindings for Linux name_to_handle_at and open_by_handle_at system calls
  • pidfd-util: Safe Rust wrapper for Linux process file descriptors (pidfd)
  • listen-fds: A Rust library for handling systemd socket activation

They might seem like rather arbitrary, unconnected things – but there is a connection!

systemd socket activation passes file descriptors and a bit of metadata as environment variables to the activated process. If the activated process exec’s another program, the file descriptors get passed along because they are not CLOEXEC. If that process then picks them up, things could go very wrong. So the activated process is supposed to mark the file descriptors CLOEXEC and unset the socket activation environment variables. If a process doesn’t do this, for whatever reason, the same problems can arise. So there is another mechanism to help prevent it: another bit of metadata contains the PID of the target. Processes can check it against their own PID to figure out whether they were the target of the activation, without having to depend on all other processes doing the right thing.
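The PID check described above can be sketched in Rust. The helper name below is made up for illustration; only the `LISTEN_PID` environment variable and the fds-start-at-3 convention are systemd-defined, and the listen-fds crate mentioned earlier presumably handles all of this for real:

```rust
// Sketch of the $LISTEN_PID self-check an activated service performs.

use std::env;

/// Hypothetical helper: true if the LISTEN_PID value names our own PID.
fn is_activation_target(listen_pid: Option<&str>, own_pid: u32) -> bool {
    listen_pid
        .and_then(|s| s.trim().parse::<u32>().ok())
        .map_or(false, |pid| pid == own_pid)
}

fn main() {
    let listen_pid = env::var("LISTEN_PID").ok();
    if is_activation_target(listen_pid.as_deref(), std::process::id()) {
        // We are the activation target: take ownership of the passed
        // fds (numbered from 3 upwards), mark each of them CLOEXEC
        // (fcntl with F_SETFD/FD_CLOEXEC), and unset LISTEN_FDS and
        // LISTEN_PID before exec'ing anything else.
        println!("socket activation targeted at us");
    } else {
        println!("not the activation target; leave the fds alone");
    }
}
```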

PIDs however are racy because they wrap around pretty fast, and that’s why nowadays we have pidfds. They are file descriptors which act as a stable handle to a process and avoid the ID wrap-around issue. Socket activation with systemd nowadays also passes a pidfd ID. A pidfd ID however is not the same as a pidfd file descriptor! It is the 64 bit inode of the pidfd file descriptor on the pidfd filesystem. This has the advantage that systemd doesn’t have to install another file descriptor in the target process which might not get closed. It can just put the pidfd ID number into the $LISTEN_PIDFDID environment variable.

Getting the inode of a file descriptor doesn’t sound hard. fstat(2) fills out struct stat, which has the st_ino field. The problem is that it has the type ino_t, which is 32 bits on some systems, so we might end up with a process identifier that wraps around pretty fast again.
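As an aside, Rust's standard library sidesteps part of that trap: `MetadataExt::ino()` returns a `u64` on every Unix platform, whatever the width of the C `ino_t`. A minimal sketch (the temp-file path is purely for demonstration):

```rust
// Query the inode of an open file via fstat, as exposed by Rust's std.

use std::fs::File;
use std::io;
use std::os::unix::fs::MetadataExt;

fn inode_of(file: &File) -> io::Result<u64> {
    // metadata() on an open File stats the file descriptor itself.
    Ok(file.metadata()?.ino())
}

fn main() -> io::Result<()> {
    // Demonstration only: create a throwaway file and print its inode.
    let path = std::env::temp_dir().join("inode-demo.txt");
    let file = File::create(&path)?;
    println!("inode: {}", inode_of(&file)?);
    std::fs::remove_file(&path)?;
    Ok(())
}
```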

We can however use the name_to_handle syscall on the pidfd to get a struct file_handle with a f_handle field. The man page helpfully says that “the caller should treat the file_handle structure as an opaque data type”. We’re going to ignore that, though, because at least on the pidfd filesystem, the first 64 bits are the 64 bit inode. With systemd already depending on this and the kernel rule of “don’t break user-space”, this is now API, no matter what the man page tells you.
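Under that assumption, extracting the pidfd ID from a handle is just reading the first 8 bytes. A sketch (the helper name is made up, and the actual name_to_handle_at call is omitted; the name-to-handle-at crate mentioned above presumably provides it):

```rust
// Interpret the "opaque" f_handle bytes from name_to_handle_at(2) on a
// pidfd: by de-facto kernel/systemd API, the first 64 bits are the
// inode on the pidfd filesystem, i.e. the pidfd ID.

/// Hypothetical helper: first 8 bytes of f_handle as a native-endian u64.
fn pidfd_id_from_handle(f_handle: &[u8]) -> Option<u64> {
    let bytes: [u8; 8] = f_handle.get(..8)?.try_into().ok()?;
    Some(u64::from_ne_bytes(bytes))
}

fn main() {
    // Fake handle bytes for demonstration; a real handle would come
    // from the name_to_handle_at syscall on a pidfd.
    let fake = 0x1234u64.to_ne_bytes();
    println!("{:?}", pidfd_id_from_handle(&fake));
}
```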

So there you have it. It’s all connected.

Obviously both pidfds and name_to_handle have more exciting uses, many of which serve my broader goal: making Varlink services a first-class citizen. More about that another time.

This Week in GNOME: #242 Shuffling Cards

27 March 2026 at 00:00

Update on what happened across the GNOME project in the week from March 20 to March 27.

GNOME Releases

Sophie (she/her) reports

GNOME 48.10 has been released. This is the final release for GNOME 48. If you are still using the GNOME 48 runtime on Flathub, you can update to the GNOME 50 runtime directly. The GNOME 48 runtime will be marked as end of life (EOL) on April 11. Apps that are still using the runtime at this point will trigger warnings for their users.

GNOME Core Apps and Libraries

Khalid Abu Shawarib reports

Version 50 of Fonts was released this week!

This release includes a redesigned font preview grid that is more responsive when scrolling and has a uniform text baseline.

Moreover, the search bar is now always visible, and supports type-to-search in the main font preview grid.

Image

Python Bindings (PyGObject)

Python language bindings for GNOME platform libraries.

Arjan announces

PyGObject 3.56.2 has been released. This release contains a few fixes:

  • Fix issue when do_dispose is called while the garbage collector is running.
  • Retain object floating state for get-/set-property calls.

As always, the latest version is available on PyPI and the GNOME download server.

GNOME Circle Apps and Libraries

Sophie (she/her) says

As you may already have learned from the GNOME 50 release notes, Sessions has been accepted into GNOME Circle.

Sessions is a simple visual timer application designed specifically for the pomodoro technique. The app is maintained by Felicitas Pojtinger.

Image

Warp

Fast and secure file transfer.

Fina reports

Warp 1.0 has been released, finally breaking the light speed barrier. New features include a new shortcuts dialog, runtime and translation updates. Engage!

Image

Video Trimmer

Trim videos quickly.

YaLTeR reports

I released Video Trimmer 26.03 with an improvement suggested by one of the users: the prefilled filename in the save dialog now includes the trimming timestamps. This way, there are no filename conflicts when extracting several fragments from a video.

I also added several CLI flags to pre-set the start and end timestamp, and the precise trim and remove audio options.

Image

Identity

Compare images and videos.

YaLTeR reports

Identity 26.03 is out with a new time display when hovering the mouse over the video seek bar. I also added Ctrl+2..9 hotkeys to set the zoom level from 200% to 900%.

The window title now shows the current filename, which is helpful with many open tabs. Finally, you can pass the initial --zoom and --display mode on the command line.

Third Party Projects

Haydn Trowell reports

The latest version of Typesetter, the minimalist Typst editor, brings:

  • Built-in, automatic grammar checking (currently English only).
  • Tooltips for Typst errors and warnings in the editor.
  • Keyboard shortcuts for navigating spelling errors.
  • New translations: Czech (p-bo), Dutch (flipflop97), Finnish (Jiri Grönroos), Polish (michalfita), Swedish (haaninjo), and Vietnamese (namthien).

Get it on Flathub: https://flathub.org/apps/net.trowell.typesetter

If you want to help bring Typesetter to your language, translations can be contributed via Weblate: https://translate.codeberg.org/engage/typesetter/

Andrea Fontana announces

Hideout is a simple, GTK-based encryption tool written in D, designed specifically for non-technical users who need to password-protect their files without complexity. It follows GNOME’s design principles to provide a clean and intuitive experience. On Flathub: https://flathub.org/apps/it.andreafontana.hideout

Jeffry Samuel reports

Nocturne has been released; it allows users to manage their local music libraries with optional Navidrome / Subsonic integration. It includes features such as:

  • Playlists
  • Automatic lyrics fetching
  • Play queue management
  • Album and artist sorting
  • Fast searching

For more information, visit the website or repository:

https://jeffser.com/nocturne/ https://github.com/Jeffser/Nocturne

Image

Image

Image

Anton Isaiev says

RustConn (connection manager for SSH, RDP, VNC, SPICE, Telnet, Serial, Kubernetes, MOSH, and Zero Trust protocols)

Versions 0.10.3–0.10.8 landed this week with changes driven entirely by user feedback:

  • Security: RDP passwords no longer exposed in /proc; SSH agent passphrase files are zeroized before deletion; legacy XOR credentials migrated to AES-256-GCM transparently
  • Embedded viewer performance: eliminated per-frame pixel buffer allocations (8–33 MB depending on resolution) for SPICE, VNC, and RDP by switching to persistent Cairo surfaces with in-place updates; RDP frame extraction now uses row-based memcpy + bulk SIMD-friendly R↔B swap
  • HiDPI fixes: resolved blurry/artifact RDP rendering on HiDPI displays caused by double-scaling; fixed cursor artifacts from transparent padding bleed on scaled displays
  • Flatpak sandbox: Zero Trust CLIs (gcloud, Azure, Teleport, OCI) now work correctly by redirecting config paths to writable sandbox directories; fixed CLI detection using extended PATH
  • KeePassXC integration: fixed all vault operations failing when KDBX file is password-protected (password was passed as None in 10 call sites)
  • Passbolt CLI 0.4.2 compatibility: fixed deserialization failures from field naming changes
  • Highlight rules: built-in defaults (ERROR, WARNING, CRITICAL, FATAL) now always apply, not just when per-connection rules exist
  • Code quality: shared CairoBackedBuffer module, deduplicated regex compilations, extracted parse_protocol_type() to eliminate 3 duplicate implementations

Thank you for the growing interest in RustConn. All of this work is driven purely by user feedback - every bug report and feature request shapes the project. I reached what I considered “my ideal” months ago, but it turns out users know better. The result is an open-source connection manager that, in my honest opinion, is now more capable and convenient than its commercial competitors - built by engineers, for engineers.

A special thanks to the community members who package RustConn for AUR and other distribution repositories, and to those who ported it to FreeBSD. Seeing people take the time to bring RustConn to new platforms is the strongest signal that the project fills a real need.

Constructive feedback is always welcome: https://github.com/totoshko88/RustConn/issues Project: https://github.com/totoshko88/RustConn Flatpak: https://flathub.org/en/apps/io.github.totoshko88.RustConn

Image

xjuan reports

Cambalache’s First Major Milestone!

After more than 5 years, 1780 commits and 20k lines of handcrafted, artisanal Python code, I am very pleased to announce Cambalache 1.0!!!

Cambalache is a WYSIWYG (What You See Is What You Get) tool that allows you to create and edit user interfaces for Gtk 4 and 3 applications.

Read more about it at https://blogs.gnome.org/gtk/2026/03/20/cambalaches-first-major-milestone/

Image

Solitaire

Play Patience Games

Will Warner announces

Solitaire is a new app to play patience games! It has been about a year since I started working on this, and I am excited to say that Solitaire is now available on Flathub. Solitaire has a solver that will tell you if the game you are playing has become impossible to win, and provides hints that are guaranteed to lead to a win. The app features six games: Klondike, FreeCell, Tri Peaks, Spider, Pyramid, and Yukon. Solitaire will also keep track of your scores, using move- or time-based scoring. It even lets you change what the cards look like, with seven card themes to choose from.

Image

Shell Extensions

sri 🚀 says

GNOME Shell extension reviews have been delayed because our main reviewer has been cut off from the Internet. The backlog is getting long, and while some community members have stepped up, progress is slow. Much appreciation to those who are stepping up. Please be aware that this review delay means that extension updates for GNOME 50 are also being delayed.

That’s all for this week!

See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!

Lennart Poettering: Mastodon Stories for systemd v260

26 March 2026 at 23:00

On March 17 we released systemd v260 into the wild.

In the weeks leading up to that release (and since then) I have posted a series of serieses of posts to Mastodon about key new features in this release, under the #systemd260 hash tag. In case you aren't using Mastodon, but would like to read up, here's a list of all 21 posts:

I intend to do a similar series of serieses of posts for the next systemd release (v261), hence if you haven't left tech Twitter for Mastodon yet, now is the opportunity.

My series for v261 will begin in a few weeks most likely, under the #systemd261 hash tag.

In case you are interested, here is the corresponding blog story for systemd v259, here for v258, here for v257, and here for v256.

Andy Wingo: free trade and the left, quater: witches

26 March 2026 at 22:03

Good evening. Tonight, we wrap up our series on free trade and the left. To recap where we were, I started by retelling the story that free trade improves overall productivity, but expressed reservations about the way in which it does so: plant closures and threats thereof, regulatory arbitrage, and so on. Then we went back in history, discussing the progressive roots of free trade as a cause of the peace-and-justice crowd in the 19th century. Then we looked at the leading exponents of free trade in the 20th century, the neoliberals, ending in an odd place: instead of free trade being a means for the end of peace and prosperity, neoliberalism turns this on its head, instead holding that war, immiseration, apartheid, dictatorship, ecological disaster, all are justified if they serve the ends of the “free market”, of which free trade is a component.

When I make this list of evils I find myself back in 1999, certain that “we” were right then to shut down the WTO meetings in Seattle. With the distance of time, I start to wonder, not about then, but about now: for all the evil of our days, Trump at least has the virtue of making clear that trade barriers have a positive dot-product with acts of war. As someone who lives in the banlieue of Geneva, I am always amused when I find myself tut-tutting over the defunding of this or that institution of international collaboration.

I started this series by calling out four works. Pax Economica and Globalists have had adequate treatment. The third, Webs of Power, by Starhawk, is one that I have long seen as a bit of an oddball; forgive my normie white boy (derogatory) sensibilities, but I have often wondered how a book by a voice of “earth-based spirituality and Goddess religion” has ended up on my shelf. I am an atheist. How much woo is allowed to me?

choice of axiom

Conventional wisdom is to treat economists seriously, and Wiccans less so. In this instance, I have my doubts. The issue is that a neoliberal is at the same time a true believer in markets, and a skilled jurist. In service of the belief, any rhetorical device is permissible, if it works; if someone comes now and tries to tell me that the EU-Mercosur agreement is a good thing because of its effect on capybara populations, my first reaction is to doubt them, because maybe they are a neoliberal, and if so they would literally say anything.

Whereas if Starhawk has this Earth-mother-spiritual vibe... who am I to say? Yes, I think religion on the whole is a predatory force on vulnerable people, but that doesn’t mean that her interpretation of the web of life as divine is any less legitimate than neoliberal awe of the market. Let’s hear her argument and get on with things.

Starhawk’s book has three parts. The first is an as-I-lived-it chronicle, going from Seattle to Washington to Prague to Quebec City to Genoa, and thence to 9/11 and its aftermath, describing what it was like to be an activist seeking to disrupt the various WTO-adjacent meetings, seeking to build something else. She follows this up with 80 pages of contemporary-to-2002 topics such as hierarchy within the movement, nonviolence vs black blocs, ecological principles, cultural appropriation, and so on.

These first two sections inform the final 20 pages, in which Starhawk attempts to synthesize what it is that “we” wanted, as a kind of memento and hopefully a generator of actions to come. She comes up with a list of nine principles, which I’ll just quote here because I don’t have an editor (the joke’s on all of us!):

  1. We must protect the viability of the life-sustaining systems of the planet, which are everywhere under attack.
  2. A realm of the sacred exists, of things too precious to be commodified, and must be respected.
  3. Communities must control their own resources and destinies.
  4. The rights and heritages of indigenous communities must be acknowledged and respected.
  5. Enterprises must be rooted in communities and be responsible to communities and to future generations.
  6. Opportunity for human beings to meet their needs and fulfill their dreams and aspirations should be open to all.
  7. Labor deserves just compensation, security, and dignity.
  8. The human community has a collective responsibility to assure the basic means of life, growth, and development for all its members.
  9. Democracy means that all people have a voice in the decisions that affect them, including economic decisions.

Now friends, this is Starhawk’s list, not mine, and a quarter-century-old list at that. I’m not here to judge it, though I think it’s not bad; what I find interesting is its multifaceted nature, and that, contrasted with the cybernetic awe of late neoliberalism, it is actually the Witch who has the more down-to-earth concerns: a planet to live on, a Rawlsian concern with justice, and control of the economic by the people.

which leaves us

Former European Central Bank president Mario Draghi published a report some 18 months ago diagnosing a European malaise and proposing a number of specific remedies. I find that we on my part of the left are oft ill-equipped to engage with the problem he identifies, not to mention the solutions. The whole question of productivity is very technical, to the extent that we might consider it owned by our enemies: our instinct is to deflect, “productivity for what”, that sort of thing. Worse, if we do concede the problem, we haven’t spent as much time sparring in the gyms of comparative advantage; we risk a first-round knockout. We come with Starhawk’s list in hand, and they smile at us condescendingly: “very nice but we need to focus on the economy, you know,” and we lose again.

But Starhawk was not wrong. We do need a set of principles that we can use to analyze the present and plot a course to the future. I do not pretend to offer such a set today, but after having looked into the free trade question over the last couple months, I have reached two simple conclusions, which I will share with you now.

The first is that, from an intellectual point of view, we should just ignore the neoliberals; they are not serious people. That’s not a value judgment on the price mechanism, but rather one on those that value nothing else: that whereas classical liberalism was a means to an end, neoliberalism admits no other end than commerce, and admits any means that furthers its end. And so, we can just ignore them. If neoliberals were the only ones thinking about productivity, well, we might need new branches of economics. Fortunately that’s not the case. Productivity is but one dimension of the good, and it is our collective political task to choose a point from the space of the possible according to our collective desires.

The second conclusion is that we should take back free trade from our enemies on the right. We are one people, but divided into states by historical accident. Although there is a productivity argument for trade, we don’t have to limit ourselves to it: the bond that one might feel between Colorado and Wyoming should be the same between Italy and Tunisia, between Canada and Mexico, indeed between France and Brasil. One people, differentiated but together, sharing ideas and, yes, things. Internationalism, not nationalism.

There is no reason to treat free trade as the sole criterion against which to judge a policy. States are heterogeneous: what works for the US might not be right for Haiti; states differ in the degree that they internalize environmental impacts; and they differ as regards public services. We can take these into account via policy, but our goal should be progress for all.

So while Thomas Piketty is right to decry a kind of absolutism among European decisionmakers regarding free trade, I can’t help but notice a chauvinist division being set up in the way we leftists are inclined to treat these questions: we in Europe are one bloc, despite e.g. very different carbon impacts of producing a dishwasher in Poland versus Spain, whereas a dishwasher from China belongs to a different, worse, more sinful category.

and mercosur?

To paraphrase Marley’s ghost, mankind is my business. I want an ever closer union with my brothers and sisters in Uruguay and Zambia and Cambodia and Palestine. Trade is a part of it. All things being equal, we should want to trade with Chile. We on the left should not oppose free trade with Mercosur out of a principle that goods produced far away are necessarily a bad thing.

All this is not to say that we should just doux it (although, gosh, Karthik is such a worthy foe); we can still participate in collective carrot-and-stick exercises such as carbon taxes and the like, and this appreciation of free trade would not have trumped the campaign to boycott apartheid South Africa, nor would it for apartheid Israel. But our default position should be to support free trade with Mercosur, in such a way that improves the lot of all humanity.

I don’t know what to think about the concrete elements of the EU-Mercosur deal. The neoliberal play is to design legal structures that encase commerce, and a free trade deal risks subordinating the political to the economic. But unlike some of my comrades on the left, I am starting to think that we should want free trade with Bolivia, and that’s already quite a change from where I was 25 years ago.

fin

Emily Saliers famously went seeking clarity; I fear I have brought little. We are still firmly in the world of the political, and like Starhawk, still need a framework of pre-thunk thoughts to orient us when some Draghi comes with a new four-score-page manifesto. Good luck and godspeed.

But it is easier to find a solution if we cull the dimensionality of the problem. The neoliberals had their day, but perhaps these staves may be of use to you in exorcising their discursive domination; it is time we cut them off. Internationalist trade was ours anyway, and it should resume its place as a means to our ends.

And what ends? As with prices, we discover them on the margin, in each political choice we make. Some are easy; some less so. And while a list like Starhawk’s is fine enough, I keep coming back to a simpler question: which side are you on? The sheriff or the union? ICE or the immigrant? Which side are you on? The question cuts fine. For the WTO in Seattle, to me it said to shut it all down. For EU-Mercosur, to me it says, “let’s talk.”

L’alternative

By: Gee
26 March 2026 at 07:29

These days, Gee has decided to go back to basics… quite simply because what may seem obvious to us free-software folks is not necessarily obvious to everyone.

L’alternative

💡 Free software has the unfortunate tendency to be introduced through a proprietary equivalent: the famous “X is a free alternative to Y”.

A random guy asks the Geekette, who is sitting at her computer: “Do you know a free alternative to Photoshop?” The Geekette: “GIMP or Krita.” “An alternative to Whatsapp?” “Signal.” “To Youtube?” “Peertube.” “To ChatGPT?” “Your brain.”

Fortunately, that is not always the case.

Sometimes, a free-software program is already the leader in its field.

Oddly enough, you never see this happen:

A man asks a woman: “Say, would you know of a proprietary, privacy-disrespecting alternative to OBS Studio*?” The woman looks dubious.

* OBS Studio is the reference software for live video streaming, and it is free software. The image above was made to illustrate an editorial of the Lama déchaîné that I wrote on this subject in December 2025.

⚠️ But even beyond that, we should not misunderstand what we mean by “alternative”.

The random guy, unhappy, looks at his phone: “Come on, PeerTube has no monetization, it sucks.” The Geekette: “I said it was an *alternative*, not an *equivalent*. YouTube is a TV channel, with editorial curation and advertisers; PeerTube just does decentralized video. But it does it well.”

▶️ An example we often use is the bicycle as an alternative to the car. Is the bicycle a means of transport equivalent to the car? Obviously not.

Gee, on a bike, says: “Sure, we’re not going to ride from Paris to Marseille by bike… But with a deep overhaul of land-use planning, complemented by public transport, the bicycle could replace the car as the default means of personal transport.” The smiley: “Oh right, but if you take context into account, that’s cheating.”

▶️ Because yes, the car and the bicycle are indeed two means of personal transport, two alternatives to the problem of getting around faster than on foot. Depending on which one we favor, we do not build the same cities and the same territories.

A guy shows a diagram and says: “Yeah OK, but still, the car is faster, more comfortable, it goes farther…” The Geekette shows another one and says: “And the bike is more ecological, less noisy, it kills fewer people and it’s sustainable in the long run… Now that we’ve seen the pros and cons of each, shall we move on?”

⚠️ Indeed, judging an alternative by the standards of the dominant alternative condemns it in advance to looking mediocre.

We see a brand-new, shiny robot. A man resembling Steve Jobs says: “Our iRobot has the slickest interface on the market, it interfaces perfectly with your iPhone/iPad/iFart, and it also makes coffee and does the dishes.”

Whereas in truth, if we start using other criteria, free software often beats proprietary software hands down…

The Geekette says: “This is Dédé the robot. He’s a bit ugly but he’s cheap, accessible, he doesn’t spy on you, he does one thing but does it well. And above all, he leaves you alone.” Smiley: “Yeah, basically, he’s ugly.” An arrow indicates that the smiley is your average commenter.

In short, “alternative” does not mean “equivalence”…

⚠️ But “alternative” does not mean “competition” either! Because competing with something already means positioning yourself on the same segment, with the same ambitions.

And spoiler: often, that is not the goal.

A delighted guy shows a cat, saying: “Let me introduce the newest member of the CHATONS*! It offers free and decentralized web services! You can even pet it!” A guy in a suit looks down his nose and says: “Yeah, well, that’s not how you’re going to compete with Google and co. Bunch of hippies.”

* “Collectif des Hébergeurs Alternatifs, Transparents, Ouverts, Neutres et Solidaires”; see also chatons.org.

▶️ And indeed, while CHATONS are often presented as the “AMAPs” of the digital world, it must be understood that an AMAP is an alternative to a supermarket, but it does not compete with it.

A guy from Carrefour says: “You’re cute with your farmers’ associations, but here at Carrefour we do mass distribution on a scale you’ll never reach.” A farmer says: “OK. Well, we do local commerce.” The other: “Yeah, well, that’s not how you’ll become a multinational worth billions.” The farmer: “That’s not the goal, actually.”

* “Associations pour le Maintien d’une Agriculture Paysanne”, structures that let you regularly buy food directly from producers, for local, quality food that bypasses mass retail.

💡 When we say that PeerTube is an alternative to YouTube, and people reply that it will never be a credible competitor…

Sepia, PeerTube’s mascot, says: “Good! I have no desire whatsoever for a second YouTube; I just want to do something else.” The YouTube logo: “Say whaaat?! Not everyone wants to be like meee?!”

▶️ Because in fact, the world would not be a better place if YouTube were free software… it would be better if our brains were not constantly being devoured by the attention economy*.

The YouTube logo: “When you finish a video on YouTube, it goes straight to the next one. PeerTube can’t even do that, the loser!” Sepia replies: “Well, yes, we can do that. But we don’t want to.” YouTube’s goal: to make you consume content for as long as possible, to make your brain available to advertisers. PeerTube’s goal: to give you a way to share and watch videos in a free, decentralized way, without dark patterns**, and without depending on tech multinationals that just want to monetize your attention.

* See my latest comic on this subject.

** A dark pattern is a user interface deliberately designed to deceive or manipulate (per Wikipedia).

So you are going to tell me:

“yeah, but then, isn’t all of this a bit futile?”

⚠️ If we consider that surveillance capitalism is a problem, that the hegemony of the GAFAM is a problem, and that all we offer are drops of clean water in an ocean of filth, what is the point?

Gee, with a shovel, in front of a grave marked “TINA, missed by no one.”, says: “The point is to bury this filth once and for all.” From the grave, a voice says: “But I’m telling you, THERE IS NO ALTERNATIVE!”

▶️ Nature abhors a vacuum, and when a tech giant falters, something else takes the place left vacant.

So we might as well prepare a nice “something else”.

The random guy: “Oh no! My social network has become an echo chamber for a megalomaniac technofascist!” A mastodon replies: “Well, come over to our place, it’s nice here. Sure, it’s a bit more complicated than at the fascist’s, but it’s also more peaceful.” Mastodon, a free and decentralized alternative to Twitter/X.

💡 In fact, it is often in the darkest moments that realities which until then seemed utopian are prepared.

A guy shows a document called “Les Jours heureux”, saying: “So, we’ve got Nazis marching in our streets, collaborators in power, and war everywhere… but wouldn’t this be the moment to invent the best social security system in the world*?” Gee, surprised: “Holy cow, I hadn’t realized at first that this was a 1940s flashback…”

* See “Les Jours heureux”, adopted as the program of the Conseil national de la Résistance in 1944, which led, among other things, to the creation of the French Sécurité sociale after the war.

▶️ And if our ancestors had enough hope under the Occupation to prepare better tomorrows, then neither GAFAM hegemony nor the two-bit little Nazis being honored at the Assemblée should stop us from building our own ideals for the future, too.

The random guy walks away: “Tsss, this guy is this close to claiming that another world is possible… utopian! Makes me sick, honestly.” Gee looks jaded beside him. Note: comic under CC BY-SA license (grisebouille.net), drawn on 16 March 2026 by Gee.

Credit: Gee (Creative Commons By-Sa)

Firefox Developer Edition and Beta: Try out Mozilla’s .rpm package!

In January, we introduced our Nightly package for RPM-based Linux distributions. Today, we are thrilled to announce it is now available for Firefox Beta!

Firefox Beta is great for testing your sites in a version of Firefox that will reach regular users in the coming weeks. If you find any issues, please file them on Bugzilla.

Switching to Mozilla’s RPM repository allows Firefox Beta to be installed and updated like any other application, using your favorite package manager. It also provides a number of improvements:

  • Better performance thanks to our advanced compiler-based optimizations,
  • Updates as fast as possible, because the .rpm management is integrated into Firefox’s release process,
  • Hardened binaries with all security flags enabled during compilation,
  • No need to create your own .desktop file.

If you have Mozilla’s RPM repository already set up, you can simply install Firefox Beta with your package manager. Otherwise, follow the setup steps below.


If you are on Fedora (41+), or any other distribution using dnf5 as the package manager

 

sudo dnf config-manager addrepo --id=mozilla --set=baseurl=https://packages.mozilla.org/rpm/firefox --set=gpgkey=https://packages.mozilla.org/rpm/firefox/signing-key.gpg --set=gpgcheck=1 --set=repo_gpgcheck=0
sudo dnf makecache --refresh
sudo dnf install firefox-beta

Note: repo_gpgcheck=0 deactivates GPG signature checking of the repository metadata. However, this is safeguarded instead by HTTPS and package signatures (gpgcheck=1).

If you are on openSUSE or any other distribution using zypper as the package manager

sudo rpm --import https://packages.mozilla.org/rpm/firefox/signing-key.gpg
sudo zypper ar --gpgcheck-allow-unsigned-repo https://packages.mozilla.org/rpm/firefox mozilla
sudo zypper refresh
sudo zypper install firefox-beta

For other RPM-based distributions (RHEL, CentOS, Rocky Linux, older Fedora versions)

sudo tee /etc/yum.repos.d/mozilla.repo >  /dev/null << EOF
[mozilla]
name=Mozilla Packages
baseurl=https://packages.mozilla.org/rpm/firefox
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=https://packages.mozilla.org/rpm/firefox/signing-key.gpg
EOF
# For dnf users
sudo dnf makecache --refresh
sudo dnf install firefox-beta
# For zypper users
sudo zypper refresh
sudo zypper install firefox-beta

The firefox-beta package will not conflict with your distribution’s Firefox package if you have it installed; you can have both at the same time!

Adding language packs

If your distribution’s language is set to a supported language, language packs for it should be installed automatically. You can also install them manually with the following command (replace fr with the language code of your choice):

sudo dnf install firefox-beta-l10n-fr

You can list the available languages with the following command:

dnf search firefox-beta-l10n

Don’t hesitate to report any problem you encounter to help us make your experience better.

The post Firefox Developer Edition and Beta: Try out Mozilla’s .rpm package! appeared first on Mozilla Hacks - the Web developer blog.

GNOME Foundation News: Introducing the GNOME Fellowship program

24 March 2026 at 12:26

Image

Sustaining GNOME by directly funding contributors

The GNOME Foundation is excited to announce the GNOME Fellowship program, a new initiative to fund community members working on the long-term sustainability of the GNOME project. We’re now accepting applications for our inaugural fellowship cycle, beginning around May 2026.

GNOME has always thrived because of its contributors: people who invest their time and expertise to build and maintain the desktop, applications, and platform that millions rely on. But open source contribution often depends on volunteers finding time alongside other commitments, or on companies choosing to fund development amongst competing priorities. Many important areas of the project – the less glamorous but critical infrastructure work – can go underinvested.

The fellowship program changes that. Thanks to the generous support of Friends of GNOME donors, we can now directly fund contributors to focus on what matters most for GNOME’s future. Programs such as this rely on ongoing support from our donors, so if you would like to see this and similar programs continue in future, please consider setting up a recurring donation.

What’s a Fellowship?

A fellowship is funding for an individual to spend dedicated time over a 12-month period working in an area where they have expertise. Unlike traditional contracts with rigid scopes and deliverables, fellowships are built on trust. We’re backing people and the type of work they do, giving them the flexibility to tackle problems as they find them.

This approach reduces bureaucratic overhead for both contributors and the Foundation. It lets talented people do what they do best: identify important problems and solve them.

Focus: Sustainability

For this first cycle, we’re seeking proposals focused on sustainability work that makes GNOME more maintainable, efficient, and productive for developers. This includes areas like build systems, CI/CD infrastructure, testing frameworks, developer tooling, documentation, accessibility, and reducing technical debt.

We’re not funding new features this round. Instead, we want to invest in the foundations that make future development and contributions easier and faster. The goal is for each fellowship to leave the project in better shape than it found it.

Apply Now

We have funding for at least one 12-month fellowship, paid between $70,000 and $100,000 USD per year based on experience and location. Applicants can propose full-time or half-time work; half-time proposals may allow us to support multiple fellows.

Applications are open to anyone with a track record in GNOME or relevant experience, with some restrictions due to US sanctions compliance. A GNOME Foundation Board committee will review applications and select fellows for this inaugural cycle.

Full details, application requirements, and FAQ are available at fellowship.gnome.org. Applications close on 20th April 2026.

Thank You to Friends of GNOME

This program is possible because of the individuals and organizations who support GNOME through Friends of GNOME donations. When we ask for donations, funding contributor work is exactly the kind of initiative we have in mind. If you’d like to sustain this program beyond its first year, consider becoming a Friend of GNOME. A recurring donation, no matter how small, gives us the predictability to expand this program and others like it.

Looking Ahead

This is a pilot program. We’re optimistic, and if it succeeds, we hope to sustain and grow the fellowship program in future years, funding more contributors across more areas of GNOME. We believe this model can become a sustainable way to invest in the project’s long-term health.

We can’t wait to see your proposals!

Framamèmes: a v2 funded by Gee’s supporters!

By: Gee
24 March 2026 at 07:27

Today, Gee presents the new version of Framamèmes, funded thanks to the donations he received through the crowdfunding of his blog Grise Bouille!
If you don’t know Framamèmes yet, discover it through the blog post announcing its launch!

It was the first meta-milestone of the blog’s crowdfunding, and it has just been reached! I am very proud to present the brand-new version of the free meme generator Framamèmes, with lots of nice things on the program!

The v2, all shiny and new!

Automatic line breaks

Let’s start with the flagship feature of this v2: dynamic text reflow.

Previously, to lay out your text in the meme, you had to add line breaks “by hand”; resizing was just a dumb scaling operation. Now the text is automatically laid out inside its bounding box, and resizing happens dynamically: if you make the box narrower, the text is split across more lines, and vice versa.

In practice this is much more pleasant, and it even means you no longer have to resize the text at all when starting from the existing templates: on the highway-signs meme, for instance, the default bounding boxes are set to fill the signs as much as possible without overflowing.
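
Framamèmes runs in the browser, but the greedy line-wrapping idea behind this feature is easy to illustrate. Here is a minimal sketch (not the actual Framamèmes code; the `measure` callback stands in for real rendered-text metrics):

```python
def wrap_to_box(text, box_width, measure):
    """Greedy line-wrap: pack words into lines no wider than box_width.

    `measure` maps a string to its rendered width (e.g. in pixels).
    """
    lines, current = [], ""
    for word in text.split():
        candidate = word if not current else current + " " + word
        if measure(candidate) <= box_width or not current:
            current = candidate   # the word still fits (or must stand alone)
        else:
            lines.append(current)  # box overflowed: start a new line
            current = word
    if current:
        lines.append(current)
    return lines

# Toy width function: 10 px per character.
print(wrap_to_box("ONE DOES NOT SIMPLY WRAP TEXT", 120, lambda s: 10 * len(s)))
```

Making the box narrower (a smaller `box_width`) naturally yields more, shorter lines, which matches the resizing behaviour described above.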

Search bar

When v1 came out in April 2025, Framamèmes had only 7 meme templates. One year later, about twenty images are available… which is pretty cool.

The problem is that it becomes tedious to scroll through all the memes to find the one you are looking for… so to make things a bit simpler and faster, I added a search field: if you want to find the meme from The Simpsons, just type “sim” and, very quickly, only the meme you are after will remain on screen!

The search terms are the memes’ names in both their English and French versions, supplemented with a few obvious keywords (such as “Star Wars” for the Anakin and Padmé meme).

Text settings

Until now, all text used the same font, Anton, a free font chosen for its resemblance to Impact, the traditional (but non-free) meme font. Style-wise, all text was centered, in capitals, and white with a black outline of fixed width.

All of this is now configurable:

  • you can choose between 5 fonts (Anton, a sans-serif font, a serif one, a monospace one, and a “comic”-style one);
  • the style (bold, italic, normal, capitals) is selectable;
  • the text color, the outline color, and the outline width are adjustable;
  • the alignment is adjustable as well.

Each of these settings applies to each text separately, but the style chosen for one text can be applied to all the others with a single click.

Note that the existing meme templates have been updated where they used a style other than the only one previously available.

Images in place of text

Remember the meme from The Office where Pam, the receptionist, has people spot the difference between two pictures and points out that “they’re the same picture”? Let’s admit that this meme did not look great with text in each of the two panels that were supposed to contain images…

Well, you can now replace any text with an image, which will be very handy for this particular meme, but not only! And since we do not forget accessibility, the corresponding text area will now hold the image’s description (which is itself used in the automatically generated alt-text).

And no, we do not offer an AI to generate the description automatically from the image: accessibility is above all a matter of human communication, and we are not going to leave that to a machine.

RSS feed

As you will have understood, memes are regularly added to this generator. If you want to be notified when a new meme is available, I created an RSS feed that does exactly that. Because RSS feeds are a good thing, and providing one costs next to nothing.

By the way, thanks to anorax, who suggested the idea to me on Mastodon!
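
For readers who have never produced one, a minimal RSS 2.0 feed of this kind can be generated with nothing but the standard library. A sketch (the channel metadata and URLs are illustrative, not the real Framamèmes feed):

```python
import xml.etree.ElementTree as ET

def meme_feed(items):
    """Build a minimal RSS 2.0 document announcing new memes.

    `items` is a list of (title, link) tuples; channel metadata below
    is a placeholder, not the actual Framamèmes feed.
    """
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = "New memes"
    ET.SubElement(channel, "link").text = "https://example.org/memes"
    ET.SubElement(channel, "description").text = "One item per new template"
    for title, link in items:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = title
        ET.SubElement(item, "link").text = link
    return ET.tostring(rss, encoding="unicode")

feed_xml = meme_feed([("Distracted Boyfriend", "https://example.org/memes/42")])
print(feed_xml)
```

Any feed reader pointed at a document like this will show one entry per new template.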

A few more small improvements

You will also find a button to unfold the list of memes (if you prefer that to the search field for finding your way through the list), an automatic focus on text areas when a drag ends (or when a text is added), and so on.

In general, I always try to add the small interface improvements people suggest to me, especially when they are simple to implement and make the experience a bit smoother.

Thanks to you!

This v2 of Framamèmes was unlocked by a “meta-milestone” of the crowdfunding of the Grise Bouille blog (a milestone computed over the full season, not just a single month). A huge thank-you to everyone who took part and made this v2 possible!

I remind you that it is thanks to this permanent crowdfunding that I can produce free-licensed art today, so don’t forget to contribute! Reminder: at the next meta-milestone, at €9,000, I start writing the sequel to my leftist fantasy novel Sortilèges & Syndicats.

Framamèmes v3?

I don’t really have a plan for the future of Framamèmes. I don’t believe software must necessarily receive continuous updates, and I don’t want to turn Framamèmes into a bloated monster. The v2 interface seems to me a good compromise between simplicity and feature completeness.

The only “feature” that might make sense is multi-language support (notably an English version): I have seen some memes circulating on English-speaking sites, proof that it is already being used there. At the same time, English speakers already have plenty of generators in their language, and they are generally not among my patrons, who remain largely French-speaking. In short, I am keeping the idea in mind, but it is not a priority.

I obviously intend to keep adding new memes; I just don’t know until when (the reservoir is virtually infinite…). Integrating images drawn by other people is not on the agenda, for the sake of graphical consistency. Note that you already have the option of importing your own images if needed. Nothing prevents you from forking the project either, to run your own instance with your own images from the Framamèmes code: like everything I make, it’s free!

Christian Schaller: Using AI to create some hardware tools and bring back the past

23 March 2026 at 16:07

As I mentioned in a couple of blog posts now, I have been working a lot with AI recently as part of my day-to-day job at Red Hat, but I have also been spending a lot of evenings and weekends on this (sorry kids, pappa has switched to 1950s mode for now). One of the things I spent time on is trying to figure out what the limitations of AI models are and what kind of use they can have for open source developers.

One thing to mention before I start talking about some of my concrete efforts: I have more and more come to the conclusion that AI is an incredible tool to hypercharge someone in their work, but I feel it tends to fall short for fully autonomous systems. In my experiments AI can do things many times faster than you ordinarily could; I am speaking specifically in the context of coding here, which is what is most relevant for those of us in the open source community.

So one annoyance I have had for years as a Linux user is that I get new hardware with features that are not easily available to me on Linux. So I have tried using AI to create such applications for some of my hardware, which includes an Elgato light and a Dell UltraSharp webcam.

I found with AI, and this is based on using Google Gemini, Claude Sonnet and Opus, and OpenAI Codex, that they all required me to direct and steer the AI continuously. If I let the AI work on its own, more often than not it would end up going in circles, diverging from the route it was supposed to take, or taking shortcuts that made the output useless. On the other hand, if I kept on top of the AI and intervened to point it in the right direction, it could put things together for me in very short time spans.

My projects are also mostly what I would describe as leaf nodes, the kind of projects that are already one-person projects in the community for the most part. There are extra considerations when contributing to bigger efforts, and a point I have seen made by others in the community too is that you need to own the patches you submit, meaning that even if an AI helped you write the patch, you still need to ensure that what you submit is in a state where it can be helpful and is mergeable. I know some people feel this means you need to be capable of reviewing the proposed patch and ensuring it is clean and nice before submitting it, and I agree that if you expect your patch to get merged, that has to be the case. On the other hand, I don’t think AI patches are useless even if you cannot validate them beyond ‘does it fix my issue’.

My friend and PipeWire maintainer Wim Taymans and I were talking a few years ago about what I described at the time as the problem of ‘bad quality patches’, and this was long before AI-generated code was a thing. Wim’s response, which I have often thought about since, was: “a bad patch is often a great bug report”. And that holds true for AI-generated patches too. If someone makes a patch using AI, a patch they don’t have the ability to review themselves, but they test it and it fixes their problem, it might function as a clearer bug report than just a written description from the user submitting the report. Of course they should be clear in their bug report that they don’t have the skills to review the patch themselves, but that they hope it can be useful as a tool for pinpointing what isn’t working in the current codebase.

Anyway, let me talk about the projects I made. They are all found on my personal website, Linuxrising.org, a website that I also used AI to update after not having touched it in years.

Elgato Light GNOME Shell extension


The first project I worked on is a GNOME Shell extension for controlling my Elgato Key Wifi lamp. The Elgato lamp is basically meant for podcasters and people doing a lot of video calls, letting them easily adjust the lighting in their room for a good recording. The lamp announces itself over mDNS, and can thus be controlled via Avahi. For Windows and Mac the vendor provides software to control the lamp, but unfortunately not for Linux.
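
For the curious, this class of lamp can also be scripted without any extension at all. This sketch uses the widely documented but unofficial Elgato Key Light HTTP API (port 9123, path /elgato/lights); the IP address is a placeholder for whatever mDNS discovery gives you:

```python
import json
import urllib.request

# Assumption: replace with your lamp's address as discovered over mDNS.
LAMP = "http://192.168.1.42:9123/elgato/lights"

def light_payload(on, brightness, temperature):
    """Build the JSON body the lamp expects.

    Per the unofficial API: brightness is 0-100, temperature is a
    mired-like value roughly in the 143-344 range.
    """
    return {
        "numberOfLights": 1,
        "lights": [{"on": int(on), "brightness": brightness,
                    "temperature": temperature}],
    }

def set_light(on=True, brightness=50, temperature=213):
    body = json.dumps(light_payload(on, brightness, temperature)).encode()
    req = urllib.request.Request(LAMP, data=body, method="PUT",
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=2) as resp:
        return json.load(resp)  # the lamp echoes back its new state
```

A GET on the same URL returns the current state, which is presumably what the Shell extension does under the hood as well.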

There had been GNOME Shell extensions for controlling the lamp in the past, but they had not been kept up to date and their feature set was quite limited. Anyway, I grabbed one of these old extensions and told Claude to update it for the latest version of GNOME. It took a few iterations of testing, but we eventually got there, and I had a simple GNOME Shell extension that could turn the lamp off and on and adjust hue and brightness. This was a quite straightforward process because I had code that had been working at some point; it just needed some adjustments to work with the current generation of GNOME Shell.

Once I had the basic version done, I decided to take it a bit further and try to recreate the configuration dialog that the Windows application offers for the full feature set, which took quite a bit of back and forth with Claude. I found that if I ask Claude to re-implement from a screenshot, it recreates the layout of the user interface first, meaning that it makes sure that if the screenshot has 10 buttons, you get a GUI with 10 buttons. You then have to iterate both on the UI design, for example telling Claude that I wanted a dark UI style to match the GNOME Shell, and on each bit of functionality in the UI. Most of the buttons in the UI didn’t really do anything at first, but when you go back and ask Claude to add specific functionality per button, it is usually able to do so.

Elgato Light Settings Application

So this was probably a fairly easy thing for the AI, because all the functionality of the lamp could be queried over Avahi; there were no ‘secret’ USB registers to be set or things like that.
Since the application was meant to be part of the GNOME Shell extension, I didn’t want it to have any dependency requirements that the Shell extension itself didn’t have, so I asked Claude to write the application in JavaScript, and I have to say that so far I haven’t seen any major differences in the AIs’ ability to generate different languages. The application now reproduces most of the functionality of the Windows application. Looking back, I think it took me a couple of days in total to put this tool together.

Dell Ultrasharp Webcam 4K

Dell UltraSharp 4K settings application for Linux

The second application on the list is a controller application for my Dell UltraSharp Webcam 4K UHD (WB7022). This is a high-end webcam that I have been using for a while, comparable to something like the Logitech BRIO 4K webcam. It has mostly worked with the generic UVC driver since I got it, and I have been using it for my Google Meet calls and similar, but since there was no native Linux control application I could not easily access a lot of the camera’s features. To address this I downloaded the Windows application installer, installed it under Windows, and then took a bunch of screenshots showcasing all the features of the application. I then fed the screenshots into Claude and told it I wanted a GTK+ version of this application for Linux. I originally wanted to have Claude write it in Rust, but after hitting some issues in the PipeWire Rust bindings I decided to just use C instead.

It took me probably 3-4 days of intermittent work to get this application working, and Claude turned out to be really good at digging into Windows binaries and finding things like USB property values. Claude was also able to analyze the screenshots and figure out the features the application needed to have. Writing the application involved a lot of trial and error, but one way I was able to automate it was by building a screenshot option into the application, allowing it to programmatically take screenshots of itself. That allowed me to tell Claude to try fixing something and then check the screenshot to see if it worked, without me having to interact with the prompt. Also, to get the user interface looking nicer, once I had all the functionality in I asked Claude to tweak the user interface to follow the GNOME Human Interface Guidelines, which greatly improved the quality of the UI.

At this point my application should have almost all the features of the Windows application. Since it uses PipeWire underneath, it is also tightly integrated with the PipeWire media graph, allowing you to see it connect and work with your applications in PipeWire patchbay tools like Helvum. The remaining features are software features of Dell’s application, like background removal and so on, but I think that if I decided to implement those, it should be as a standalone PipeWire tool that can be used with any camera, not tied to this specific one.
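
Even without a dedicated application, many such camera features are plain UVC controls that the kernel’s generic driver already exposes. A hedged sketch that shells out to v4l2-ctl (from v4l-utils); the device node and control names below are assumptions, since they vary per camera:

```python
import subprocess

DEVICE = "/dev/video0"  # assumption: adjust to your camera's node

def v4l2_set(ctrl, value, device=DEVICE):
    """Build the v4l2-ctl argv for setting one UVC control."""
    return ["v4l2-ctl", "-d", device, f"--set-ctrl={ctrl}={value}"]

def apply(ctrl, value):
    # v4l2-ctl ships with v4l-utils on most distributions.
    subprocess.run(v4l2_set(ctrl, value), check=True)

# Typical UVC controls (enumerate yours first with
# `v4l2-ctl -d /dev/video0 --list-ctrls`):
# apply("brightness", 128)
# apply("zoom_absolute", 150)
```

Listing the controls first is the safe way to find out what your specific camera actually supports through UVC.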

Red Hat Planet

Red Hat Vulkan Globe

The application shows Red Hat’s offices around the world and includes links to the latest Red Hat news.


The next application on my list is called Red Hat Planet. It is mostly a fun toy, but I made it partly to revisit the Xtraceroute modernisation I blogged about earlier. As I mentioned in that blog post, Xtraceroute, while cute, isn’t really very useful IMHO, since with the way the modern internet works your packets rarely jump around the world. Anyway, as people pointed out after I posted about the port, it wasn’t an actual Vulkan application; it was a GTK+ application using the GTK+ Vulkan backend. The globe animation itself was all software rendered.

I decided that if I was going to revisit the Vulkan problem, I wanted to use a different application idea than traceroute. The idea I had was once again a 3D-rendered globe, but this one reading the coordinates of Red Hat’s global offices from a file and rendering them on the globe, and alongside that providing clickable links to recent Red Hat news items. So once again, maybe not the world’s most useful application, but I thought it was a cute idea, and hopefully it would allow me to create it using actual Vulkan rendering this time.
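
This is not Christian’s rendering code, but the math for placing markers such as office dots on a globe is compact and standard. A sketch of the latitude/longitude to Cartesian conversion, assuming the Y-up convention common in graphics:

```python
import math

def latlon_to_xyz(lat_deg, lon_deg, radius=1.0):
    """Convert geographic coordinates to a point on a sphere.

    Convention (an assumption, matching many globe renderers):
    +Y points to the north pole, and lat/lon (0, 0) sits on +X.
    """
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    x = radius * math.cos(lat) * math.cos(lon)
    y = radius * math.sin(lat)
    z = -radius * math.cos(lat) * math.sin(lon)
    return (x, y, z)

# A dot for an office near Paris, placed on a unit sphere:
print(latlon_to_xyz(48.85, 2.35))
```

Each office in the coordinates file maps to one such point, which the renderer then draws slightly above the globe surface.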

Creating this turned out to be quite the challenge (although it seems to have gotten easier since I started this effort), with Claude Opus 4.6 being more capable at writing Vulkan code than Claude Sonnet, Google Gemini or OpenAI Codex were when I started trying to create this application.
When I started this project I had to keep extremely close tabs on the AI and what it was doing in order to force it to keep working on this as a Vulkan application, as it kept wanting to simplify with software rendering or OpenGL and would sometimes start down that route without even asking me. That hasn't happened more recently, so maybe that was a problem of the AI of five months ago.

I also discovered as part of this that rendering Vulkan inside a GTK4 application is far from trivial, and it would ideally need the GTK4 developers to create such a widget to get rendering timings and similar correct. It is one of the few times I have had Claude outright say that writing a widget like that was beyond its capabilities (I haven't tried again, so I don't know if I would get the same response today). So I moved the application to SDL3 first, which worked in that I got a spinning globe with red dots on it, but that came with its own issues, in the sense that SDL is not a UI toolkit as such. So while I got the globe rendered and working, the AI struggled badly with the news area when using SDL.

So I ended up trying to port the application to Qt, which again turned out to be non-trivial in terms of how much trial and error it took to get right. In my mind I had a working globe using Vulkan, so how hard could it be to move it from SDL3 to Qt? But there were a million rendering issues. In fact I ended up using the Qt Vulkan rendering example as a starting point and then 'porting' the globe over bit by bit, testing at each step, to finally get a working version. The current version is a Vulkan+Qt app and it basically works, although the planet does not seem to spin correctly on AMD systems at the moment, while it works well on Intel and NVIDIA systems.

WMDock

WmDock fullscreen with config application



This project came out of a chat with Matthias Clasen over lunch, where I mused about whether Claude would be able to bring the old Window Maker dockapps to GNOME and Wayland. Turns out the answer is yes, although the method of doing so changed as I worked on it.

My initial thought was for Claude to create a shim that the old dockapps could be compiled against without any changes. That worked, but then I had a ton of dockapps showing up in things like the alt+tab menu. It also required me to restart my GNOME Shell session all the time as I was testing the extension housing the dockapps. In the end I decided that since a lot of the old dockapps don't work with modern Linux versions anyway, and thus would need to be actively ported, I should accept shipping the dockapps with the tool and port them to work with modern Linux technologies. This worked well and is what I currently have in the repo. I think the wildest port was porting the old webcam dockapp from V4L1 to PipeWire, although updating the sound controller from ESD to PulseAudio was also a generational jump.

XMMS resuscitated

XMMS brought back to life



So the last effort I did was reviving the old XMMS media player. I had been asking Claude to do this for months and it kept failing, but with Opus 4.6 it plowed through and had something working in a couple of hours, with no input from me beyond kicking it off. This was a big lift, moving it from GTK2 and esound to GTK4, GStreamer and PipeWire. One thing I realized is that a challenge with bringing an old app back is that, since keeping the themeable UI is a big part of this specific application, adding new features is a little kludgy. Anyway, I did set it up to be able to use network speakers through PipeWire, and you can also import your Spotify playlists and play those, although you need to run the Spotify application in the background to be able to play sound on your local device.

Monkey Bubble
Monkey Bubble game
Monkey Bubble was a game created in the heyday of GNOME 2, and while I always thought it was a well made little game, it had never been updated to newer technologies. So I asked Claude to port it to GTK4 and use GStreamer for audio. This port was fairly straightforward, with Claude having few problems with it. I also asked Claude to add highscores using the libmanette library and network game discovery with Avahi. So some nice little improvements.

All the applications are available either as Flatpaks or as Fedora RPMs through the GitLab project page, so I hope people enjoy these applications and tools. And enjoy the blasts from the past as much as I did.

Worries about Artificial Intelligence

When I speak to people both inside Red Hat and outside in the community, I often come across negativity, or sometimes even anger, towards Artificial Intelligence in the coding space. To be clear, I too worry about where things could be heading and how it will affect my livelihood, so I am not unsympathetic to those worries at all. I probably worry about these things at least a few times a day. At the same time, I don't think we can hide from or avoid this change; it is happening with or without us. We have to adapt to a world where this tool exists, just like our ancestors adapted to jobs changing due to industrialization and science before us. So do I worry about the future? Yes, I do. Do I worry about how I might personally be affected by this? Yes, I do. Do I worry about how society might change for the worse due to this? Yes, I do. But I also remind myself that I don't know the future, that people have found ways to move forward before, and that society has survived and thrived. What I can control is trying to stay on top of these changes myself and take advantage of them where I can, and that is my recommendation to the wider open source community too: leverage them to move open source forward, while putting our weight on the scale towards the best practices and policies around Artificial Intelligence.

The Next Test and where AI might have hit a limit for me.

So all these previous efforts taught me a lot of tricks and helped me understand how I can work with an AI agent like Claude, but especially after the success with the webcam I decided to up the stakes and see if I could use Claude to help me create a driver for my Plustek OpticFilm 8200i scanner. I have zero background in any kind of driver development, and probably less than zero in the field of scanner drivers specifically. So I went down a long row of dead ends on this journey, and to this day I have not been able to get a single scan out of the scanner that even remotely resembles the images I am trying to scan.

My idea was to have Claude analyse the Windows and Mac drivers and build me a SANE driver based on that, which turned out to be horribly naive and led nowhere. One thing I realized is that I would need to capture USB traffic to help Claude contextualize some of the findings it had from looking at the Windows and Mac drivers. I started out with Wireshark, feeding Claude the Wireshark capture logs. Claude quite soon concluded that the Wireshark logs weren't good enough and that I needed lower-level traffic capture. Buying a USB packet analyzer isn't cheap, so I had the idea that I could use one of the ARM development boards floating around the house as a USB relay, allowing me to perfectly capture the USB traffic. With some work I did manage to get my LibreComputer Solitude AML-S905D3-CC ARM board going, setting it in device mode, with a usb-relay daemon running on the board. After a lot of back and forth, and even at one point trying to ask Claude to implement a missing feature in the USB kernel stack, I realized this would never work, and I ended up ordering a Beagle USB 480 hardware analyzer.
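For context, the software-side capture that Wireshark does on Linux goes through the kernel's usbmon interface; a rough sketch of that kind of capture (the bus number and the grep pattern are examples, not necessarily what my setup used) looks like:

```shell
# Expose raw USB traffic to capture tools via the usbmon kernel module
sudo modprobe usbmon
# Find which bus the scanner sits on
lsusb | grep -i plustek
# Capture all traffic on bus 1 into a file that can be analyzed later
sudo tshark -i usbmon1 -w scanner-capture.pcapng
```

Capture at this level is what proved insufficient here, hence the move towards a hardware analyzer.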

At about the same time I came across the chipset documentation for the Genesys Logic GL845 chip in the scanner. I assumed that between my new USB analyzer and the chipset docs it would be easy going from here on, but so far no. I even had Claude decompile the Windows driver using Ghidra and then try to extract the needed information from the decompiled code.
I also bought a network-controlled electric outlet so that Claude can cycle the scanner's power on its own.

So the problem here is that with zero scanner driver knowledge I don't even know what I should be looking for, or where I should point Claude, so I kept trying to brute force it by trial and error. I managed to make SANE detect the scanner, and I got motor and lamp control going, but that is about it. I can hear the scanner motor running when I ask for a scan, but I don't know if it moves correctly. I can see light turning on and off inside the scanner, but once again I don't know if it is happening at the correct times and for the correct durations. And Claude of course has no way of knowing either, relying on me to tell it if something seems to have improved compared to how it was.

I have now used Claude to create two tools for Claude to use: one using a camera to detect what is happening with the light inside the scanner, and the other recording sound, trying to compare the sound this driver makes to the sounds coming out when doing a working scan with the macOS application. I don't know if this will take me to the promised land eventually, but so far I consider my scanner driver attempt a giant failure. At the same time, I do believe that if someone actually skilled in scanner driver development were doing this, they could have guided Claude to do the right things and probably would have had a working driver by now.

So I don’t know if I hit the kind of thing that will always be hard for an AI to do, as it has to interact with things existing in the real world, or if newer versions of Claude, Gemini or Codex will suddenly get past a threshold and make this seem easy, but this is where things are at for me at the moment.

Jussi Pakkanen: Everything old is new again: memory optimization

23 March 2026 at 14:06

At this point in history, AI sociopaths have purchased all the world's RAM in order to run their copyright infringement factories at full blast. Thus the amount of memory in consumer computers and phones seems to be going down. After decades of not having to care about memory usage, reducing it has very much become a thing.

Relevant questions to this state of things include a) is it really worth it and b) what sort of improvements are even possible. The answers to these depend on the task and data set at hand. Let's examine one such case. It might be a bit contrived, unrepresentative and unfair, but on the other hand it's the one I already had available.

Suppose you have to write a script that opens a text file, parses it as UTF-8, splits it into words according to white space, counts the number of times each word appears, and prints the words and counts in decreasing order (most common first).
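As an aside, the classic Unix answer to this exercise (skipping the explicit UTF-8 validation) is a short shell pipeline; it is not what is measured below, just the traditional baseline:

```shell
# One word per line, then group identical words and sort by count
tr -s '[:space:]' '\n' < input.txt | sort | uniq -c | sort -rn
```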

The Python baseline

This sounds like a job for Python. Indeed, an implementation takes fewer than 30 lines of code. Its memory consumption on a small text file [update: the repo's readme, which is 1.3 kB] looks like this.

Image

Peak memory consumption is 1.3 MB. At this point you might want to stop reading and make a guess on how much memory a native code version of the same functionality would use.

The native version

A fully native C++ version using Pystd requires 60 lines of code to implement the same thing. If you ignore the boilerplate, the core functionality fits in 20 lines. The steps needed are straightforward:

  1. Mmap the input file to memory.
  2. Validate that it is UTF-8.
  3. Convert the raw data into a UTF-8 view.
  4. Split the view into words lazily.
  5. Compute the result into a hash table whose keys are string views, not strings.

The main advantage of this is that there are no string objects. The only dynamic memory allocations are for the hash table and the final vector used for sorting and printing. All text operations use string views, which are basically just a pointer + size.

In code this looks like the following:

Image

Its memory usage looks like this.

Image

Peak consumption is ~100 kB in this implementation. It uses only 7.7% of the amount of memory required by the Python version.

Isn't this a bit unfair towards Python?

In a way it is. The Python runtime has a hefty startup cost but in return you get a lot of functionality for free. But if you don't need said functionality, things start looking very different.

But we can make this comparison even more unfair towards Python. If you look at the memory consumption graph you'll quite easily see that 70 kB is used by the C++ runtime. It reserves a bunch of memory up front so that it can do stack unwinding and exception handling even when the process is out of memory. It should be possible to build this code without exception support, in which case the total memory usage would be a mere 21 kB. Such a version would yield a 98.4% reduction in memory usage.

Colin Walters: Agent security is just security

23 March 2026 at 13:51

Suddenly I have been hearing the term Landlock more in (agent) security circles. To me this is a bit weird, because while Landlock is absolutely a useful Linux security tool, it’s been a bit obscure, and that’s for good reason. It feels to me a lot like how the weird prevalence of the word delve became a clear tipoff that LLMs were the ones writing, not a human.

Here’s my opinion: Agentic LLM AI security is just security.

We do not need to reinvent any fundamental technologies for this. Most uses of agents one hears about provide the ability to execute arbitrary code as a feature. It’s how OpenCode, Claude Code, Cursor, OpenClaw and many more work.

Especially let me emphasize since OpenClaw is popular for some reason right now: You should absolutely not give any LLM tool blanket read and write access to your full user account on your computer. There are many issues with that, but everyone using an LLM needs to understand just how dangerous prompt injection can be. This post is just one of many examples. Even global read access is dangerous because an attacker could exfiltrate your browser cookies or other files.

Let’s go back to Landlock. One prominent place I’ve seen it mentioned is in the project nono.sh, which pitches itself as a new sandbox for agents. It’s not the only one, but it does lean heavily on Landlock on Linux. Let’s dig into this blog post from the author. First of all, I’m glad they are working on agentic security. We both agree: unsandboxed OpenClaw (and other tools!) is a bad idea.

Here’s where we disagree:

With AI agents, the core issue is access without boundaries. We give agents our full filesystem permissions because that’s how Unix works. We give them network access because they need to call APIs. We give them access to our SSH keys, our cloud credentials, our shell history, our browser cookies – not because they need any of that, but because we haven’t built the tooling to say “you can have this, but not that.”

No. We have had usable tooling for “you can have this, but not that” for well over a decade. Docker kicked off a revolution for a reason: docker run <app> is “reasonably completely isolated” from the host system. Since then, of course, there have been many OCI runtime implementations, from podman to apple/container on macOS and more.

If you want to provide the app some credentials, you can just use bind mounts, like docker|podman|ctr -v ~/.config/somecred.json:/etc/cred.json:ro. Notice the ro there, which makes it read-only. Yes, it’s that straightforward to have “this but not that”.
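As a concrete sketch of that default-deny approach (the image name, paths and command are hypothetical), running an agent with one writable project directory and one read-only credential could look like:

```shell
# Nothing from the host is visible except the two explicit mounts,
# and the network is dropped entirely when the task allows it.
podman run --rm \
  --network=none \
  -v "$PWD/project:/work:rw" \
  -v "$HOME/.config/somecred.json:/etc/cred.json:ro" \
  -w /work \
  some-agent-image agent --task "fix the failing tests"
```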

Other tools like Flatpak on Linux have leveraged Linux kernel namespacing similar to this to streamline running GUI apps in an isolated way from the host. For a decade.

There’s far more sophisticated tooling built on top of similar container runtimes since then, from having them transparently backed by virtual machines to Kubernetes and similar projects, which are all about running containers at scale with lots of built-up security knowledge.

That doesn’t need reinventing. It’s generic workload technology, and agentic AI is just another workload from the perspective of kernel/host level isolation. There absolutely are some new, novel risks and issues of course: but again the core principle here is we don’t need to reinvent anything from the kernel level up.

Security here really needs to start from defaulting to fully isolating (from the host and other apps), and then only allow-listing in what is needed. That’s again how docker run worked from the start. Also on this topic, Flatpak portals are a cool technology for dynamic resource access on a single host system.

So why do I think Landlock is obscure? Basically because most workloads should already be isolated per the above, and because Landlock has heavy overlap with the wide variety of Linux kernel security mechanisms already in use in containers.

The primary pitch of Landlock is more for an application to further isolate itself; it’s at its best as a complement to coarse-grained isolation techniques like virtualization or containers. One way to think of it is that container runtimes often don’t grant the privileges needed for an application to spawn its own sub-containers (for kernel attack surface reasons), but Landlock is absolutely a reasonable thing for an app to use to e.g. disable networking in a sub-process that doesn’t need it.

Of course the challenge is that not every app is easy to run in a container or virtual machine. Some workloads are most convenient with that “ambient access” to all of your data (like an IDE or just a file browser).

But giving that ambient access by default to agentic AI is a terrible idea. So don’t do it: use (OCI) containers and allowlist in what you need.

(There are other things nono is doing here that I find dubious or duplicative; for example, I don’t see the need for a new filesystem snapshotting system when we have both git and OCI.)

But I’m not specifically trying to pick on nono. Just in the last two weeks I had to point out similar problems in two different projects I saw go by that were also pitched for AI security. One used bubblewrap, but with insufficient sandboxing, and the other was also trying to use Landlock.

On the other hand, I do think the credential problem (which nono and others are trying to address in different ways) is somewhat specific to agentic AI, and likely does need new tooling. When deploying a typical containerized app, usually one just provisions a few relatively static credentials. In contrast, developer/user agentic AI is often a lot more freeform and dynamic, and while it’s hard to get most apps to leak credentials without completely compromising them, it’s much easier with agentic AI and prompt injection. I have thoughts on credentials, and absolutely more work here is needed.

It’s great that people want to work on FOSS security, and AI could certainly use more people thinking about security. But I don’t think we need “next generation” security here: we should build on top of the “previous generation”. I actually use plain separate Unix users for isolation for some things, which works quite well! Running OpenClaw in a secondary user account where one only logs into a select few things (i.e. not your email and online banking) is much more reasonable, although clearly a lot of care is still needed. Landlock is a fine technology, but it is just not there as a replacement for other sandboxing techniques. So just use containers and virtual machines, because these are proven technologies. And if you take one message away from this: absolutely don’t wire up an LLM via OpenClaw or a similar tool to your complete digital life with no sandboxing.

Khrys’presso du lundi 23 mars 2026

23 March 2026 at 06:42

Like every Monday, a look in the rear-view mirror to catch up on the news you may have missed last week.


All the links listed below should be freely accessible. If not, consider enabling your favourite JavaScript blocker or switching to “reader mode” (Firefox) ;-)

Brave New World

AI special

War(s) in the Middle East special

RIP (or not)

Women around the world special

France special

Women in France special

Media and power special

Irresponsible troublemakers managing things appallingly (neoliberal-style) special

Rollback of rights and liberties, police violence, rise of the far right special

Resistance special

Resistance tools special

GAFAM and co. special

The other reads of the week

The comics/graphics/photos of the week

The videos/podcasts of the week

The nice things of the week

Image

Find the previous web reviews in the Libre Veille category of the Framablog.

The articles, comments and other images that make up these “Khrys’presso” reflect my views alone (Khrys).

Matthew Garrett: SSH certificates and git signing

21 March 2026 at 19:38

When you’re looking at source code it can be helpful to have some evidence indicating who wrote it. Author tags give a surface level indication, but it turns out you can just lie and if someone isn’t paying attention when merging stuff there’s certainly a risk that a commit could be merged with an author field that doesn’t represent reality. Account compromise can make this even worse - a PR being opened by a compromised user is going to be hard to distinguish from the authentic user. In a world where supply chain security is an increasing concern, it’s easy to understand why people would want more evidence that code was actually written by the person it’s attributed to.

git has support for cryptographically signing commits and tags. Because git is about choice even if Linux isn’t, you can do this signing with OpenPGP keys, X.509 certificates, or SSH keys. You’re probably going to be unsurprised about my feelings around OpenPGP and the web of trust, and X.509 certificates are an absolute nightmare. That leaves SSH keys, but bare cryptographic keys aren’t terribly helpful in isolation - you need some way to make a determination about which keys you trust. If you’re using something like GitHub you can extract that information from the set of keys associated with a user account1, but that means that a compromised GitHub account is now also a way to alter the set of trusted keys. And also: when was the last time you audited your keys, and how certain are you that every trusted key there is still 100% under your control? Surely there’s a better way.

SSH Certificates

And, thankfully, there is. OpenSSH supports certificates: an SSH public key that’s been signed by some trusted party, which lets you assert that it’s trustworthy in some form. SSH certificates also contain metadata in the form of principals, a list of identities that the trusted party included in the certificate. These might simply be usernames, but they might also provide information about group membership. There’s also, unsurprisingly, native support in SSH for forwarding them (using the agent forwarding protocol), so you can keep your keys on your local system, ssh into your actual dev system, and have access to them without any additional complexity.
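To make that concrete, issuing such a certificate with stock OpenSSH tooling looks roughly like this (the file names, identity and principal are examples):

```shell
# The trusted party generates a CA keypair once
ssh-keygen -t ed25519 -f ca_key -N '' -C 'example CA'
# A user generates their own keypair
ssh-keygen -t ed25519 -f user_key -N '' -C 'alice'
# The CA signs the user's public key, embedding the principal "alice"
# and a one-year validity window; this writes user_key-cert.pub
ssh-keygen -s ca_key -I alice-laptop -n alice -V +52w user_key.pub
# Inspect the resulting certificate, including its principals
ssh-keygen -L -f user_key-cert.pub
```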

And, wonderfully, you can use them in git! Let’s find out how.

Local config

There are two main parameters you need to set. First,

git config set gpg.format ssh

because unfortunately for historical reasons all the git signing config is under the gpg namespace even if you’re not using OpenPGP. Yes, this makes me sad. But you’re also going to need something else. Either user.signingkey needs to be set to the path of your certificate, or you need to set gpg.ssh.defaultKeyCommand to a command that will talk to an SSH agent and find the certificate for you (this can be helpful if it’s stored on a smartcard or something rather than on disk). Thankfully for you, I’ve written one. It will talk to an SSH agent (either whatever’s pointed at by the SSH_AUTH_SOCK environment variable or with the -agent argument), find a certificate signed with the key provided with the -ca argument, and then pass that back to git. Now you can simply pass -S to git commit and various other commands, and you’ll have a signature.
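Put together, a minimal local setup with user.signingkey pointing straight at a key on disk (paths are examples) looks like:

```shell
# Sign with SSH keys instead of OpenPGP
git config gpg.format ssh
# Point at your certificate (or plain public key); the matching
# private key must sit next to it on disk or be loaded in your agent
git config user.signingkey ~/.ssh/id_ed25519-cert.pub
# Sign a single commit explicitly...
git commit -S -m "signed commit"
# ...or turn signing on for every commit
git config commit.gpgsign true
```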

Validating signatures

This is a bit more annoying. Using native git tooling ends up calling out to ssh-keygen2, which validates signatures against a file in a format that looks somewhat like authorized_keys. This lets you add something like:

* cert-authority ssh-rsa AAAA…

which will match all principals (the wildcard) and succeed if the signature is made with a certificate that’s signed by the key following cert-authority. I recommend you don’t read the code that does this in git because I made that mistake myself, but it does work. Unfortunately it doesn’t provide a lot of granularity around things like “Does the certificate need to be valid at this specific time” and “Should the user only be able to modify specific files” and that kind of thing, but also if you’re using GitHub or GitLab you wouldn’t need to do this at all because they’ll just do this magically and put a “verified” tag against anything with a valid signature, right?

Haha. No.

Unfortunately while both GitHub and GitLab support using SSH certificates for authentication (so a user can’t push to a repo unless they have a certificate signed by the configured CA), there’s currently no way to say “Trust all commits with an SSH certificate signed by this CA”. I am unclear on why. So, I wrote my own. It takes a range of commits, and verifies that each one is signed with either a certificate signed by the key in CA_PUB_KEY or (optionally) an OpenPGP key provided in ALLOWED_PGP_KEYS. Why OpenPGP? Because even if you sign all of your own commits with an SSH certificate, anyone using the API or web interface will end up with their commits signed by an OpenPGP key, and if you want to have those commits validate you’ll need to handle that.

In any case, this should be easy enough to integrate into whatever CI pipeline you have. This is currently very much a proof of concept and I wouldn’t recommend deploying it anywhere, but I am interested in merging support for additional policy around things like expiry dates or group membership.

Doing it in hardware

Of course, certificates don’t buy you any additional security if an attacker is able to steal your private key material - they can steal the certificate at the same time. This can be avoided on almost all modern hardware by storing the private key in a separate cryptographic coprocessor - a Trusted Platform Module on PCs, or the Secure Enclave on Macs. If you’re on a Mac then Secretive has been around for some time, but things are a little harder on Windows and Linux - there are various things you can do with PKCS#11, but you’ll hate yourself even more than you’ll hate me for suggesting it in the first place, and there’s ssh-tpm-agent, except it’s quite tied to Linux.

So, obviously, I wrote my own. This makes use of the go-attestation library my team at Google wrote, and is able to generate TPM-backed keys and export them over the SSH agent protocol. It’s also able to proxy requests back to an existing agent, so you can just have it take care of your TPM-backed keys and continue using your existing agent for everything else. In theory it should also work on Windows3 but this is all in preparation for a talk I only found out I was giving about two weeks beforehand, so I haven’t actually had time to test anything other than that it builds.

And, delightfully, because the agent protocol doesn’t care about where the keys are actually stored, this still works just fine with forwarding - you can ssh into a remote system and sign something using a private key that’s stored in your local TPM or Secure Enclave. Remote use can be as transparent as local use.

Wait, attestation?

Ah yes you may be wondering why I’m using go-attestation and why the term “attestation” is in my agent’s name. It’s because when I’m generating the key I’m also generating all the artifacts required to prove that the key was generated on a particular TPM. I haven’t actually implemented the other end of that yet, but if implemented this would allow you to verify that a key was generated in hardware before you issue it with an SSH certificate - and in an age of agentic bots accidentally exfiltrating whatever they find on disk, that gives you a lot more confidence that a commit was signed on hardware you own.

Conclusion

Using SSH certificates for git commit signing is great - the tooling is a bit rough but otherwise they’re basically better than every other alternative, and also if you already have infrastructure for issuing SSH certificates then you can just reuse it4 and everyone wins.


  1. Did you know you can just download people’s SSH pubkeys from github from https://github.com/<username>.keys? Now you do ↩︎

  2. Yes it is somewhat confusing that the keygen command does things other than generate keys ↩︎

  3. This is more difficult than it sounds ↩︎

  4. And if you don’t, by implementing this you now have infrastructure for issuing SSH certificates and can use that for SSH authentication as well. ↩︎
