planet grml

January 21, 2026

Evgeni Golov


Validating cloud-init configs without being root

Somehow this whole DevOps thing is all about generating the wildest things from some (usually equally wild) template.

And today we're gonna generate YAML from ERB, what could possibly go wrong?!

Well, actually, quite a lot, so one wants to validate the generated result before using it to break systems at scale.

The YAML we generate is a cloud-init cloud-config, and while checking that we generated a valid YAML document is easy (and we were already doing that), it would be much better if we could check that cloud-init can actually use it.

Enter cloud-init schema, or so I thought. Turns out running cloud-init schema is rather broken without root privileges, as it tries to load a ton of information from the running system. This seems like a bug (or multiple), as the data should not be required for the validation of the schema itself. I've not found a way to disable that behavior.

Luckily, I know Python.

Enter evgeni-knows-better-and-can-write-python:

#!/usr/bin/env python3

import sys
from cloudinit.config.schema import get_schema, validate_cloudconfig_file, SchemaValidationError

try:
    valid = validate_cloudconfig_file(config_path=sys.argv[1], schema=get_schema())
    if not valid:
        raise RuntimeError("Schema is not valid")
except (SchemaValidationError, RuntimeError) as e:
    print(e)
    sys.exit(1)

The canonical version of this lives in the Foreman git repo, so go there if you think this will ever receive any updates.

The hardest part was understanding the validate_cloudconfig_file API, as it will sometimes raise a SchemaValidationError, sometimes a RuntimeError and sometimes just return False. No idea why. But the above just turns it into a couple of printed lines and a non-zero exit code, unless of course there are no problems, then you get peaceful silence.
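
Usage is then just a matter of passing the rendered file as the first argument (script and file names here are placeholders, pick whatever matches your setup); silence and a zero exit code mean the config validated fine:

% python3 validate-cloud-config.py user-data.yaml
% echo $?
0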

by evgeni at January 21, 2026 07:42 PM

January 10, 2026

Michael Prokop


Bookdump 2025

Photo of the books presented here

My 2025 reading year, at an average of a bit more than one book per week, was comparable to 2024. Here is my best-of of the books I finished reading in 2025 (those I found particularly worth reading or want to recommend; the order follows the photo and does not imply any ranking):

  • Russische Spezialitäten, Dmitrij Kapitelman. What a firework of a book: powerful language, sad, amusing.
  • Die Jungfrau, Monika Helfer. After Helfer's “Die Bagage”, “Löwenherz” and “Vati”, this book was of course mandatory reading for me as well.
  • Das Buch zum Film, Clemens J. Setz. Wonderful everyday observations and bons mots – I really have only one point of criticism: at 192 pages it is too short.
  • Wackelkontakt, Wolf Haas. Yes, yes, a well-known bestseller and all that. But he is and remains one of my favourite authors. I attended his reading in Graz and afterwards even read the book a second time, without regretting a single second of it. Calling him a language artist is an understatement!
  • Fleisch ist mein Gemüse, Heinz Strunk. I love background stories, especially when they are about music and the life of musicians, and that's the case here with the excursion into the dance-music business. With a few exceptions, it reads very smoothly.
  • Wut und Wertung: Warum wir über Geschmack streiten, Johannes Franzen. Why do conflicts about taste, art and the canon escalate? Why is arguing about taste an important cultural technique? Franzen works through this using real, existing controversies and scandals – instructive and stimulating.
  • Klapper, Kurt Prödel. Fans of Clemens J. Setz will of course know Prödel, and since I also like coming-of-age novels, this was a double hit. I'm already looking forward to his new book “Salto”!
  • Hier treibt mein Kartoffelherz, Anna Weidenholzer. I can't really say anything more about this book, but I truly enjoyed reading it.
  • Die Infantin trägt den Scheitel links, Helena Adler. The book had an interesting pull on me, I simply wanted to keep reading. The playful language and wordplay made it even finer.
  • Das schöne Leben, Christiane Rösinger. Rösinger's books were recommended to me by Kathrin Passig (a bullseye, thanks!). I have since got hold of all of Rösinger's other books (“Berlin – Baku. Meine Reise zum Eurovision Song Contest”, “Zukunft machen wir später: Meine Deutschstunden mit Geflüchteten”, “Liebe wird oft überbewertet”) as well, and enjoyed reading them very much.

by mika at January 10, 2026 05:29 PM

December 14, 2025

Evgeni Golov


Home Assistant, Govee Lights Local, VLANs, Oh my!

We recently bought some Govee Glide Hexa Light Panels, because they have a local LAN API that is well integrated into Home Assistant. Or so we thought.

Our network is not that complicated, but there is a dedicated VLAN for IOT devices. Home Assistant runs in a container (with network=host) on a box in the basement, and that box has a NIC in the IOT VLAN so it can reach devices there easily. So far, this has never been a problem.

Enter the Govee LAN API. Or maybe its Python implementation. Not exactly sure who's to blame here.

The API involves sending JSON over multicast, which the Govee device will answer to.

No devices found on the network

After turning logging for homeassistant.components.govee_light_local to 11, erm debug, we see:

DEBUG (MainThread) [homeassistant.components.govee_light_local.config_flow] Starting discovery with IP 192.168.42.2
DEBUG (MainThread) [homeassistant.components.govee_light_local.config_flow] No devices found with IP 192.168.42.2

That's not the IP address in the IOT VLAN!

Turns out the integration recently got support for multiple NICs, but Home Assistant doesn't just use all the interfaces it sees by default.

You need to go to Settings → Network → Network adapter and deselect "Autoconfigure", which will allow you to select individual interfaces.

Once you've done that, you'll see Starting discovery with IP messages for all selected interfaces, and adding Govee Lights Local will work.
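
In case you want that debug logging permanently, one way to get it (assuming you configure logging via configuration.yaml rather than the UI) is the logger integration:

logger:
  default: warning
  logs:
    homeassistant.components.govee_light_local: debug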

by evgeni at December 14, 2025 03:48 PM

December 12, 2025

grml development blog


Grml - new stable release 2025.12 available

We are proud to announce our new stable release 🚢 version 2025.12, code-named ‘Postwurfsendung’!

Grml is a bootable live system (Live CD) based on Debian. Grml 2025.12 brings you fresh software packages from Debian testing/forky, enhanced hardware support and addresses known bugs from previous releases.

Like in the previous release 2025.08, Live ISOs 📀 are available for 64-bit x86 (amd64) and 64-bit ARM CPUs (arm64).

❤️ Thanks ❤️

Once again netcup contributed financially, this time specifically to this release. Thank you, netcup ❤️

December 12, 2025 11:12 AM

December 03, 2025

Michael Prokop


HTU Bigband: Christmas concert on 12.12.2025

HTU Bigband Christmas celebration

Christmas in December – and with good big band music! 🎄

On 12 December 2025 we invite you to a Christmas concert of a special kind – with the HTU Bigband at Mo.xx! There will be jazz-rock, swing, soul-pop, Latin + funk and, of course, plenty of good vibes. Doors open at 7 pm, admission is a voluntary donation, and from 7:30 pm on there will be fine music for your ears. 🎶

🎵 What: fine big band music
⏰ When: Friday, 12 December 2025, from 7:30 pm
📍 Where: mo.xx, Moserhofgasse 34, Graz

Come by, ideally with family and friends! 🎅

by mika at December 03, 2025 07:20 PM

November 14, 2025

Michael Prokop


HTU Bigband @ 30 years of Radio Helsinki on 22.11.2025

HTU Bigband @ 30 years of Radio Helsinki

Radio Helsinki is turning 30 and celebrates with a packed music programme on Saturday, 22.11.2025 at Forum Stadtpark. 🥳 We'll be there with the HTU Bigband, playing from around 7:30 pm for roughly 1.5 hours. Come by and celebrate with us!

PS: if you really can't make it, at least turn on the radio or tune into the livestream! 🤓

Photo source / copyright: graz.social/@radiohelsinki/115428633169757433

by mika at November 14, 2025 04:08 PM

September 22, 2025

Evgeni Golov


Booting Vagrant boxes with UEFI on Fedora: Permission denied

If you're still using Vagrant (I am) and try to boot a box that uses UEFI (like boxen/debian-13), a simple vagrant init boxen/debian-13 and vagrant up will entertain you with a nice traceback:

% vagrant up
Bringing machine 'default' up with 'libvirt' provider...
==> default: Checking if box 'boxen/debian-13' version '2025.08.20.12' is up to date...
==> default: Creating image (snapshot of base box volume).
==> default: Creating domain with the following settings...
==> default:  -- Name:              tmp.JV8X48n30U_default
==> default:  -- Description:       Source: /tmp/tmp.JV8X48n30U/Vagrantfile
==> default:  -- Domain type:       kvm
==> default:  -- Cpus:              1
==> default:  -- Feature:           acpi
==> default:  -- Feature:           apic
==> default:  -- Feature:           pae
==> default:  -- Clock offset:      utc
==> default:  -- Memory:            2048M
==> default:  -- Loader:            /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/OVMF_CODE.fd
==> default:  -- Nvram:             /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/efivars.fd
==> default:  -- Base box:          boxen/debian-13
==> default:  -- Storage pool:      default
==> default:  -- Image(vda):        /home/evgeni/.local/share/libvirt/images/tmp.JV8X48n30U_default.img, virtio, 20G
==> default:  -- Disk driver opts:  cache='default'
==> default:  -- Graphics Type:     vnc
==> default:  -- Video Type:        cirrus
==> default:  -- Video VRAM:        16384
==> default:  -- Video 3D accel:    false
==> default:  -- Keymap:            en-us
==> default:  -- TPM Backend:       passthrough
==> default:  -- INPUT:             type=mouse, bus=ps2
==> default:  -- CHANNEL:             type=unix, mode=
==> default:  -- CHANNEL:             target_type=virtio, target_name=org.qemu.guest_agent.0
==> default: Creating shared folders metadata...
==> default: Starting domain.
==> default: Removing domain...
==> default: Deleting the machine folder
/usr/share/gems/gems/fog-libvirt-0.13.1/lib/fog/libvirt/requests/compute/vm_action.rb:7:in 'Libvirt::Domain#create': Call to virDomainCreate failed: internal error: process exited while connecting to monitor: 2025-09-22T10:07:55.081081Z qemu-system-x86_64: -blockdev {"driver":"file","filename":"/home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/OVMF_CODE.fd","node-name":"libvirt-pflash0-storage","auto-read-only":true,"discard":"unmap"}: Could not open '/home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/OVMF_CODE.fd': Permission denied (Libvirt::Error)
    from /usr/share/gems/gems/fog-libvirt-0.13.1/lib/fog/libvirt/requests/compute/vm_action.rb:7:in 'Fog::Libvirt::Compute::Shared#vm_action'
    from /usr/share/gems/gems/fog-libvirt-0.13.1/lib/fog/libvirt/models/compute/server.rb:81:in 'Fog::Libvirt::Compute::Server#start'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/start_domain.rb:546:in 'VagrantPlugins::ProviderLibvirt::Action::StartDomain#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/set_boot_order.rb:22:in 'VagrantPlugins::ProviderLibvirt::Action::SetBootOrder#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/share_folders.rb:22:in 'VagrantPlugins::ProviderLibvirt::Action::ShareFolders#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/prepare_nfs_settings.rb:21:in 'VagrantPlugins::ProviderLibvirt::Action::PrepareNFSSettings#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builtin/synced_folders.rb:87:in 'Vagrant::Action::Builtin::SyncedFolders#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builtin/delayed.rb:19:in 'Vagrant::Action::Builtin::Delayed#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builtin/synced_folder_cleanup.rb:28:in 'Vagrant::Action::Builtin::SyncedFolderCleanup#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/plugins/synced_folders/nfs/action_cleanup.rb:25:in 'VagrantPlugins::SyncedFolderNFS::ActionCleanup#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/prepare_nfs_valid_ids.rb:14:in 'VagrantPlugins::ProviderLibvirt::Action::PrepareNFSValidIds#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:127:in 'block in Vagrant::Action::Warden#finalize_action'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builder.rb:180:in 'Vagrant::Action::Builder#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/runner.rb:101:in 'block in Vagrant::Action::Runner#run'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/util/busy.rb:19:in 'Vagrant::Util::Busy.busy'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/runner.rb:101:in 'Vagrant::Action::Runner#run'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builtin/call.rb:53:in 'Vagrant::Action::Builtin::Call#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:127:in 'block in Vagrant::Action::Warden#finalize_action'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builder.rb:180:in 'Vagrant::Action::Builder#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/runner.rb:101:in 'block in Vagrant::Action::Runner#run'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/util/busy.rb:19:in 'Vagrant::Util::Busy.busy'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/runner.rb:101:in 'Vagrant::Action::Runner#run'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builtin/call.rb:53:in 'Vagrant::Action::Builtin::Call#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/create_network_interfaces.rb:197:in 'VagrantPlugins::ProviderLibvirt::Action::CreateNetworkInterfaces#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/create_networks.rb:40:in 'VagrantPlugins::ProviderLibvirt::Action::CreateNetworks#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/create_domain.rb:452:in 'VagrantPlugins::ProviderLibvirt::Action::CreateDomain#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/resolve_disk_settings.rb:143:in 'VagrantPlugins::ProviderLibvirt::Action::ResolveDiskSettings#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/create_domain_volume.rb:97:in 'VagrantPlugins::ProviderLibvirt::Action::CreateDomainVolume#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/handle_box_image.rb:127:in 'VagrantPlugins::ProviderLibvirt::Action::HandleBoxImage#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builtin/handle_box.rb:56:in 'Vagrant::Action::Builtin::HandleBox#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/handle_storage_pool.rb:63:in 'VagrantPlugins::ProviderLibvirt::Action::HandleStoragePool#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/set_name_of_domain.rb:34:in 'VagrantPlugins::ProviderLibvirt::Action::SetNameOfDomain#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builtin/provision.rb:80:in 'Vagrant::Action::Builtin::Provision#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/cleanup_on_failure.rb:21:in 'VagrantPlugins::ProviderLibvirt::Action::CleanupOnFailure#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:127:in 'block in Vagrant::Action::Warden#finalize_action'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builder.rb:180:in 'Vagrant::Action::Builder#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/runner.rb:101:in 'block in Vagrant::Action::Runner#run'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/util/busy.rb:19:in 'Vagrant::Util::Busy.busy'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/runner.rb:101:in 'Vagrant::Action::Runner#run'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builtin/call.rb:53:in 'Vagrant::Action::Builtin::Call#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builtin/box_check_outdated.rb:93:in 'Vagrant::Action::Builtin::BoxCheckOutdated#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builtin/config_validate.rb:25:in 'Vagrant::Action::Builtin::ConfigValidate#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builder.rb:180:in 'Vagrant::Action::Builder#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/runner.rb:101:in 'block in Vagrant::Action::Runner#run'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/util/busy.rb:19:in 'Vagrant::Util::Busy.busy'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/runner.rb:101:in 'Vagrant::Action::Runner#run'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/machine.rb:248:in 'Vagrant::Machine#action_raw'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/machine.rb:217:in 'block in Vagrant::Machine#action'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/environment.rb:631:in 'Vagrant::Environment#lock'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/machine.rb:203:in 'Method#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/machine.rb:203:in 'Vagrant::Machine#action'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/batch_action.rb:86:in 'block (2 levels) in Vagrant::BatchAction#run'

The important part here is

Call to virDomainCreate failed: internal error: process exited while connecting to monitor:
2025-09-22T10:07:55.081081Z qemu-system-x86_64: -blockdev {"driver":"file","filename":"/home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/OVMF_CODE.fd","node-name":"libvirt-pflash0-storage","auto-read-only":true,"discard":"unmap"}:
Could not open '/home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/OVMF_CODE.fd': Permission denied (Libvirt::Error)

Of course we checked that the file permissions on this file are correct (I'll save you the ls output), so what's next? Yes, of course, SELinux!

# ausearch -m AVC
time->Mon Sep 22 12:07:55 2025
type=AVC msg=audit(1758535675.080:1613): avc:  denied  { read } for  pid=257204 comm="qemu-system-x86" name="OVMF_CODE.fd" dev="dm-2" ino=1883946 scontext=unconfined_u:unconfined_r:svirt_t:s0:c352,c717 tcontext=unconfined_u:object_r:user_home_t:s0 tclass=file permissive=0

A process in the svirt_t domain tries to access something in the user_home_t domain and is denied by the kernel. So far, SELinux is both working as designed and preventing us from doing our work, nice.
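
For completeness, the offending label can also be seen directly on the file (output illustrative and shortened):

% ls -Z ~/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/OVMF_CODE.fd
unconfined_u:object_r:user_home_t:s0 /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/OVMF_CODE.fd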

For "normal" (non-UEFI) boxes, Vagrant uploads the image to libvirt, which stores it in ~/.local/share/libvirt/images/ and boots fine from there. For UEFI boxen, one also needs loader and nvram files, which Vagrant keeps in ~/.vagrant.d/boxes/<box_name> and that's what explodes in our face here.

As ~/.local/share/libvirt/images/ works well and is labeled svirt_home_t, let's see what other folders use that label:

# semanage fcontext -l |grep svirt_home_t
/home/[^/]+/\.cache/libvirt/qemu(/.*)?             all files          unconfined_u:object_r:svirt_home_t:s0
/home/[^/]+/\.config/libvirt/qemu(/.*)?            all files          unconfined_u:object_r:svirt_home_t:s0
/home/[^/]+/\.libvirt/qemu(/.*)?                   all files          unconfined_u:object_r:svirt_home_t:s0
/home/[^/]+/\.local/share/gnome-boxes/images(/.*)? all files          unconfined_u:object_r:svirt_home_t:s0
/home/[^/]+/\.local/share/libvirt/boot(/.*)?       all files          unconfined_u:object_r:svirt_home_t:s0
/home/[^/]+/\.local/share/libvirt/images(/.*)?     all files          unconfined_u:object_r:svirt_home_t:s0

Okay, that all makes sense, and it's just missing the Vagrant-specific folders!

# semanage fcontext -a -t svirt_home_t '/home/[^/]+/\.vagrant.d/boxes(/.*)?'

Now relabel the Vagrant boxes:

% restorecon -rv ~/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13
Relabeled /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13 from unconfined_u:object_r:user_home_t:s0 to unconfined_u:object_r:svirt_home_t:s0
Relabeled /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/metadata_url from unconfined_u:object_r:user_home_t:s0 to unconfined_u:object_r:svirt_home_t:s0
Relabeled /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12 from unconfined_u:object_r:user_home_t:s0 to unconfined_u:object_r:svirt_home_t:s0
Relabeled /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt from unconfined_u:object_r:user_home_t:s0 to unconfined_u:object_r:svirt_home_t:s0
Relabeled /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/box_0.img from unconfined_u:object_r:user_home_t:s0 to unconfined_u:object_r:svirt_home_t:s0
Relabeled /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/metadata.json from unconfined_u:object_r:user_home_t:s0 to unconfined_u:object_r:svirt_home_t:s0
Relabeled /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/Vagrantfile from unconfined_u:object_r:user_home_t:s0 to unconfined_u:object_r:svirt_home_t:s0
Relabeled /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/OVMF_CODE.fd from unconfined_u:object_r:user_home_t:s0 to unconfined_u:object_r:svirt_home_t:s0
Relabeled /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/OVMF_VARS.fd from unconfined_u:object_r:user_home_t:s0 to unconfined_u:object_r:svirt_home_t:s0
Relabeled /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/box_update_check from unconfined_u:object_r:user_home_t:s0 to unconfined_u:object_r:svirt_home_t:s0
Relabeled /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/efivars.fd from unconfined_u:object_r:user_home_t:s0 to unconfined_u:object_r:svirt_home_t:s0

And it works!

% vagrant up
Bringing machine 'default' up with 'libvirt' provider...
==> default: Checking if box 'boxen/debian-13' version '2025.08.20.12' is up to date...
==> default: Creating image (snapshot of base box volume).
==> default: Creating domain with the following settings...
==> default:  -- Name:              tmp.JV8X48n30U_default
==> default:  -- Description:       Source: /tmp/tmp.JV8X48n30U/Vagrantfile
==> default:  -- Domain type:       kvm
==> default:  -- Cpus:              1
==> default:  -- Feature:           acpi
==> default:  -- Feature:           apic
==> default:  -- Feature:           pae
==> default:  -- Clock offset:      utc
==> default:  -- Memory:            2048M
==> default:  -- Loader:            /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/OVMF_CODE.fd
==> default:  -- Nvram:             /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/efivars.fd
==> default:  -- Base box:          boxen/debian-13
==> default:  -- Storage pool:      default
==> default:  -- Image(vda):        /home/evgeni/.local/share/libvirt/images/tmp.JV8X48n30U_default.img, virtio, 20G
==> default:  -- Disk driver opts:  cache='default'
==> default:  -- Graphics Type:     vnc
==> default:  -- Video Type:        cirrus
==> default:  -- Video VRAM:        16384
==> default:  -- Video 3D accel:    false
==> default:  -- Keymap:            en-us
==> default:  -- TPM Backend:       passthrough
==> default:  -- INPUT:             type=mouse, bus=ps2
==> default:  -- CHANNEL:             type=unix, mode=
==> default:  -- CHANNEL:             target_type=virtio, target_name=org.qemu.guest_agent.0
==> default: Creating shared folders metadata...
==> default: Starting domain.
==> default: Domain launching with graphics connection settings...
==> default:  -- Graphics Port:      5900
==> default:  -- Graphics IP:        127.0.0.1
==> default:  -- Graphics Password:  Not defined
==> default:  -- Graphics Websocket: 5700
==> default: Waiting for domain to get an IP address...
==> default: Waiting for machine to boot. This may take a few minutes...
    default: SSH address: 192.168.124.157:22
    default: SSH username: vagrant
    default: SSH auth method: private key
    default:
    default: Vagrant insecure key detected. Vagrant will automatically replace
    default: this with a newly generated keypair for better security.
    default:
    default: Inserting generated public key within guest...
    default: Removing insecure key from the guest if it's present...
    default: Key inserted! Disconnecting and reconnecting using new SSH key...
==> default: Machine booted and ready!

by evgeni at September 22, 2025 10:37 AM

August 16, 2025

grml development blog


Grml - new stable release 2025.08 available

We are proud to announce our new stable release 🚢 version 2025.08, code-named ‘Oneinonein’!

Grml is a bootable live system (Live CD) based on Debian. Grml 2025.08 is based on the newly released Debian 13 trixie and addresses known bugs from previous releases.

Like in the previous release 2025.05, Live ISOs 📀 are available for 64-bit x86 (amd64) and 64-bit ARM CPUs (arm64).

❤️ Thanks ❤️

Without Debian this project would not exist, and we would like to celebrate with Debian their 32nd birthday!

August 16, 2025 02:00 PM

July 20, 2025

Michael Prokop


What to expect from Debian/trixie #newintrixie

Trixie Banner, Copyright 2024 Elise Couper

Update on 2025-07-28: added note about Debian 13/trixie support for OpenVox (thanks, Ben Ford!)

Debian v13 with codename trixie is scheduled to be published as the new stable release on the 9th of August 2025.

I was the driving force at several of my customers to be well prepared for the upcoming stable release (my efforts for trixie started in August 2024). On the one hand, to make sure packages we care about are available and actually make it into the release. On the other hand, to ensure there are no severe issues that make it into the release and to get proper and working upgrades. So far everything is looking pretty good and working fine, the efforts seem to have paid off. :)

As usual with major upgrades, there are some things to be aware of, so here are my public notes on trixie that might be useful for other folks. My focus is primarily on server systems, looking at things from a sysadmin perspective.

Further readings

As usual start at the official Debian release notes, make sure to especially go through What’s new in Debian 13 + issues to be aware of for trixie (strongly recommended read!).

Package versions

As a starting point, let’s look at some selected packages and their versions in bookworm vs. trixie as of 2025-07-20 (mainly having amd64 in mind):

Package bookworm/v12 trixie/v13
ansible 2.14.3 2.19.0
apache 2.4.62 2.4.64
apt 2.6.1 3.0.3
bash 5.2.15 5.2.37
ceph 16.2.11 18.2.7
docker 20.10.24 26.1.5
dovecot 2.3.19 2.4.1
dpkg 1.21.22 1.22.21
emacs 28.2 30.1
gcc 12.2.0 14.2.0
git 2.39.5 2.47.2
golang 1.19 1.24
libc 2.36 2.41
linux kernel 6.1 6.12
llvm 14.0 19.0
lxc 5.0.2 6.0.4
mariadb 10.11 11.8
nginx 1.22.1 1.26.3
nodejs 18.13 20.19
openjdk 17.0 21.0
openssh 9.2p1 10.0p1
openssl 3.0 3.5
perl 5.36.0 5.40.1
php 8.2+93 8.4+96
podman 4.3.1 5.4.2
postfix 3.7.11 3.10.3
postgres 15 17
puppet 7.23.0 8.10.0
python3 3.11.2 3.13.5
qemu/kvm 7.2 10.0
rsync 3.2.7 3.4.1
ruby 3.1 3.3
rust 1.63.0 1.85.0
samba 4.17.12 4.22.3
systemd 252.36 257.7-1
unattended-upgrades 2.9.1 2.12
util-linux 2.38.1 2.41
vagrant 2.3.4 2.3.7
vim 9.0.1378 9.1.1230
zsh 5.9 5.9

Misc unsorted

  • The asterisk package once again didn’t make it into trixie (see #1031046)
  • The new debian-repro-status package provides the identically named command-line tool debian-repro-status to query the reproducibility status of your installed Debian packages
  • The Grml live system project got more of their packages into Debian. Available as of trixie are now also grml-keyring (OpenPGP certificates used for signing the Grml repositories), grml-hwinfo (a tool which collects hardware information) + grml-paste (a command line interface for paste.debian.net)
  • If you use pacemaker, be aware that its fence-agents package is now a transitional package. All the fence-agents got split into separate packages (fence-agents-$whatever). If you want to have all the fence-agents available, make sure to install the fence-agents-all package. If you have Recommends disabled, you definitely should be aware of this.
  • usrmerge is finalized (also see dpkg warning issue in release notes)
  • For an overview of the XMPP/Jabber situation in trixie see xmpp-team’s blog post
  • The curl package now includes the wcurl command line tool, being a simple wrapper around curl to easily download files

apt

The new apt version 3.0 brings several new features, including:

  • support for colors (f.e. green for installs/upgrades, yellow for downgrades, red for removals; can be disabled via --no-color, APT_NO_COLOR=1 or NO_COLOR=1 and customized via e.g. APT::Color::Action::Install “cyan”)
  • organizes output in more readable sections and shows removals more prominently
  • uses sequoia to verify signatures
  • includes a new solver
  • the new apt modernize-sources command converts /etc/apt/sources.list.d/*.list files into the new .sources format (AKA DEB822, see the example after this list)
  • the new apt distclean command removes all files under $statedir/lists except Release, Release.gpg, and InRelease (it can be used, for example, when finalizing images distributed to users)
  • new configuration option APT::NeverAutoRemove::KernelCount for keeping a configurable number of kernels, f.e. setting APT::NeverAutoRemove::KernelCount 3 will keep 3 kernels (including the running and the most recent one)
  • new command line option --snapshot, and configuration option APT::Snapshot, controlling the snapshot chosen for archives with Snapshot: enable
  • new command line option --update to run the update command before the specified command, like apt --update install zsh, apt --update remove foobar or apt --update safe-upgrade
  • apt-key is gone, and there’s no replacement for it available (if you need an interface for listing present keys)
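
As an illustration of the target format of apt modernize-sources, a converted Debian entry could look roughly like this (the exact URIs, suites and components of course depend on your existing sources.list):

Types: deb
URIs: http://deb.debian.org/debian
Suites: trixie trixie-updates
Components: main
Signed-By: /usr/share/keyrings/debian-archive-keyring.gpg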

systemd

systemd got upgraded from v252.36-1~deb12u1 to 257.7-1 and there are lots of changes.

Be aware that systemd v257 brings a new net.naming_scheme (v257): the PCI slot number is now read from the firmware_node/sun sysfs file, and the naming scheme based on devicetree aliases was extended to support aliases for individual interfaces of controllers with multiple ports. This might affect you, see e.g. #1092176 and #1107187; the Debian Wiki provides further useful information.
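
If you are affected by such a rename and want to postpone dealing with it, one option (an assumption on my side, please double-check systemd.net-naming-scheme(7) and the Debian Wiki before relying on it) is pinning the previous naming scheme on the kernel command line:

# grep CMDLINE /etc/default/grub   (remember to run update-grub afterwards)
GRUB_CMDLINE_LINUX_DEFAULT="quiet net.naming-scheme=v252"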

There are new systemd tools available:

  • run0: temporarily and interactively acquire elevated or different privileges (serves a similar purpose as sudo)
  • systemd-ac-power: Report whether we are connected to an external power source
  • systemd-confext: Activates System Extension Images
  • systemd-vpick: Resolve paths to ‘.v/’ versioned directories
  • varlinkctl: Introspect with and invoke Varlink services

The tools provided by systemd gained several new options:

  • busctl: new option --limit-messages=NUMBER (Stop monitoring after receiving the specified number of messages)
  • hostnamectl: new option -j (same as --json=pretty on tty, --json=short otherwise)
  • journalctl: new options --image-policy=POLICY (Specify disk image dissection policy), --invocation=ID (Show logs from the matching invocation ID), -I (Show logs from the latest invocation of unit), --exclude-identifier=STRING (Hide entries with the specified syslog identifier), --truncate-newline (Truncate entries by first newline character), --list-invocations (Show invocation IDs of specified unit), --list-namespaces (Show list of journal namespaces)
  • kernel-install: new commands add-all + list and plenty of new command line options
  • localectl: new option --full (Do not ellipsize output)
  • loginctl: new options --json=MODE (Generate JSON output for list-sessions/users/seats) + -j (Same as --json=pretty on tty, --json=short otherwise)
  • networkctl: new commands edit FILES|DEVICES… (Edit network configuration files), cat [FILES|DEVICES…] (Show network configuration files), mask FILES… (Mask network configuration files) + unmask FILES… (Unmask network configuration files) + persistent-storage BOOL (Notify systemd-networkd if persistent storage is ready), and new options --no-ask-password (Do not prompt for password), --no-reload (Do not reload systemd-networkd or systemd-udevd after editing network config), --drop-in=NAME (Edit specified drop-in instead of main config file), --runtime (Edit runtime config files) + --stdin (Read new contents of edited file from stdin)
  • systemctl: new commands list-paths [PATTERN] (List path units currently in memory, ordered by path), whoami [PID…] (Return unit caller or specified PIDs are part of), soft-reboot (Shut down and reboot userspace) + sleep (Put the system to sleep), and new options --capsule=NAME (Connect to service manager of specified capsule), --before (Show units ordered before with ‘list-dependencies’), --after (Show units ordered after with ‘list-dependencies’), --kill-value=INT (Signal value to enqueue), --no-warn (Suppress several warnings shown by default), --message=MESSAGE (Specify human readable reason for system shutdown), --image-policy=POLICY (Specify disk image dissection policy), --reboot-argument=ARG (Specify argument string to pass to reboot()), --drop-in=NAME (Edit unit files using the specified drop-in file name), --when=TIME (Schedule halt/power-off/reboot/kexec action after a certain timestamp) + --stdin (Read new contents of edited file from stdin)
  • systemd-analyze: new commands architectures [NAME…] (List known architectures), smbios11 (List strings passed via SMBIOS Type #11), image-policy POLICY… (Analyze image policy string), fdstore SERVICE… (Show file descriptor store contents of service), malloc [D-BUS SERVICE…] (Dump malloc stats of a D-Bus service), has-tpm2 (Report whether TPM2 support is available), pcrs [PCR…] (Show TPM2 PCRs and their names) + srk [>FILE] (Write TPM2 SRK (to FILE)) and new options --no-legend (Disable column headers and hints in plot with either --table or --json=), --instance=NAME (Specify fallback instance name for template units), --unit=UNIT (Evaluate conditions and asserts of unit), --table (Output plot’s raw time data as a table), --scale-svg=FACTOR (Stretch x-axis of plot by FACTOR (default: 1.0)), --detailed (Add more details to SVG plot), --tldr (Skip comments and empty lines), --image-policy=POLICY (Specify disk image dissection policy) + --mask (Parse parameter as numeric capability mask)
  • systemd-ask-password: new options --user (Ask only our own user’s agents) + --system (Ask agents of the system and of all users)
  • systemd-cat: new option --namespace=NAMESPACE (Connect to specified journal namespace)
  • systemd-creds: new options --user (Select user-scoped credential encryption), --uid=UID (Select user for scoped credentials) + --allow-null (Allow decrypting credentials with empty key)
  • systemd-detect-virt: new options --cvm (Only detect whether we are run in a confidential VM) + --list-cvm (List all known and detectable types of confidential virtualization)
  • systemd-firstboot: new options --image-policy=POLICY (Specify disk image dissection policy), --kernel-command-line=CMDLINE (Set kernel command line) + --reset (Remove existing files)
  • systemd-id128: new commands var-partition-uuid (Print the UUID for the /var/ partition) + show [NAME|UUID] (Print one or more UUIDs), and new options --no-pager (Do not pipe output into a pager), --no-legend (Do not show the headers and footers), --json=FORMAT (Output inspection data in JSON), -j (Equivalent to --json=pretty (on TTY) or --json=short (otherwise)) + -P --value (Only print the value)
  • systemd-inhibit: new option --no-ask-password (Do not attempt interactive authorization)
  • systemd-machine-id-setup: new option --image-policy=POLICY (Specify disk image dissection policy)
  • systemd-mount: new options --json=pretty|short|off (Generate JSON output) + --tmpfs (Create a new tmpfs on the mount point)
  • systemd-notify: new options --reloading (Inform the service manager about configuration reloading), --stopping (Inform the service manager about service shutdown), --exec (Execute command line separated by ‘;’ once done), --fd=FD (Pass specified file descriptor along with the message) + --fdname=NAME (Name to assign to passed file descriptor(s))
  • systemd-path: new option --no-pager (Do not pipe output into a pager)
  • systemd-run: new options --expand-environment=BOOL (Control expansion of environment variables), --json=pretty|short|off (Print unit name and invocation id as JSON), --ignore-failure (Ignore the exit status of the invoked process) + --background=COLOR (Set ANSI color for background)
  • systemd-sysext: new options --mutable=yes|no|auto|import|ephemeral|ephemeral-import (Specify a mutability mode of the merged hierarchy), --no-reload (Do not reload the service manager), --image-policy=POLICY (Specify disk image dissection policy) + --noexec=BOOL (Whether to mount extension overlay with noexec)
  • systemd-sysusers: new options --tldr (Show non-comment parts of configuration) + --image-policy=POLICY (Specify disk image dissection policy)
  • systemd-tmpfiles: new command --purge (Delete files and directories marked for creation in specified configuration files (careful!)), and new options --user (Execute user configuration), --tldr (Show non-comment parts of configuration files), --graceful (Quietly ignore unknown users or groups), --image-policy=POLICY (Specify disk image dissection policy) + --dry-run (Just print what would be done)
  • systemd-umount: new options --json=pretty|short|off (Generate JSON output) + --tmpfs (Create a new tmpfs on the mount point)
  • timedatectl: new commands ntp-servers INTERFACE SERVER (Set the interface specific NTP servers) + revert INTERFACE (Revert the interface specific NTP servers) and new option -P NAME (Equivalent to --value --property=NAME)

Debian’s systemd ships new binary packages:

  • systemd-boot-efi-amd64-signed (Tools to manage UEFI firmware updates (signed))
  • systemd-boot-tools (simple UEFI boot manager – tools)
  • systemd-cryptsetup (Provides cryptsetup, integritysetup and veritysetup utilities)
  • systemd-netlogd (journal message forwarder)
  • systemd-repart (Provides the systemd-repart and systemd-sbsign utilities)
  • systemd-standalone-shutdown (standalone shutdown binary for use in exitrds)
  • systemd-ukify (tool to build Unified Kernel Images)

Linux Kernel

The trixie release ships a Linux kernel based on latest longterm version 6.12. As usual there are lots of changes in the kernel area, including better hardware support, and this might warrant a separate blog entry. To highlight some changes with Debian trixie:

See Kernelnewbies.org for further changes between kernel versions.

Configuration management

For puppet users, Debian provides the puppet-agent (v8.10.0), puppetserver (v8.7.0) and puppetdb (v8.4.1) packages. Puppet's upstream does not provide packages for trixie yet. Given how long it took them for Debian bookworm, and with their recent Plans for Open Source Puppet in 2025, it's unclear when (and whether at all) we might get something. As a result of upstream's behavior, the OpenVox project evolved, and they already provide Debian 13/trixie support (https://apt.voxpupuli.org/openvox8-release-debian13.deb). FYI: the AIO puppet-agent package for bookworm (v7.34.0-1bookworm) so far works fine for me on Debian/trixie. Be aware that due to the apt-key removal you need a recent version of the puppetlabs-apt module for usage with trixie. The puppetlabs-ntp module isn't yet ready for trixie (regarding ntp/ntpsec), in case you depend on that.
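
To give OpenVox a try on trixie, the release package from the URL above sets up the repository (just a sketch; which agent/server packages to install afterwards is whatever OpenVox ships):

% wget https://apt.voxpupuli.org/openvox8-release-debian13.deb
# dpkg -i openvox8-release-debian13.deb
# apt update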

ansible is available and made it with version 2.19 into trixie.

Prometheus stack

Prometheus server was updated from v2.42.0 to v2.53, and all the exporters that got shipped with bookworm are still around (in more recent versions of course). Trixie gained some new exporters:

Virtualization

docker (v26.1.5), ganeti (v3.1.0), libvirt (v11.3.0, be aware of significant changes to libvirt packaging), lxc (v6.0.4), podman (v5.4.2), openstack (see openstack-team on Salsa), qemu/kvm (v10.0.2), xen (v4.20.0) are all still around.

Proxmox already announced their PVE 9.0 BETA, being based on trixie and providing 6.14.8-1 kernel, QEMU 10.0.2, LXC 6.0.4, OpenZFS 2.3.3.

Vagrant is available in version 2.3.7, but Vagrant upstream does not provide packages for trixie yet. Given that HashiCorp adopted the BSL, the future of vagrant in Debian is unclear.

If you’re relying on VirtualBox, be aware that upstream doesn’t provide packages for trixie, yet. VirtualBox is available from Debian/unstable (version 7.1.12-dfsg-1 as of 2025-07-20), but not shipped with stable release since quite some time (due to lack of cooperation from upstream on security support for older releases, see #794466). Be aware that starting with Linux kernel 6.12, KVM initializes virtualization on module loading by default. This prevents VirtualBox VMs from starting. In order to avoid this, either add “kvm.enable_virt_at_load=0” parameter into kernel command line or unload the corresponding kvm_intel / kvm_amd module.

If you want to use Vagrant with VirtualBox on trixie, be aware that Debian’s vagrant package as present in trixie doesn’t support the VirtualBox package version 7.1 as present in Debian/unstable (manually patching vagrant’s meta.rb and rebuilding the package without Breaks: virtualbox (>= 7.1) is known to be working).

util-linux

There are plenty of new options available in the tools provided by util-linux:

  • blkdiscard: new option --quiet (suppress warning messages)
  • blockdev: new options --getdiskseq (get disk sequence number) + --getzonesz (get zone size)
  • dmesg: new option --kmsg-file … (use the file in kmsg format), new --time-format … argument ‘raw’
  • findmnt: new options --list-columns (list the available columns), --dfi (imitate the output of df(1) with -i option), --id … (filter by mount node ID), --filter … (apply display filter) + --uniq-id … (filter by mount node 64-bit ID)
  • fstrim: new option --types … (limit the set of filesystem types)
  • hardlink: new options --respect-dir (directory names have to be identical), --exclude-subtree … (regular expression to exclude directories), --prioritize-trees (files found in the earliest specified top-level directory have higher priority), --list-duplicates (print every group of duplicate files), --mount (stay within the same filesystem) + --zero (delimit output with NULs instead of newlines)
  • ipcmk: new options --posix-shmem … (create POSIX shared memory segment of size), --posix-semaphore … (create POSIX semaphore), --posix-mqueue … (create POSIX message queue) + --name … (name of the POSIX resource)
  • ipcrm: new options --posix-shmem … (remove POSIX shared memory segment by name), --posix-mqueue … (remove POSIX message queue by name), --posix-semaphore (remove POSIX semaphore by name) + --all=… (remove all in specified category)
  • lsblk: new options --ct-filter … (restrict the next counter), --ct … (define a custom counter), --highlight … (colorize lines matching the expression), --list-columns (list the available columns), --nvme (output info about NVMe devices), --properties-by … (methods used to gather data), --filter … (print only lines matching the expression), --virtio (output info about virtio devices)
  • lscpu: new options --raw (use raw output format (for -e, -p and -C)) + --hierarchic=… (use subsections in summary (auto, never, always))
  • lsipc: new options --posix-shmems (POSIX shared memory segments), --posix-mqueues (POSIX message queues), --posix-semaphores (POSIX semaphores), --name … (POSIX resource identified by name)
  • lslocks: new option --list-columns (list the available columns)
  • lslogins: new option --lastlog2 … (set an alternate path for lastlog2)
  • lsns: new options --persistent (namespaces without processes), --filter … (apply display filter) + --list-columns (list the available columns)
  • mkswap: new options --endianness=… (specify the endianness to use (native, little or big)), --offset … (specify the offset in the device), --size … (specify the size of a swap file in bytes) + --file (create a swap file)
  • namei: new option --context (print any security context of each file)
  • nsenter: new options --net-socket … (enter socket’s network namespace), --user-parent (enter parent user namespace), --keep-caps (retain capabilities granted in user namespaces), --env (inherit environment variables from target process) + --join-cgroup (join the cgroup of the target process)
  • runuser: new option --no-pty (do not create a new pseudo-terminal)
  • setarch: new option --show=… (show current or specific personality and exit)
  • setpriv: new options --ptracer … (allow ptracing from the given process), --landlock-access … (add Landlock access), --landlock-rule … (add Landlock rule) + --seccomp-filter … (load seccomp filter from file)
  • su: new option --no-pty (do not create a new pseudo-terminal)
  • unshare: new option --load-interp … (load binfmt definition in the namespace)
  • whereis: new option -g (interpret name as glob (pathnames pattern))
  • wipefs: new optional argument for the --backup=… option to specify the backup directory (instead of the default $HOME)
  • zramctl: new option --algorithm-params … (algorithm parameters to use)

Now no longer present in util-linux as of trixie:

  • addpart (tell the kernel about the existence of a specified partition): use partx instead
  • delpart (tell the kernel to forget about a specified partition): use partx instead
  • last (show a listing of last logged in users, binary got moved to wtmpdb), lastb (show a listing of last logged in users), mesg (control write access of other users to your terminal), utmpdump (dump UTMP and WTMP files in raw format): see Debian release notes for details

The following binaries got moved from util-linux to the util-linux-extra package:

  • ctrlaltdel (set the function of the Ctrl-Alt-Del combination)
  • mkfs.bfs (make an SCO bfs filesystem)
  • fsck.cramfs + mkfs.cramfs (compressed ROM file system)
  • fsck.minix + mkfs.minix (Minix filesystem)
  • resizepart (tell the kernel about the new size of a partition)

And the util-linux-extra package also provides new tools:

  • bits: convert bit masks from/to various formats
  • blkpr: manage persistent reservations on a device
  • coresched: manage core scheduling cookies for tasks
  • enosys: utility to make syscalls fail with ENOSYS
  • exch: atomically exchanges paths between two files
  • fadvise: utility to use the posix_fadvise system call
  • pipesz: set or examine pipe buffer sizes and optionally execute command.
  • waitpid: utility to wait for arbitrary processes

OpenSSH

OpenSSH was updated from v9.2p1 to 10.0p1-5, so if you’re interested in all the changes, check out the release notes between those versions (9.3, 9.4, 9.5, 9.6, 9.7, 9.8, 9.9 + 10.0).

Let’s highlight some notable behavior changes in Debian:

There are some notable new features:

  • allow forwarding Unix Domain sockets via ssh -W
  • OpenSSH penalty behavior: visit my separate blog post for more details
  • add support for reading ED25519 private keys in PEM PKCS8 format. Previously only the OpenSSH private key format was supported.
  • the new hybrid post-quantum algorithm mlkem768x25519-sha256 (based on the FIPS 203 Module-Lattice Key Encapsulation mechanism (ML-KEM) combined with X25519 ECDH) is now used by default for key agreement. This algorithm is considered to be safe against attack by quantum computers, is guaranteed to be no less strong than the popular curve25519-sha256 algorithm, has been standardised by NIST and is considerably faster than the previous default.
  • the ssh-agent will now delete all loaded keys when signaled with SIGUSR1. This allows deletion of keys without having access to $SSH_AUTH_SOCK.
  • support systemd-style socket activation in ssh-agent using the LISTEN_PID/LISTEN_FDS mechanism. Activated when these environment variables are set, the agent is started with the -d or -D option and no socket path is set.
  • add a sshd -G option that parses and prints the effective configuration without attempting to load private keys and perform other checks. (This allows usage of the option before keys have been generated and for configuration evaluation and verification by unprivileged users.)
  • add support for configuration tags to ssh(1). This adds a ssh_config(5) “Tag” directive and corresponding “Match tag” predicate that may be used to select blocks of configuration (see the sketch after this list).
  • add a “match localnetwork” predicate. This allows matching on the addresses of available network interfaces and may be used to vary the effective client configuration based on network location.
  • add a %j token that expands to the configured ProxyJump hostname
  • add support for “Match sessiontype” to ssh_config. Allows matching on the type of session initially requested, either “shell” for interactive sessions, “exec” for command execution sessions, “subsystem” for subsystem requests, such as sftp, or “none” for transport/forwarding-only sessions.
  • allow glob(3) patterns to be used in sshd_config AuthorizedKeysFile and AuthorizedPrincipalsFile directives.
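
A minimal ssh_config sketch of the Tag / Match tag combination mentioned above (host name and settings are just placeholders):

Host jumphost.example.com
    Tag internal

Match tag internal
    ForwardAgent no
    ServerAliveInterval 60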

Thanks to everyone involved in the release, looking forward to trixie, and happy upgrading!
Let’s continue with working towards Debian/forky. :)

by mika at July 20, 2025 04:32 PM

June 24, 2025

Evgeni Golov


Using LXCFS together with Podman

JP was puzzled that using podman run --memory=2G … would not result in the 2G limit being visible inside the container. While we were able to identify this as a visualization problem — tools like free(1) only look at /proc/meminfo and that is not virtualized inside a container, you'd have to look at /sys/fs/cgroup/memory.max and friends instead — I couldn't leave it at that. And then I remembered there is actually something that can provide a virtual (cgroup-aware) /proc for containers: LXCFS!

But does it work with Podman?! I always used it with LXC, but there is technically no reason why it wouldn't work with a different container solution — cgroups are cgroups after all.

As we all know: there is only one way to find out!

Take a fresh Debian 12 VM, install podman and verify things behave as expected:

user@debian12:~$ podman run -ti --rm --memory=2G centos:stream9
bash-5.1# grep MemTotal /proc/meminfo
MemTotal:        6067396 kB
bash-5.1# cat /sys/fs/cgroup/memory.max
2147483648

And after installing (and starting) lxcfs, we can use the virtual /proc/meminfo it generates by bind-mounting it into the container (LXC does that part automatically for us):

user@debian12:~$ podman run -ti --rm --memory=2G --mount=type=bind,source=/var/lib/lxcfs/proc/meminfo,destination=/proc/meminfo centos:stream9
bash-5.1# grep MemTotal /proc/meminfo
MemTotal:        2097152 kB
bash-5.1# cat /sys/fs/cgroup/memory.max
2147483648

The same of course works with all the other proc entries lxcfs provides (cpuinfo, diskstats, loadavg, meminfo, slabinfo, stat, swaps, and uptime here), just bind-mount them.
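
For example (purely illustrative), mounting a few of them at once:

user@debian12:~$ podman run -ti --rm --memory=2G \
    --mount=type=bind,source=/var/lib/lxcfs/proc/meminfo,destination=/proc/meminfo \
    --mount=type=bind,source=/var/lib/lxcfs/proc/uptime,destination=/proc/uptime \
    --mount=type=bind,source=/var/lib/lxcfs/proc/loadavg,destination=/proc/loadavg \
    centos:stream9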

And yes, free(1) now works too!

bash-5.1# free -m
               total        used        free      shared  buff/cache   available
Mem:            2048           3        1976           0          67        2044
Swap:              0           0           0

Just don't blindly mount the whole /var/lib/lxcfs/proc over the container's /proc. It did work (as in: "bash and free didn't crash") for me, but with /proc/$PID etc missing, I bet things will go south pretty quickly.

by evgeni at June 24, 2025 07:46 PM