
Fedora People


Open source trends for 2026

Posted by Ben Cotton on 2026-01-01 12:00:00 UTC

A new year is here and that means it’s time to clean up the confetti from last night’s party. It’s also time for my third annual trend prediction post. After a solid 2024, I did okay-ish in 2025. I am not feeling particularly confident about this year’s predictions in large part because so much depends on the direction of broader economic and political trends, which are far outside my expertise. But this makes a good segue into the first trend on my radar.

Geopolitics fracturing global cooperation

The US government proved to be an unreliable partner in a lot of ways in 2025, and I see little reason that will change in 2026. With US policy capricious, driven by retribution and self-interest, Europe has become more wary of American tech firms. This has led to efforts to develop a Europe-based tech stack and a greater focus on where data is stored (and what laws govern access to that data). Open source projects are somewhat insulated from this, but there are two areas where we'll see effects.

First, US-based conferences will have an increasingly domestic attendee list. With anecdotes of foreign visitors held in detention for weeks and visa issuance contingent on not saying mean things about the president, it's little wonder that fewer people are willing to risk travel to the United States. Global projects, like the Python Software Foundation, that have their flagship conference in the US may face financial challenges from a drop in attendance. The European editions of Linux Foundation events will become the main editions (arguably that's already true). FOSDEM will strain the limits of its venue even more than it already does.

The other effect we may see is a sudden prohibition against individuals or nations participating in projects. Projects with US-based backers — whether company or foundation — already have to comply with US sanctions, the Entity List, and other restrictions. It’s conceivable that a nation, company, or individual who upsets the White House will find themselves subject to some kind of ban which could force projects to restrict participation. Whether these restrictions apply to open source is unclear, but I would expect organizations with something to lose to take a cautious approach. Projects with no legal entity will likely take a “how will you stop me?” approach.

A thaw in the job market

This section feels the most precarious, since it depends almost entirely on the macroeconomic conditions and what happens with generative AI. With the latter, I think my prediction of a leveling off in 2025 was just too soon. In 2026, we’ll see more recognition of where generative AI is actually useful and where it isn’t. Companies won’t fire thousands of workers to replace them with AI agents only to discover that the AI is…suboptimal. That’s not to say that AI will disappear, but the approach will be more measured.

With interest rates dropping, companies may feel more confident in trying to grow instead of cutting costs. Supply chain issues and Cyber Resilience Act (CRA) requirements (more on those in a moment) will drive a need for open source expertise specifically. Anecdotally, I’ve seen what seems to be an upward trend in hiring for open source roles in the last part of 2025 and I think that continues in 2026. It won’t be the huge growth we saw in the early part of the decade, but it will be better than the terrible job market we’ve seen in the last year or two.

Supply chain and compliance

Oh look: “software supply chain” is on my trends list. That’s never happened before, except for every time. It won’t stop being an issue in 2026, though. Volunteer maintainers will continue to say “I am not a supplier” as companies make ever-increasing demands for information and support. September 11 marks the first significant CRA deadline: companies must have a mechanism for reporting actively exploited vulnerabilities. This means they’ll be pushing on their upstream projects for that information.

Although open source projects don’t have obligations under the CRA, they’ll have an increased request burden to deal with. Unfortunately, I think this means that developing a process for dealing with the request deluge may distract from efforts to improve the project’s security. It may also drive more maintainers to give up.

This post’s featured photo by Jason Coudriet on Unsplash.

The post Open source trends for 2026 appeared first on Duck Alignment Academy.


johnnycanencrypt 0.18.0 released

Posted by Kushal Das on 2025-12-31 14:15:04 UTC

A few weeks ago I released Johnnycanencrypt 0.18.0. It is a Python module written in Rust which provides OpenPGP functionality, including the ability to use Yubikey 4/5 devices as smartcards.

This release was in response to CVE-2025-67897 against sequoia-pgp. It forced me to update to the latest 2.1.0 release of Sequoia.


Reviewing open source trends in 2025

Posted by Ben Cotton on 2025-12-31 12:00:00 UTC

It’s the end of the year, which I suppose means it’s time for the now-traditional look at my predictions.

Software supply chain

I was right that this would continue to be an area of interest in the open source world, just as it was in 2024 (and — spoiler alert! — it will be in 2026). I wrote “In 2025, I expect to see a marked split between “hobbyist” and “professional” open source projects.” That’s probably not as true as my ego would like, but I do think we’re trending in that direction, in part due to the inequality I address in the next section.

It’s true that supply chain issues have not stopped in 2025. The Shai-Hulud worm spread through the NPM ecosystem in September (with a similar attack in November). Debian images on Docker Hub contained the XZ backdoor more than a year after it was discovered. Phishing attacks spoofing PyPI in July resulted in the compromise of four accounts, allowing the attackers to upload malicious packages.

But the news wasn’t all bad. GitHub rolled out an immutable releases feature that protects against attackers re-tagging previously good releases with malicious code. crates.io (Rust), npm (Node.js), and NuGet (.NET) added support for trusted publishing. New tools and frameworks came out to help maintainers better understand and address risks, including the OSPS Baseline and Kusari Inspector (disclosure: I am a Kusari employee).

Inequity

This section had two parts. First, I wrote:

I think we’ll see a growing separation between the haves and have-nots. The projects that enterprises see as critical will get funding and effort. The other projects, whether or not they’re actually important to enterprises, will be left to the increasingly scarce efforts of volunteers.

This held true. Two big examples are the temporary pause of the Kubernetes External Secrets Operator project and Nick Wellnhofer resigning as the sole maintainer of libxml2. Both were due to a maintenance burden that exceeded the maintainers’ capacity. Josh Bressers found that almost half of npm packages with a million-plus monthly downloads have a single maintainer. That figure is likely generalizable across ecosystems, so incidents like these are no surprise. Some in the FFmpeg community took public issue with Google, suggesting the giant should provide more support or stop sending bugs.

The other part of this prediction concerned events:

Events where companies can make sales will do well. Community events will suffer from a lack of sponsorship and attendance due to lack of travel funding. I think we’ll start to see a shift from global events toward regional events in the community space.

I was wrong here, as far as I can tell. US-based events struggled somewhat, in part due to geopolitics, but European events seem to be doing well. Larger community events, from what I gathered, have done well, although the finances are not what they used to be. Smaller events, though, are struggling. DevOpsDays Detroit, as one example, didn’t accept my talk proposal because the conference was shuttered instead. Many of the local and regional events rely on a small number of committed people to keep going. Just like in software projects, these people are getting burnt out.

The general idea of the prediction seems to be holding up well enough. I’ve heard the phrase “K-shaped economy” approximately a million times in financial news this year. The open source world has seen it, too.

Artificial intelligence

I’ll admit to being wrong on this one, too:

If the bubble doesn’t burst this year, the hype at least slows way down…it will lead to a leveling off in AI-generated code and bug report “contributions” as vendors start charging more money for services.

I maintain that my wrongness is more a matter of timing than anything. Generative AI continues to lose money, but the price increases are not here. While some have expressed concerns about the circular dealing in the sector, it seems like the fallout has mostly been contained to Oracle (whose share price is down over 40% since an early-September high) for the time being. The hype may be slowing, but it’s a little hard to say that with certainty just yet. There’s definitely no indication of a slowdown in AI-generated bug reports in curl’s data.

Bar chart of HackerOne reports to the curl project by year. The “likely AI slop” count increased from 2 in 2023 to 6 in 2024 to 37 in 2025.

Vibe check

I called my 2025 predictions “a little bleak,” and I think the vibe was spot on. One thing that didn’t fit well into any of the prediction categories was the attempt by Synadia to un-contribute NATS to the CNCF. Thankfully, that went nowhere. Unfortunately, so did the careers of many in the industry as job cuts continued at companies large and small.

If 2025 was bleak for you, rest assured that it is almost over. I truly appreciate everyone who has read these posts, bought a copy of Program Management for Open Source Projects, subscribed to the DAA newsletter, or in any other way made my year a little less bleak with your presence. Here’s hoping for an improved 2026!

This post’s featured photo by Agence Olloweb on Unsplash.

The post Reviewing open source trends in 2025 appeared first on Duck Alignment Academy.


Introducing the new bootc kickstart command in Anaconda

Posted by Fedora Magazine on 2025-12-31 08:00:00 UTC

The Anaconda installer now supports installation of bootc-based bootable container images using the new bootc command. Anaconda has long supported several types of payload to populate the root file system during installation: RPM packages (likely the most widely used option), the tarball images you may know from Fedora Workstation, and ostree and rpm-ostree containers. The newest addition to the family, from a couple of weeks ago, is bootc-based bootable containers.

The difference is under the hood

We have added a new bootc kickstart command to Anaconda to support this feature. From the user’s perspective it is very similar to the ostreecontainer command that has been present for some time. The main difference, however, is under the hood.

In both cases, one of the most important setup steps for a deployment is creating the requested partitioning. When the partitioning is ready, the ostreecontainer command makes Anaconda deploy the image onto the root filesystem using the ostree tool, and then executes the bootupctl tool to install and set up the bootloader. By contrast, with bootc containers installed using the bootc kickstart command, both the filesystem population and the bootloader configuration are performed via the bootc tool. This makes the deployment process even more integrated.

The content of the container images used for installation is another difference. The bootc-enabled images are somewhat more versatile: apart from installation using Anaconda, they provide a self-installing option via the bootc command executed from within a running container.
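As an illustration of that self-install path (a sketch only, not something this article walks through): the target disk, image tag, and exact mounts below are assumptions and vary by bootc version and environment.

```shell
# Sketch: self-installing a bootc image onto a blank disk from within
# the running container. /dev/vda is an assumed target for a test VM;
# this WIPES that disk, so only run it in a disposable environment.
sudo podman run --rm --privileged --pid=host \
    -v /dev:/dev \
    -v /var/lib/containers:/var/lib/containers \
    --security-opt label=type:unconfined_t \
    quay.io/fedora/fedora-bootc:rawhide \
    bootc install to-disk /dev/vda
```

Here bootc itself partitions the disk, populates the filesystem, and sets up the bootloader, the same integrated path the kickstart command uses.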

On the other hand, both options provide you with a way to install an immutable system based on a container image. This may be useful for use cases where regular installation from RPM packages is not desired, whether due to its lower deployment speed or the inherent mutability of the resulting system.

A simple how-to

In practice, you’d likely use a custom container with pre-configured services, user accounts, and other configuration bits and pieces. However, if you want to quickly try out how the new Anaconda feature works, you just need to follow a few simple steps, starting with a Fedora Rawhide ISO.

First, take an existing container from a registry and create a minimal kickstart file instructing Anaconda to install the bootable container image:

# Beware that this kickstart file will wipe out the existing disk partitions.
# Use it only in an experimental/isolated environment or edit it accordingly!
zerombr
clearpart --all --initlabel
autopart

lang en_US.UTF-8
keyboard us

timezone America/New_York --utc
rootpw changeme

bootc --source-imgref=registry:quay.io/fedora/fedora-bootc:rawhide

As a next step, place the kickstart file in some reachable location (e.g. an HTTP server) and point Anaconda to it by appending the following to the kernel command line:

inst.ks=http://url/to/kickstart 

Now start the installation.

Alternatively, you may use the mkksiso tool provided by the lorax package to embed the kickstart file into the installation ISO.
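For example (a sketch; the filenames are placeholders, and the exact option syntax may differ between lorax versions):

```shell
# Sketch: embed bootc.ks into the installation ISO. The resulting ISO
# boots straight into the kickstart-driven installation, with no need
# to edit the kernel command line by hand.
mkksiso --ks bootc.ks Fedora-Rawhide-netinst.iso Fedora-Rawhide-bootc.iso
```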

When installation and reboot are complete, you are presented with an immutable Fedora Rawhide system running on your hardware (or VM), installed from a bootable container image.

Is there anything more about bootc in Anaconda?

You may ask if this option is limited to Fedora Rawhide container images. Technically speaking, you can use the Fedora Rawhide installation ISO to install, for instance, a CentOS Stream container image:

bootc --source-imgref=registry:quay.io/centos-bootc/centos-bootc:stream10

Nevertheless, keep in mind that for now Anaconda will handle such a case as a Fedora installation, because it runs from a Fedora Rawhide boot ISO. This may result in unforeseen problems, such as getting btrfs-based partitioning that CentOS Stream won’t be able to boot from. This particular issue is easily overcome by explicitly telling Anaconda to use a different partitioning type, e.g. autopart --fstype=xfs. We would like to handle container images based on the contained operating system or flavour in the future; for now, one just needs to take the current behavior into account when using the bootc command.

There are a couple more known limitations in Anaconda or bootc at this point in time. These include lack of support for partitioning setups spanning multiple disks, for arbitrary mount points, and for installation from authenticated registries. We hope it won’t take long to solve those shortcomings. There are also plans to make the new bootc command available on the RHEL-10 platform.

We invite you to try out this new feature and share your experience, ideas or comments with the Installer team. We are looking forward to hearing from you in a thread on discussion.fedoraproject.org!


Arsenal Math font

Posted by Rajeesh KV on 2025-12-30 12:30:22 UTC

After my talk about the TeX syntax-highlighting font at the TUG2025 conference, the then vice-president of the TeX Users Group, Boris Veytsman, approached me with a proposal to develop a Math counterpart for the beautiful Arsenal font designed by Andrij Shevchenko.

What followed was a deep dive into The TeXbook to learn about math font parameters, the OpenType Math specification, and related documentation & resources. Fortunately, FontForge has really good support for editing Math tables, and the base font used (KpMath-Sans by Daniel Flipo) already had all the critical parameters set (they needed only slight adjustments). I started the development of Arsenal Math by integrating the glyphs for Latin letters, Arabic numerals, some symbols, etc., with proper scaling and stem-thickness corrections, for the regular, bold, italic, and bold-italic variants, plus math calligraphic letters. In addition, a lot of math kerning (known as ‘cut-ins’ in OpenType parlance) was added to improve the spacing.

Fig. 1: Arsenal Math specimen, contributed by CVR.

Being an OpenType font, Arsenal Math requires XeTeX, LuaTeX, or another Unicode math typesetting engine (e.g. MS Word). Boris did testing and provided much feedback, and Vaishnavy Murthy graciously reviewed the glyph changes I made. The CTAN admins were quite helpful in getting the font accepted into the repository. A style file and a fontspec file are supplied with the fonts to make usage easy. The sources are available at the RIT fonts repository.
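As a minimal sketch of what unicode-math usage might look like (the font name here is an assumption; check the CTAN package documentation for the exact font and style-file names):

```latex
% Sketch only: compile with XeLaTeX or LuaLaTeX.
% "Arsenal Math" is an assumed family name.
\documentclass{article}
\usepackage{unicode-math}
\setmathfont{Arsenal Math}
\begin{document}
The Gaussian integral:
$\displaystyle\int_{-\infty}^{\infty} e^{-x^2}\,dx = \sqrt{\pi}$
\end{document}
```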

Boris also donated funding for the project, but he had already paid me many times over by mailing me The TeXbook autographed by Donald Knuth, so I requested the LaTeX devfund team to use the funds for another project. Karl Berry suggested writing an article about the development process; it is published in issue 46:3 of the TUGboat journal and has a lot more technical details.

Fig. 2: The TeXbook autographed by Don Knuth for me.

Learning the internals of math typesetting and contributing to the TeX ecosystem has been fulfilling spare-time work for me in 2025. Many thanks to all those involved!


Why I left Kodi for Wholphin

Posted by Guillaume Kulakowski on 2025-12-29 18:05:33 UTC

I have been using Kodi since 2012, back when the project was still called XBMC. My very first installation ran on a Raspberry Pi under Raspbian. My media was stored on my PC, then on my NAS, and shared via Samba (SMB for short). A simple, effective setup. With the arrival of Smart TVs, I eventually replaced the […]

The post Why I left Kodi for Wholphin appeared first on Guillaume Kulakowski's blog.


a look back at 2025

Posted by Kevin Fenzi on 2025-12-27 19:59:14 UTC
Scrye into the crystal ball

2025 is almost gone now, so I thought I would look back on the year for some interesting high and low lights. I'm sure to miss some things here, or miss mentioning someone who did a bunch of work on something, but I will do my best.

Datacenter moves

There was one gigantic Datacenter move, and one smaller one. Overall I am glad we moved and we are in a much better situation for doing so, but I really hope we can avoid moving more in 2026. It takes so much planning and energy and work and there is so much else to do that has to be put on hold.

As a reminder, some of those advantages:

  • We have lots of new machines. They are newer/faster/better in almost every way.

  • dual 25G links on all the new machines (and 10G on old ones)

  • all nvme storage in new machines

  • space to expand for things like riscv builders and such.

  • ipv6 support finally

So much of my year was datacenter moves. :(

Scrapers and outages

This year was sadly also the year of the scrapers. They are hammering pretty much everyone these days and it's quite sad. We did deploy anubis, and it really helped a lot against most of the scrapers, but there's another group of them it wasn't able to stop. For those, before the holidays I just tossed enough resources at our stuff that they can scrape and we can just not care. I'm not sure what next year will look like on this front, so we will just keep doing the best we can. I also adjusted caching some, which really helped (all the src static files are cached now).

There were also a number of outages toward the end of the year, which I really am not happy about. There were a number of reasons for them:

  • A tcp_timeout issue which turned out to be a firewall bug that was super hard to track down.

  • The scrapers causing outages.

  • I myself caused a few outages with a switching loop of power10 lpars. ;(

  • Various smaller outages.

We have discussed a bunch of things to reduce outages and prevent them, so hopefully next year will be happier on that front.

Power10

Speaking of power10, that was quite the saga. We got the machines, but the way we wanted to configure them didn't end up working so we had to move to a much more complex setup using a virtual hardware management console appliance and lpars and sr-iov and more. It's pretty complex, but we did get everything working in the end.

Fedora releases

We got Fedora 42 and 43 released this year, pretty much on time too. 43 sadly seems to be a release with a lot of small issues, and I'm not sure why. Between the postgresql upgrades, dovecot completely changing its config format, nftables not being enabled, and matrix-synapse not being available, my home upgrades were not as smooth as usual.

Home Assistant

This was definitely a fun year of poking at Home Assistant, adding sensors, and tweaking things. It's a nice fun hobby and gives you real data to solve real problems around your house. Also, all your data is your own and stored locally. This has really turned my perception of IoT things around. Before, I would deliberately not connect things; now I connect them if they can be made to talk only to my Home Assistant.

I added a weather station, a rain gauge, a new zigbee controller, a bunch of smart power plugs and temp sensors, and much more. I expect more on the way in 2026. Just when I think I have automated or instrumented everything, a new thing comes along.

AI

I'm still in the 'there are in fact use cases for LLMs' group, but I am pretty weary of all the people wedging them in where they are not in fact a good use case, or insisting you find _some_ use case no matter what.

I've found some of them useful for some things. I think this will continue to grow over time, but I think we need to be measured.

On the other side, I don't get the outrage over things like calibre adding some support for LLMs. It's there, but it does exactly nothing by default. You have to set it up with your desired provider before it will do anything. It really doesn't affect you if you don't want to use it.

laptop

I have stuck with my Lenovo Slim 7x as my main laptop for most of this year. The main thing I still miss is a working webcam (but I have an external one, so it's not the end of the world). I'm looking forward to the X2 laptops coming out in the next few months. I really hope Qualcomm has learned from the X1 generation and the X2 will go better, but time will tell.

Fedmsg finally retired

We finally managed to turn off our old message bus. It took far too long, but I think it went pretty smoothly overall in the end. Thanks so much to Michal Konečný for doing everything around this.

nagios (soon to be) retired

Thanks to a bunch of work from Greg Sutcliffe, our zabbix setup is pretty much done for phase one, and we have officially announced that nagios is going to be retired.

iptables finally retired

We moved all our iptables setup over to nftables. There were a few hiccups, but overall it went pretty smoothly. Thanks to James Antill for all the work on this.

Blogs

I wrote a bunch more blog posts this year, mostly weekly recaps, but also a few Home Assistant review posts. I find the recaps enjoyable to do, although I don't really get much in the way of comments on them, so no idea if anyone else cares about them. I'll probably continue them in 2026, but I might switch to doing them Sunday night or Friday so I don't have to think about them Saturday morning.

The world

The world was very depressing in 2025 in general, and that's speaking as someone living life on the easiest difficulty level ( https://whatever.scalzi.com/2012/05/15/straight-white-male-the-lowest-difficulty-setting-there-is/ ). I really hope sanity, science, and kindness can make some recovery in 2026.

I'll probably see about doing a 'looking forward to 2026' post soon.

comments? additions? reactions?

As always, comment on mastodon: https://fosstodon.org/@nirik/115794282914919436


A font with built-in TeX syntax highlighting

Posted by Rajeesh KV on 2025-12-27 07:47:38 UTC

At the TUG2025 conference, I presented a talk about the development of a new colour font which does automatic syntax highlighting of TeX documents/snippets. The idea was floated by CVR and was inspired by prior art: an HTML/CSS syntax-highlighting font by Heikki Lotvonen.

Syntax highlighting is usually achieved by specialized grammar files or packages in desktop applications, code editors, the Web, and typesetting systems like TeX. Some of these tools are heavy (e.g. prism.js or the pygmentize package). A lightweight alternative is a font that uses recent OpenType technologies to do the syntax highlighting itself. I developed such a font for highlighting TeX code snippets.

Fig. 1: OpenType colour font doing syntax highlighting of TeX document.

There are some novelties in the developed font:

  1. It supports both COLRv0 and COLRv1 colour format specifications (separate fonts, but generated from the same source).
  2. Supports plain TeX, LaTeX2e and LaTeX3 macro names.
  3. A novel set of OpenType shaping rules for TeX syntax colouring.

The base font used is M+ Code Latin by Coji Morishita. The details of the development, use cases, and limitations can be found in issue 46:2 of the TUGboat journal. The binary fonts and sources are available at the RIT fonts repository.
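As a hedged sketch of how such a colour font might be loaded (the font name below is an assumption; consult the RIT fonts repository for the real names), LuaLaTeX with the HarfBuzz renderer is one setup known to handle COLR colour fonts:

```latex
% Sketch only: "MplusCodeTeX" is an assumed family name.
% Compile with LuaLaTeX; Renderer=HarfBuzz enables COLR colour support.
\documentclass{article}
\usepackage{fontspec}
\newfontfamily\hlfont{MplusCodeTeX}[Renderer=HarfBuzz]
\begin{document}
% Typeset a TeX snippet in the highlighting font; the font's own
% OpenType shaping rules colour the macros, with no grammar package.
{\hlfont \detokenize{\section{Intro} Some \emph{text}.}}
\end{document}
```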


Browser wars

Posted by Vedran Miletić on 2025-12-26 12:51:09 UTC


Photo source: Ray Hennessy (@rayhennessy) | Unsplash


Last week in Rijeka we held Science festival 2015. This is the (hopefully not unlucky) 13th instance of the festival that started in 2003. Popular science events were organized in 18 cities in Croatia.

I was invited to give a popular lecture at the University departments' open day, which is part of the festival. This is the second year in a row that I have been invited to give a popular lecture at the open day. In 2014 I talked about The Perfect Storm in information technology, caused by the fall of the economy during the 2008-2012 Great Recession and the simultaneous rise of low-cost, high-value open-source solutions. Open source completely changed the landscape of information technology in just a few years.


The follow-up

Posted by Vedran Miletić on 2025-12-26 12:51:09 UTC


Photo source: Andre Benz (@trapnation) | Unsplash


When Linkin Park released their second album Meteora, they had a quote on their site that went along the lines of

Musicians have their entire lives to come up with a debut album, and only a very short time afterward to release a follow-up.


Open-source magic all around the world

Posted by Vedran Miletić on 2025-12-26 12:51:09 UTC


Photo source: Almos Bechtold (@almosbech) | Unsplash


Last week brought us two interesting events related to open-source movement: 2015 Red Hat Summit (June 23-26, Boston, MA) and Skeptics in the pub (June 26, Rijeka, Croatia).


Joys and pains of interdisciplinary research

Posted by Vedran Miletić on 2025-12-26 12:51:09 UTC


Photo source: Trnava University (@trnavskauni) | Unsplash


In 2012 University of Rijeka became NVIDIA GPU Education Center (back then it was called CUDA Teaching Center). For non-techies: NVIDIA is a company producing graphical processors (GPUs), the computer chips that draw 3D graphics in games and the effects in modern movies. In the last couple of years, NVIDIA and other manufacturers allowed the usage of GPUs for general computations, so one can use them to do really fast multiplication of large matrices, finding paths in graphs, and other mathematical operations.


What is the price of open-source fear, uncertainty, and doubt?

Posted by Vedran Miletić on 2025-12-26 12:51:09 UTC


Photo source: j (@janicetea) | Unsplash


The Journal of Physical Chemistry Letters (JPCL), published by American Chemical Society, recently put out two Viewpoints discussing open-source software:

  1. Open Source and Open Data Should Be Standard Practices by J. Daniel Gezelter, and
  2. What Is the Price of Open-Source Software? by Anna I. Krylov, John M. Herbert, Filipp Furche, Martin Head-Gordon, Peter J. Knowles, Roland Lindh, Frederick R. Manby, Peter Pulay, Chris-Kriton Skylaris, and Hans-Joachim Werner.

Viewpoints are not detailed reviews of the topic, but instead, present the author's view on the state-of-the-art of a particular field.

The first of two articles stands for open source and open data. The article describes Quantum Chemical Program Exchange (QCPE), which was used in the 1980s and 1990s for the exchange of quantum chemistry codes between researchers and is roughly equivalent to the modern-day GitHub. The second of two articles questions the open-source software development practice, advocating the usage and development of proprietary software. I will dissect and counter some of the key points from the second article below.


On having leverage and using it for pushing open-source software adoption

Posted by Vedran Miletić on 2025-12-26 12:51:09 UTC


Photo source: Alina Grubnyak (@alinnnaaaa) | Unsplash


Back in late August and early September, I attended 4th CP2K Tutorial organized by CECAM in Zürich. I had the pleasure of meeting Joost VandeVondele's Nanoscale Simulations group at ETHZ and working with them on improving CP2K. It was both fun and productive; we overhauled the wiki homepage and introduced acronyms page, among other things. During a coffee break, there was a discussion on the JPCL viewpoint that speaks against open-source quantum chemistry software, which I countered in the previous blog post.

But there is a story from the workshop which somehow remained untold, and I wanted to tell it at some point. One of the attendants, Valérie Vaissier, told me how she used proprietary quantum chemistry software during her Ph.D.; if I recall correctly, it was Gaussian. Eventually, she decided to learn CP2K and made the switch. She liked CP2K better than the proprietary software package because it is available free of charge, the reported bugs get fixed quicker, and the group of developers behind it is very enthusiastic about their work and open to outsiders who want to join the development.


AMD and the open-source community are writing history

Posted by Vedran Miletić on 2025-12-26 12:51:09 UTC


Photo source: Andrew Dawes (@andrewdawes) | Unsplash


Over the last few years, AMD has slowly been walking the path towards having fully open source drivers on Linux. AMD did not walk alone; they got help from Red Hat, SUSE, and probably others. Phoronix also mentions PathScale, but I have been told on the Freenode channel #radeon that this is not the case, and I found no trace of their involvement.

AMD finally publicly unveiled the GPUOpen initiative on the 15th of December 2015. The story was covered on AnandTech, Maximum PC, Ars Technica, Softpedia, and others. For the open-source community that follows the development of the Linux graphics and computing stack, this announcement is hardly surprising: Alex Deucher and Jammy Zhou presented plans regarding amdgpu at XDC2015 in September 2015. Regardless, the public announcement in mainstream media proves that AMD is serious about GPUOpen.

I believe GPUOpen is the best chance we will get in this decade to open up the driver and software stacks in the graphics and computing industry. I will outline the reasons for my optimism below. As for the history behind open-source drivers for ATi/AMD GPUs, I suggest the well-written reminiscence on Phoronix.


I am still not buying the new-open-source-friendly-Microsoft narrative

Posted by Vedran Miletić on 2025-12-26 12:51:09 UTC

black framed window

Photo source: Patrick Bellot (@pbellot) | Unsplash


This week Microsoft released Computational Network Toolkit (CNTK) on GitHub, after open sourcing Edge's JavaScript engine last month and a whole bunch of projects before that.

Even though open sourcing a bunch of their software is a very nice move from Microsoft, I am still not convinced that they have changed to the core. I am sure there are parts of the company that believe free and open source is the way to go, but it still looks like a change only at the periphery.

None of the projects they have open-sourced so far are the core of their business. Their latest version of Windows is no more friendly to alternative operating systems than any version before it, and one could argue it is even less friendly due to tighter Secure Boot restrictions. Using Office still basically requires you to use Microsoft's formats and, in turn, accept their vendor lock-in.

Put simply, I think all the projects Microsoft has opened up so far are a nice start, but they still have a long way to go to gain respect from the open-source community. What follows are three steps Microsoft could take in that direction.


Free to know: Open access and open source

Posted by Vedran Miletić on 2025-12-26 12:51:09 UTC

yellow and black come in we're open sign

Photo source: Álvaro Serrano (@alvaroserrano) | Unsplash


Reposted from Free to Know: Open access & open source, originally posted by STEMI education on Medium.

Q&A with Vedran Miletić

In June 2014, Elon Musk opened up all Tesla patents. In a blog post announcing this, he wrote that patents "serve merely to stifle progress, entrench the positions of giant corporations and enrich those in the legal profession, rather than the actual inventors." In other words, he joined those who believe that free knowledge is the prerequisite for a great society -- that it is the vibrancy of the educated masses that can make us capable of handling the strange problems our world is made of.

The movements that promote and cultivate this vibrancy are probably most frequently associated with the terms "open access" and "open source". To learn more about them, we Q&A-ed Vedran Miletić, the Rocker of Science: researcher, developer, and teacher, currently working in computational chemistry, and a free and open source software contributor and activist. You can read more of his thoughts on free software and related themes on his great blog, Nudged Elastic Band. We hope you will join him, us, and Elon Musk in promoting free knowledge, cooperation, and education.


The academic and the free software community ideals

Posted by Vedran Miletić on 2025-12-26 12:51:09 UTC

book lot on black wooden shelf

Photo source: Giammarco Boscaro (@giamboscaro) | Unsplash


Today I vaguely remembered an occasion in 2006 or 2007 when someone from academia, doing something with Java and Unicode, posted to a mailing list related to free and open-source software about a tool he was developing. What made it interesting was that the tool was open source, yet he had filed a patent on the algorithm.


Celebrating Graphics and Compute Freedom Day

Posted by Vedran Miletić on 2025-12-26 12:51:09 UTC

stack of white and brown ceramic plates

Photo source: Elena Mozhvilo (@miracleday) | Unsplash


Hobbyists, activists, geeks, designers, and engineers have always tinkered with technologies for their own purposes (in early personal computing, for example), and social activists have long advocated the power of giving tools to people. An open hardware movement driven by these restless innovators is creating ingenious versions of all sorts of technologies and freely sharing the know-how through the Internet and, more recently, social media. Open-source software, and more recently hardware, is also encroaching upon centers of manufacturing and can empower serious business opportunities and projects.

The free software movement is cited as both an inspiration and a model for open hardware. Free software practices have transformed our culture by making it easier for people to become involved in producing things from magazines to music, movies to games, communities to services. With advances in digital fabrication making it easier to manipulate materials, some now anticipate an analogous opening up of manufacturing to mass participation.


Enabling HTTP/2, HTTPS, and going HTTPS-only on inf2

Posted by Vedran Miletić on 2025-12-26 12:51:09 UTC

an old padlock on a wooden door

Photo source: Arkadiusz Gąsiorowski (@ambuscade) | Unsplash


Inf2 is a web server at the University of Rijeka Department of Informatics, hosting Sphinx-produced static HTML course materials (mirrored elsewhere), some big files, a WordPress instance (archived elsewhere), and an internal instance of Moodle.

HTTPS had been enabled on inf2 for a long time, albeit using a self-signed certificate. However, with Let's Encrypt entering public beta, we decided to join the movement to HTTPS.
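Going HTTPS-only boils down to serving a trusted certificate and redirecting all plain-HTTP traffic. A minimal sketch for an Apache virtual host (the server software and the host name here are my assumptions for illustration, not a record of inf2's actual configuration):

```apache
<VirtualHost *:80>
    # Placeholder host name; substitute the server's real name.
    ServerName inf2.example.org
    # Permanently redirect every plain-HTTP request to its HTTPS counterpart.
    Redirect permanent / https://inf2.example.org/
</VirtualHost>
```

With a redirect like this in place, the site is reachable only over HTTPS even for visitors following old http:// links.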


Why we use reStructuredText and Sphinx static site generator for maintaining teaching materials

Posted by Vedran Miletić on 2025-12-26 12:51:09 UTC

open book lot

Photo source: Patrick Tomasso (@impatrickt) | Unsplash


Yesterday I was asked by Edvin Močibob, a friend and a former student teaching assistant of mine, the following question:

You seem to be using Sphinx for your teaching materials, right? As far as I can see, it doesn't have an online WYSIWYG editor. I would be interested in comparison of your solution with e.g. MediaWiki.

While the advantages and the disadvantages of static site generators, when compared to content management systems, have been written about and discussed already, I will outline our reasons for the choice of Sphinx below. Many of the points have probably already been presented elsewhere.


Fly away, little bird

Posted by Vedran Miletić on 2025-12-26 12:51:09 UTC

macro-photography blue, brown, and white sparrow on branch

Photo source: Vincent van Zalinge (@vincentvanzalinge) | Unsplash


The last day of July happened to be the day that Domagoj Margan, a former student teaching assistant and a great friend of mine, set up his own DigitalOcean droplet running a web server and serving his professional website on his own domain domargan.net. For a few years, I was helping him by providing space on the server I owned and maintained, and I was always glad to do so. Let me explain why.


Mirroring free and open-source software matters

Posted by Vedran Miletić on 2025-12-26 12:51:09 UTC

gold and silver steel wall decor

Photo source: Tuva Mathilde Løland (@tuvaloland) | Unsplash


Post theme song: Mirror mirror by Blind Guardian

A mirror is a local copy of a website, used to speed up access for users in the area geographically close to it and to reduce the load on the original website. Content distribution networks (CDNs), a newer concept perhaps more familiar to younger readers, serve the same purpose but do it in a way that's transparent to the user: with a mirror, the user sees explicitly which mirror is being used because the domain differs from the original website's, while with a CDN the domain stays the same and the DNS resolution (invisible to the user) selects a different server.

Free and open-source software has been distributed via (FTP) mirrors, usually residing at universities, basically since its inception. The story of Linux mentions a directory on ftp.funet.fi (FUNET is the Finnish University and Research Network) where Linus Torvalds uploaded the sources, which was soon mirrored by Ted Ts'o on MIT's FTP server. The GNU Project's history contains an analogous process of making local copies of the software for faster downloading, which was especially important in the times of pre-broadband Internet, and it continues today.


Markdown vs reStructuredText for teaching materials

Posted by Vedran Miletić on 2025-12-26 12:51:09 UTC

blue wooden door surrounded by book covered wall

Photo source: Eugenio Mazzone (@eugi1492) | Unsplash


Back in the summer of 2017, I wrote an article explaining why we used Sphinx and reStructuredText to produce teaching materials rather than a wiki. In addition to recommending Sphinx as the solution to use, the article was general praise for generating static HTML files from Markdown or reStructuredText.

This summer I converted the teaching materials from reStructuredText to Markdown. Unfortunately, the automated conversion using Pandoc didn't quite produce the result I wanted, so I ended up cooking up my own Python script that converted the specific dialect of reStructuredText used for writing the contents of the group website and fixed a myriad of inconsistencies in writing style that had accumulated over the years.
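To give a flavour of the kind of translation such a script performs, here is a toy sketch written for illustration (not the actual conversion script); it handles only underlined section titles and inline literals/roles:

```python
import re

def rst_to_md(text: str) -> str:
    """Translate a tiny subset of reStructuredText to Markdown:
    '='/'-' underlined titles, ``literals``, and :role:`targets`."""
    lines = text.splitlines()
    out = []
    i = 0
    while i < len(lines):
        line = lines[i]
        nxt = lines[i + 1] if i + 1 < len(lines) else ""
        # A title is a non-empty line underlined by '=' (or '-')
        # at least as long as the title itself.
        if line.strip() and nxt and set(nxt) <= {"="} and len(nxt) >= len(line):
            out.append("# " + line)
            i += 2
            continue
        if line.strip() and nxt and set(nxt) <= {"-"} and len(nxt) >= len(line):
            out.append("## " + line)
            i += 2
            continue
        line = re.sub(r"``(.+?)``", r"`\1`", line)     # ``literal`` -> `literal`
        line = re.sub(r":\w+:`(.+?)`", r"`\1`", line)  # :role:`x`   -> `x`
        out.append(line)
        i += 1
    return "\n".join(out)
```

A real converter of course has to handle directives, tables, and cross-references too, which is where the inconsistencies bite.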


Don't use RAR

Posted by Vedran Miletić on 2025-12-26 12:51:09 UTC

a large white tank

Photo source: Tim Mossholder (@ctimmossholder) | Unsplash


I sometimes joke with my TA Milan Petrović that his usage of RAR does not imply that he will be driving a rari. After all, he is not Devito rapping^Wsinging Uh 😤. Jokes aside, if you search for "should I use RAR" or a similar phrase in your favorite search engine, you'll see articles like Don't Use ZIP, Use RAR (2007) and Why RAR Is Better Than ZIP & The Best RAR Software Available (2011).


Should I do a Ph.D.?

Posted by Vedran Miletić on 2025-12-26 12:51:09 UTC

a bike is parked in front of a building

Photo source: Santeri Liukkonen (@iamsanteri) | Unsplash


A tough question, and one that has been asked and answered over and over. The simplest answer is, of course, that it depends on many factors.

As I started blogging at the end of my journey as a doctoral student, the topic of how I selected the field and ultimately decided to enroll in postgraduate studies never really came up. In the following paragraphs, I will give a personal perspective on my Ph.D. endeavor. Just like other perspectives from doctors of the not-that-kind variety, it is specific to the person in the situation, but parts of it might apply more broadly.


Alumni Meeting 2023 at HITS and the reminiscence of the postdoc years

Posted by Vedran Miletić on 2025-12-26 12:51:09 UTC

a fountain in the middle of a town square

Photo source: Jahanzeb Ahsan (@jahan_photobox) | Unsplash


This month we had the Alumni Meeting 2023 at the Heidelberg Institute for Theoretical Studies, or HITS for short. I was very glad to attend this whole-day event and reconnect with my former colleagues as well as researchers currently working in the area of computational biochemistry at HITS. After all, this is the place and the institution where I worked for more than half of my time as a postdoc, where I started regularly contributing code to the GROMACS molecular dynamics simulator, and where I published some of my best papers.


My perspective after two years as a research and teaching assistant at FIDIT

Posted by Vedran Miletić on 2025-12-26 12:51:09 UTC

human statues near white building

Photo source: Darran Shen (@darranshen) | Unsplash


My employment as a research and teaching assistant at the Faculty of Informatics and Digital Technologies (FIDIT for short), University of Rijeka (UniRi), ended last month with the expiration of my time-limited contract. This marked almost two full years spent at the institution, and I think this is a good time to look back at everything that happened during that time. Inspired by recent posts by the PI of my group, I decided to write my perspective on a period that I hope is just the beginning of my academic career.


How quickly should you fix vulnerabilities?

Posted by Ben Cotton on 2025-12-24 12:00:00 UTC

As quickly as possible, right? Maybe not. In a well-reasoned post on the PivotNine blog, Justin Warren wrote:

I would encourage more unsupported maintainers to do just that. Stop rushing to fix bugs for people without a support contract. Patch security flaws at a more leisurely pace unless someone is willing to pay for greater urgency. Take your time and enjoy your hobby more, since that is what unpaid software maintenance is. … Businesses and governments need to get used to the idea that you are not part of their “software supply chain” unless they are a paying customer.

This is part of the broader “I am not a supplier” sentiment that has built in the last few years. I am sympathetic to this position. Open source software is, after all, almost always provided as-is with no guarantees of quality or fitness for purpose. The only obligations on maintainers are the ones they choose to accept for themselves.

I am generally all for maintainers choosing the “fuck you, pay me” route if that’s what they want. At the same time, it feels like a bad approach to addressing vulnerabilities. Security flaws don’t just affect the big companies who should invest in their upstreams more than they do (on average). They also affect the users of the big company software, as well as other hobbyists and random users.

These vulnerabilities can have significant impact on the finances, data, and privacy of real people. To know about a vulnerability and choose to let it sit until someone comes along with money feels anti-social to me. Whether hobbyist or professional, developers try to write good code. It’s reasonable to not hold hobbyists (even if they’re professionals during the workday) to a high standard. But the social contract that no one has ever explicitly stated says that we should try not to write vulnerabilities and that we should make a good-faith effort to fix them when they’re raised.

Warren seems to advocate breaking this social contract, since no one agreed to it in the first place. The idea that parties must knowingly accept the terms of a contract is a pretty foundational pillar of contract law. Social contracts, of course, are not legally binding, but the implication is there.

And this brings me to my slight disagreement with what Warren wrote. The wrong time to say you’re not going to be fixing a vulnerability without a support contract is when someone submits the report. The right time is in your README or other project documentation, right up front. This — in theory, although there will always be people who won’t see it — gives people the information they need to make an informed decision about whether or not your project is appropriate for their use. People will assume you’ll try to fix vulnerabilities quickly, unless you clearly set an expectation otherwise.

This post’s featured photo by FlyD on Unsplash.

The post How quickly should you fix vulnerabilities? appeared first on Duck Alignment Academy.


Integrating the NOUS E10 ZigBee Smart CO₂, Temperature & Humidity Detector with ZHA

Posted by Brian (bex) Exelbierd on 2025-12-23 14:30:00 UTC

My friend Tomáš recently gave me a NOUS E10 ZigBee Smart CO₂, Temperature & Humidity Detector. It is a compact ZigBee device that, on paper, integrates with Home Assistant. However, as is often the case with smart home hardware, the reality is slightly more nuanced. Home Assistant offers two primary ways to integrate Zigbee devices: Zigbee2MQTT and ZHA (Zigbee Home Automation). I started out with ZHA when I first installed Home Assistant. There is no way, as far as I know, to migrate between the two without re-adding all of your devices, so, 25 (now 26) devices in, I am on team ZHA. While the NOUS E10 was already fully supported in Zigbee2MQTT, it was not functional in ZHA.

NOUS E10 ZigBee Smart CO₂, Temperature & Humidity Detector Home Assistant CO₂, Temperature & Humidity screenshot
Capturing the photo and the screenshot simultaneously without breathing on the sensor is hard; glossy surfaces are tricky to photograph, so slight value drift between the sensor and UI is expected.

The Tuya Rebrand Rabbit Hole

I did some reading, and between what the folks who did the Zigbee2MQTT integration had figured out and the fact that the device is really a rebranded Tuya product, writing the integration seemed achievable with my level of skill and general coding/technical experience. Tuya is a massive OEM (Original Equipment Manufacturer) that produces a vast array of smart home devices sold under hundreds of different brand names, so while the devices vary, the overall concept is fairly well understood.

The challenge with Tuya devices is that they often use a proprietary Zigbee cluster to communicate data. Instead of using the standard Zigbee clusters for temperature or humidity, they wrap everything in their own protocol. To make these devices work in ZHA, you need a “quirk.” A quirk is essentially a Python-based translator that tells ZHA how to interpret these non-standard messages and map them to the standard Home Assistant entities.

Developing the Quirk with AI

Because Tuya devices and the quirk concept are fairly well understood, this is a great use case for an LLM. I did some ideating with Google Gemini and plugged in all the values I could find from the Zigbee2MQTT source code and the device's own signature. Using an LLM for this was surprisingly effective - it helped me scaffold the Python classes and identify which Tuya data points mapped to which sensors. Honestly, all it got wrong was guessing that values were reported as deci-units (value times 10, i.e. 21.1 reported as 211) when, for this specific device, values are reported directly.

However, I hit multiple challenges, centered on this device never seeming to emit debug data. Usually, when you are developing a quirk, you can watch the Home Assistant logs to see the raw Zigbee frames coming in. You look for the "magic numbers" that change when you breathe on the sensor (CO₂). For some reason, the NOUS E10 was incredibly quiet. It took a lot of trial and error - and several restarts of the Zigbee network - to finally see the data flowing correctly. Eventually, I had a functional quirk that correctly reported CO₂ levels, temperature, and humidity.

Contributing to the Ecosystem

If you write a quirk, you’re encouraged to contribute it to the Zigpy ZHA Device Handlers Repository. This is the central hub for all ZHA quirks, and once a quirk is merged there, it eventually makes its way into a standard Home Assistant release. I worked on a basic test case, and cleaned up my code to match the code standards and general concepts used in similar quirks.

I have submitted this pull request and I’m waiting for feedback. I’m expecting to need to make corrections as this is my first time doing this kind of a contribution. While I have validated that the code works in my own environment, “working” and “ready for contribution” are not always the same thing. There are coding standards, naming conventions, and architectural patterns that the maintainers (rightly) insist upon to keep the codebase maintainable.

How to Use the Quirk Today

If you happen to have one of these devices and you use ZHA in Home Assistant, you can use the quirk right now without waiting on the merge. To do this, save the Python code in a custom quirks directory in your Home Assistant install; typically, you would use /config/zha_quirks.

After you do that, update your configuration.yaml to add the quirk directory as follows:

zha:
  custom_quirks_path: /config/zha_quirks/

Then restart Home Assistant, pair your device, and, as a different friend would say, “Robert is your father’s brother.” It is a small but satisfying victory to take a non-working device and make it fully functional through a bit of code and community knowledge and advice.


NanoKVM: I like it

Posted by Jonathan McDowell on 2025-12-22 17:38:00 UTC

I bought a NanoKVM. I’d heard some of the stories about how terrible it was beforehand, and some I didn’t learn about until afterwards, but at £52, including VAT + P&P, that seemed like an excellent bargain for something I was planning to use in my home network environment.

Let’s cover the bad press first. apalrd did a video, entitled NanoKVM: The S stands for Security (Armen Barsegyan has a write up recommending a PiKVM instead that lists the objections raised in the video). Matej Kovačič wrote an article about the hidden microphone on a Chinese NanoKVM. Various other places have picked up both of these and still seem to be running with them, 10 months later.

Next, let me explain where I’m coming from here. I have over 2 decades of experience with terrible out-of-band access devices. I still wince when I think of the Sun Opteron servers that shipped with an iLOM that needed a 32-bit Windows browser in order to access it (IIRC some 32 bit binary JNI blob). It was a 64 bit x86 server from a company who, at the time, still had a major non-Windows OS. Sheesh. I do not assume these devices are fit for exposure to the public internet, even if they come from “reputable” vendors. Add into that the fact the NanoKVM is very much based on a development board (the LicheeRV Nano), and I felt I knew what I was getting into here.

And, as a TL;DR, I am perfectly happy with my purchase. Sipeed have actually dealt with a bunch of apalrd’s concerns (GitHub ticket), which I consider to be an impressive level of support for this price point. Equally the microphone is explained by the fact this is a £52 device based on a development board. You’re giving it USB + HDMI access to a host on your network, if you’re worried about the microphone then you’re concentrating on the wrong bit here.

I started out by hooking the NanoKVM up to my Raspberry Pi classic, which I use as a serial console / network boot tool for working on random bits of hardware. That meant the NanoKVM had no access to the outside world (the Pi is not configured to route, or NAT, for the test network interface), and I could observe what went on. As it happens, you can do an SSH port forward of port 80 with this sort of setup and it all works fine - no need for the NanoKVM to have any external access, and it copes happily with being accessed as http://localhost:8000/ (though you do need to choose MJPEG as the video mode; more forwarding or enabling HTTPS is needed for an H.264 WebRTC session).
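A forward like that is a single OpenSSH invocation; the host names and the NanoKVM address below are placeholders, not my actual setup:

```shell
# Forward local port 8000, via the Pi, to port 80 on the NanoKVM
# (192.168.7.2 and pi.local are placeholder addresses).
ssh -L 8000:192.168.7.2:80 user@pi.local
# Then browse to http://localhost:8000/ and select the MJPEG video mode.
```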

IPv6 is enabled in the kernel. My test setup doesn't have router advertisements configured, but I could connect to the web application over the v6 link-local address that came up automatically.

My device reports:

Image version:              v1.4.1
Application version:        2.2.9

That’s recent, but the GitHub releases page has 2.3.0 listed as more recent.

Out of the box it’s listening on TCP port 80. SSH is not running, but there’s a toggle to turn it on and the web interface offers a web based shell (with no extra authentication over the normal login). On first use I was asked to set a username + password. Default access, as you’d expect from port 80, is HTTP, but there’s a toggle to enable HTTPS. It generates a self signed certificate - for me it had the CN localhost but that might have been due to my use of port forwarding. Enabling HTTPS does not disable HTTP, but HTTP just redirects to the HTTPS URL.

As others have discussed it does a bunch of DNS lookups, primarily for NTP servers but also for cdn.sipeed.com. The DNS servers are hard coded:

~ # cat /etc/resolv.conf
nameserver 192.168.0.1
nameserver 8.8.4.4
nameserver 8.8.8.8
nameserver 114.114.114.114
nameserver 119.29.29.29
nameserver 223.5.5.5

This is actually restored on boot from /boot/resolv.conf, so if you want changes to persist you can just edit that file. NTP is configured with a standard set of pool.ntp.org services in /etc/ntp.conf (this does not get restored on reboot, so can just be edited in place). I had dnsmasq on the Pi setup to hand out DNS + NTP servers, but both were ignored (though actually udhcpc does write the DNS details to /etc/resolv.conf.dhcp).

My assumption is that the lookup of cdn.sipeed.com is for firmware updates (as I bought the NanoKVM cube it came fully installed, so there was no need for a .so download to make things work); when working DNS was provided, I witnessed attempts to connect over HTTPS. I've not bothered digging further into this. I did go grab the latest.zip being served from the URL, which turned out to be v2.2.9, matching what I have installed, not the latest on GitHub.

I note there’s an iptables setup (with nftables underneath) that’s not fully realised - it seems to be trying to allow inbound HTTP + WebRTC, as well as outbound SSH, but everything is default accept so none of it gets hit. Setting up a default deny outbound and tweaking a little should provide a bit more reassurance it’s not going to try and connect out somewhere it shouldn’t.

It looks like updates focus solely on the KVM application, so I wanted to take a look at the underlying OS. This is buildroot based:

~ # cat /etc/os-release
NAME=Buildroot
VERSION=-g98d17d2c0-dirty
ID=buildroot
VERSION_ID=2023.11.2
PRETTY_NAME="Buildroot 2023.11.2"

The kernel reports itself as 5.10.4-tag-. Somewhat ancient, but actually an LTS kernel. Except we’re now up to 5.10.247, so it obviously hasn’t been updated in some time.

TBH, this is what I expect (and fear) from embedded devices. They end up with some ancient base OS revision and a kernel with a bunch of hacks that mean it’s not easily updated. I get that the margins on this stuff are tiny, but I do wish folk would spend more time upstreaming. Or at least updating to the latest LTS point release for their kernel.

The SSH client/daemon is full-fat OpenSSH:

~ # sshd -V
OpenSSH_9.6p1, OpenSSL 3.1.4 24 Oct 2023

There are a number of CVEs fixed in later OpenSSL 3.1 versions, though at present nothing that looks too concerning from the server side. Yes, the image has tcpdump + aircrack installed. I’m a little surprised at aircrack (the device has no WiFi and even though I know there’s a variant that does, it’s not a standard debug tool the way tcpdump is), but there’s a copy of GNU Chess in there too, so it’s obvious this is just a kitchen-sink image. FWIW it looks like the buildroot config is here.

Sadly the UART that I believe the bootloader/kernel are talking to is not exposed externally - the UART pin headers are for UART1 + 2, and I’d have to open up the device to get to UART0. I’ve not yet done this (but doing so would also allow access to the SD card, which would make trying to compile + test my own kernel easier).

In terms of actual functionality it did what I’d expect. 1080p HDMI capture was fine. I’d have gone for a lower resolution, but I think that would have required tweaking on the client side. It looks like the 2.3.0 release allows EDID tweaking, so I might have to investigate that. The keyboard defaults to a US layout, which caused some problems with the | symbol until I reconfigured the target machine not to expect a GB layout.

There’s also the potential to share out images via USB. I copied a Debian trixie netinst image to /data on the NanoKVM and was able to select it in the web interface and have it appear on the target machine easily. There’s also the option to fetch direct from a URL in the web interface, but I was still testing without routable network access, so didn’t try that. There’s plenty of room for images:

~ # df -h
Filesystem                Size      Used Available Use% Mounted on
/dev/mmcblk0p2            7.6G    823.3M      6.4G  11% /
devtmpfs                 77.7M         0     77.7M   0% /dev
tmpfs                    79.0M         0     79.0M   0% /dev/shm
tmpfs                    79.0M     30.2M     48.8M  38% /tmp
tmpfs                    79.0M    124.0K     78.9M   0% /run
/dev/mmcblk0p1           16.0M     11.5M      4.5M  72% /boot
/dev/mmcblk0p3           22.2G    160.0K     22.2G   0% /data

The NanoKVM also appears as an RNDIS USB network device, with udhcpd running on the interface. IP forwarding is not enabled, and there’s no masquerading rules setup, so this doesn’t give the target host access to the “management” LAN by default. I guess it could be useful for copying things over to the target host, as a more flexible approach than a virtual disk image.

One thing to note is this makes for a bunch of devices over the composite USB interface. There are 3 HID devices (keyboard, absolute mouse, relative mouse), the RNDIS interface, and the USB mass storage. I had a few occasions where the keyboard input got stuck after I’d been playing about with big data copies over the network and using the USB mass storage emulation. There is a HID-only mode (no network/mass storage) to try and help with this, and a restart of the NanoKVM generally brought things back, but something to watch out for. Again I see that the 2.3.0 application update mentions resetting the USB hardware on a HID reset, which might well help.

As I stated at the start, I’m happy with this purchase. Would I leave it exposed to the internet without suitable firewalling? No, but then I wouldn’t do so for any KVM. I wanted a lightweight KVM suitable for use in my home network, something unlikely to see heavy use but that would save me hooking up an actual monitor + keyboard when things were misbehaving. So far everything I’ve seen says I’ve got my money’s worth from it.


EU OS: Which Linux Distribution fits Europe best?

Posted by Robert Riemann on 2025-12-21 10:23:00 UTC

Logos of Fedora, openSUSE, Ubuntu and Debian surrounding the Logo of EU OS

Please note that views expressed in this post (and this blog in general) are only my own and not of my employer.

Dear opensuse planet, dear fedora planet, dear fediverse, dear colleagues,

Soon, the EU OS project celebrates its first anniversary. I want to seize the occasion (and your attention) before the Christmas holidays to share my personal view on the choice of the Linux distribution as a basis for EU OS. EU OS is so far a community-led initiative to offer a template solution for Linux on the Desktop deployments in corporate settings, specifically in the public sector.

Only a few weeks ago, the EU OS collaborators together tested a fully functional Proof of Concept (PoC) with automatic provisioning of laptops and central user management. The documentation of this setup is 90% complete and should be finalized in the coming weeks. This PoC relies on Fedora, which is the one aspect that has attracted the most attention and criticism so far.

As a reminder, EU OS has no funding so far and only a few contributors. Please check out the project GitLab, join the Matrix channel, or send me an email to help or to discuss funding. In my view, EU OS can currently accomplish its mission best by bringing communities and organisations together to use their existing resources more strategically than they do now.

In 2025, digital sovereignty was much discussed in Europe. I had many opportunities to discuss EU OS with IT experts in the public sector. I am hopeful that eventually one or several European projects will emerge to bring Linux to the (public sector) desktop more systematically than is currently the case.

I also learnt more about public sector requirements for Linux on the server, in VMs, and in Kubernetes. If the goal of EU OS is to leverage synergies with Cloud Native Computing technologies, those requirements must also be considered by the Linux distribution powering EU OS.

Linux Use in the Public Sector

Let us map out briefly the obvious use cases of Linux in a public sector organisation, such as a ministry, a court, or the administration of a city/region. The focus is on uses that are directly managed1.

  • Linux on the desktop (rarely the case today, but that’s the ambition of the EU OS project)
  • Linux in a Virtual Machine (VM), a Docker/Podman Container, and for EU OS in a Flatpak Runtime
  • Linux on the server (including for Virtualisation/Kubernetes nodes)

Criteria for a Linux Distribution in the Public Sector

Given the exchanges I had so far, I would propose the following high-level criteria for the selection of a Linux Distribution:

Battle-tested Robustness
The public sector is very conservative, and any change to the status quo requires clear, unique benefits.
Cloud Native Technology
The public sector has so far reacted very positively to the promises of bootc technology (bootc in the EU OS FAQ). It is a very recent technology, but its benefits for managing Linux laptop fleets with teams that already know container technology are recognised.
Enterprise Support
The public sector wants commercial support for a free Linux with an easy upgrade path to a managed enterprise Linux. Existing companies, new companies, the public sector, or non-profit foundations could deliver such an enterprise Linux. I expect that a mix with clear task allocations would work best in practice.
Enterprise Tools
The public sector needs tools for provisioning, configuration, and monitoring of servers, VMs, Docker/Podman containers, and laptops, as well as for the management of users. Those tools must scale up to some tens of thousands of laptops/users. The EU OS project proposes to rely on FreeIPA for identity management and Foreman for Linux laptop fleet management.
Third-Party Support
The public sector wants its existing, possibly proprietary or legacy, third-party hardware2 or appliances (think SAP) to remain supported. This one is tricky, because each third party decides what it supports. Of course, any third-party vendor lock-in should eventually be avoided, but this takes time, and some vendor lock-ins are less problematic than others.
Supply Chain Security and Consistency
The public sector must secure its supply chains. This generally becomes easier with fewer chains to secure. A Linux desktop based on Fedora and KDE requires about 1200 Source RPM packages3. A Linux server based on Fedora requires about 300 Source RPM packages. The flatpak runtime org.fedoraproject.KDE6Platform/x86_64/f43 requires about 100 Source RPM packages. I assume the numbers for Ubuntu/Debian/openSUSE are similar. So instead of securing all supply chains independently (possibly through outsourcing), the public sector can choose one, secure it, and cover several use cases with the same packages at no or significantly less extra effort. Updates, testing, and certifications of those packages would then benefit all use cases.
Accreditation and Certifications
Some public sector organisations require a high level of compliance with cyber security, data protection, accessibility, records keeping, interoperability, etc. The more often a (similar) Linux distribution has passed such tests, the easier it should get.
Forward-looking Sovereignty and Sustainability
The public sector wants to work with stable vendors in stable jurisdictions that minimise the likelihood of interference with the execution of its public mandate4. Companies can change ownership and jurisdiction. While not a bullet-proof solution, a multi-stakeholder non-profit organisation can offer more stability and alignment with public sector mandates. Such an organisation must then receive the resources to execute its mandate continuously over several years or decades. With several independent stakeholders, public tenders become more competitive and as such more meaningful (compare with the procurement of Microsoft Windows 11).
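The Source RPM counts cited under "Supply Chain Security and Consistency" come from de-duplicating the source package of every installed binary package (see footnote 3). A minimal sketch of the same pipeline, with the `rpm` output simulated by a made-up sample list so it can be run anywhere:

```shell
# On a real system the count is obtained with:
#   rpm -qa --qf '%{SOURCERPM}\n' | sort -u | wc -l
# Below, the rpm output is simulated with made-up package names;
# duplicates collapse to one line each, so the count here is 3.
printf '%s\n' \
    'bash-5.2.26-1.fc43.src.rpm' \
    'coreutils-9.4-2.fc43.src.rpm' \
    'bash-5.2.26-1.fc43.src.rpm' \
    'kernel-6.11.3-1.fc43.src.rpm' \
    | sort -u | wc -l
```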

Geographical Dimension

I have the impression that some governments would like to choose a Linux distribution that (a) local IT companies can support and (b) creates jobs in their country or region. In my view, the only chance to offer such an advantage while maintaining synergies across borders is to find a Linux distribution supported by IT companies active in many countries and regions.

While the project EU OS has EU in its name, I would be in favour of not stopping at EU borders when looking for partners and synergies. It has already inspired MxOS in Mexico. Then, think of international organisations like the OSCE, the Council of Europe, the OECD, CERN, the UN (WHO, UNICEF, WFP, ICJ), the ICC, the Red Cross, Doctors Without Borders (MSF), etc. Also think of NATO. Those organisations are active in the EU, in Europe, and in most other countries of the world. So if EU OS can rely on and stimulate investments in a Linux distribution that is truly an international project, international organisations would benefit likewise while upholding their mandated neutrality.

Diversity of Linux Distributions for EU OS

Douglas DeMaio (working for SUSE, doing openSUSE community management) argues in his blog post from March 2025: Freedom Does Not Come From One Vendor. The motto of the European Union is ‘United in Diversity’. Diversity and decentralisation make systems more robust. However, given the small scale of on-going pilots, I find that as of December 2025 it is better to unify projects and choose one single Linux distribution to start with and progress quickly. EU OS proposes to achieve immutability with bootable containers (bootc). This is a cross-distribution technology under the umbrella of the Cloud Native Computing Foundation that makes switching Linux distributions easier later on. Other Linux distributions could meanwhile implement bootc, FreeIPA, and Foreman support, and set up or grow their multi-stakeholder non-profit organisations, possibly with support from public funds they have applied for.
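As a rough illustration of why bootc eases switching later: the host is managed like a container image, so rebasing to a different (even differently based) image is a single command. This is only a command sketch; the registry and image names below are placeholders, not actual EU OS artifacts:

```shell
# Build an OS image from a Containerfile that starts FROM a
# bootc-enabled base image, then push it to a registry.
# Registry and image names are hypothetical.
podman build -t registry.example.eu/eu-os/workstation:latest .
podman push registry.example.eu/eu-os/workstation:latest

# On a managed laptop (as root), rebase the running system to the
# new image; the switch takes effect at the next reboot.
bootc switch registry.example.eu/eu-os/workstation:latest
```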

The extent to which more Linux distributions in the public sector indeed provide more security requires an in-depth study. For example, consider the xz backdoor from 2024 (CVE-2024-3094).

| Vendor | Status |
| --- | --- |
| Red Hat / Fedora / AlmaLinux | Fedora Rawhide and 40 beta affected; RHEL and AlmaLinux unaffected |
| SUSE / openSUSE | openSUSE Tumbleweed and MicroOS affected; SUSE Linux Enterprise unaffected |
| Debian | Debian testing and unstable affected |
| Kali Linux | affected |
| Arch Linux | unaffected |
| NixOS | affected and unaffected; slow to roll out updates |

Early adopters would have caught the vulnerability independently of the Linux distribution (except on Arch Linux 👏). Larger distributions can possibly afford more testing. Older distributions with older build systems are more likely to offer tarball support (essential for the xz backdoor), since they predate git's ubiquity. To avert such supply chain attacks, consistently implementing supply-chain hardening (e.g. SLSA Level 3) is certainly important, and diversification of distributions or supply chains initially makes this harder.

Comparison of Linux Distributions

In the comparison here, I focus on Debian/Ubuntu, Fedora/RHEL/AlmaLinux, and openSUSE/SUSE, because they are beyond doubt battle-tested, with many users already in corporate environments. They are also commonly supported by third parties. Note that I don’t list criteria for which all distributions perform equally.

| Criterion | Debian/Ubuntu | Fedora/RHEL | AlmaLinux | openSUSE/SUSE |
| --- | --- | --- | --- | --- |
| bootc | 🟨 not yet | ✅ yes | ✅ yes | 🟨 not yet (but Kalpa unstable and Aeon RC with snapshots) |
| Flatpak app support | ✅ yes | ✅ yes | ✅ yes | ✅ yes |
| Flatpak apps from own sources | ❌ no | ✅ yes | 🟨 not yet, but adaptable from Fedora | ❌ no |
| FreeIPA server for user management | ✅ yes | ✅ yes | ✅ yes | ❌ no⁵ |
| Proxmox server for VMs | ✅ yes | ❌ no | ❌ no | ❌ no |
| Foreman server for laptop management | ✅ yes | ✅ yes | ✅ yes | ❌ no⁶ |
| Non-profit foundation | ✅ yes (US 🇺🇸 and France 🇫🇷) | ❌ no | ✅ yes (US 🇺🇸) | ❌ no |
| 3rd-party download mirrors in the EU | ca. 150⁷ | ca. 100⁸ | ca. 200⁹ | ca. 50¹⁰ |
| 3rd-party download mirrors worldwide | ca. 350⁷ | ca. 325⁸ | ca. 350⁹ | ca. 125¹⁰ |
| GitHub topics per distribution name | ca. 17150 (6344+10803) | ca. 2500 (1,943+478) | ca. 150 | ca. 550 (362+172) |
| World-wide adopted (based on mirrors) | ✅ yes | ✅ yes | ✅ yes | 🟨 not as much |
| Annual revenue of backing company | ca. 300m$ | ca. 4500m$ | only donations | ca. 700m$ |
| Employees world-wide of backing company | ca. 1k | ca. 20k¹¹ | < 500 (including CloudLinux) | ca. 2.5k¹² |
| Employees in Europe of backing company | ≤ 1k | ca. 4.5k | < 500 (including CloudLinux) | ≤ 2.5k |
| SAP-supported¹³ 🙄 | ❌ no | ✅ yes | 🟨 RHEL-compatible | ✅ yes |

I find it extremely difficult to find reliable public numbers on employees, revenues, and donations. I list here what I was able to find on the Internet, because I think it helps to quantify the popularity of the enterprise Linux distributions in corporate settings. Numbers for Debian are not very expressive due to the many companies other than Canonical (Ubuntu) involved. Let me know if you find better numbers.

Beyond company figures, the number of search queries (Google Trends) also gives an impression of the popularity of Linux distributions. Below is the graph for the community Linux distributions as of December 2025.

Google Trends for Debian, Fedora and openSUSE worldwide 2025

Google Trends for Debian, Fedora and openSUSE worldwide 2025 (Source)

Conclusions as of December 2025

Obviously, it is challenging to propose comprehensive criteria and relevant metrics to compare Linux distributions for corporate environments and their suitability as the base distribution for a project like EU OS. This blog post does not replace a more thorough study. However, it offers some interesting insights to inform possible next steps.

  1. Debian is a multi-stakeholder non-profit organisation with legal entities in several jurisdictions. Unfortunately, its bootc support is only at an early stage, and it lacks support from some third-party software vendors such as SAP. For corporate environments, Debian does not offer alternatives to FreeIPA and Foreman, which work best with Fedora/RHEL/AlmaLinux but also support Debian.
  2. Fedora is in this comparison the 2nd largest community in terms of mirrors and GitHub repositories. Fedora has no legal entity independent from its main sponsor Red Hat. However, AlmaLinux is a multi-stakeholder non-profit organisation, albeit very US-centred. With RHEL leading enterprise Linux deployments for several years, most use cases are covered, including building Flatpak apps from Fedora sources. Fedora downstream distributions with bootc (ublue, Bazzite, Kinoite) already run on tens of thousands of systems, including in the EU public sector.
  3. openSUSE has most success in German-speaking countries and the US (possibly driven by SAP). Internationally, it is significantly less popular. openSUSE has no legal entity independent from its main sponsor SUSE, which is registered in Luxembourg and headquartered in Germany. For corporate environments, openSUSE does not offer alternatives to FreeIPA and Foreman, which support openSUSE only as clients. While Uyuni6 offers infrastructure/configuration management, it remains unclear whether it can replace Foreman for managing fleets of laptops. openSUSE’s bootc support is only at an early stage.

No Linux distribution fulfils all the criteria. Independently of the distribution, corporate environments would rely on FreeIPA, Foreman, Keycloak, podman, systemd, etc., which Red Hat sponsors. Debian is promising, but its work to support bootc is not receiving much attention. AlmaLinux is promising, but it would still need to prove its independence from politics, as it is a fairly new project (first release in 2021), and doubts remain about its capacity to support Fedora (as Red Hat does) in the long run. Microsoft blogged this week about their increasing contributions to Fedora. Maybe European and non-European companies can step up likewise in 2026, so that Fedora can become a multi-stakeholder non-profit organisation similar to AlmaLinux today.

Community Talk at Fosdem

My 30-minute talk on this topic has been accepted at the community conference FOSDEM 2026 in Brussels, Belgium! Please consider joining if you are at FOSDEM, and let me know your thoughts and questions. The organisers have not allocated timeslots yet, but I believe it will take place on Saturday, 31 January 2026.

Talk title
EU OS: learnings from 1 year advocating for a common Desktop Linux for the public sector
Track title
Building Europe’s Public Digital Infrastructure

All the best,
Robert

  1. I know that the public sector relies on vendors that ship embedded Linux on WiFi routers, traffic lights, fleets of cars, etc. If you have identified a relevant use case that is missing here, please feel free to let me know and I will consider adding it. ↩︎

  2. During testing for EU OS, I learnt that Red Hat raised the instruction set architecture (ISA) baseline to the x86-64-v3 microarchitecture level in RHEL 10. Consequently, my old Thinkpad x220 is no longer supported. While this may not be an issue for resourceful public sector organisations with recent laptops, it is an issue for less resourceful organisations, including many schools world-wide, but also in the EU. ↩︎

  3. I counted Source RPM packages with rpm -qa --qf '%{SOURCERPM}\n' | sort -u | wc -l in each given environment. ↩︎

  4. The public sector also wants to avoid vendor lock-in, which is just one specific form of ‘interference with the execution of its public mandate’. ↩︎

  5. FreeIPA does not run on openSUSE, but supports openSUSE clients. Alternative software for openSUSE may be available. Community members suggest Kanidm, but it lacks features and its development seems stalled. ↩︎

  6. Foreman runs only on Debian/Fedora/RHEL/AlmaLinux, but supports openSUSE clients. SUSE offers Rancher, which is limited to Kubernetes clusters. Uyuni and its enterprise-supported downstream, SUSE Multi-Linux Manager, offer configuration and infrastructure management based on Salt. ↩︎ ↩︎2

  7. https://www.debian.org/mirror/list ↩︎ ↩︎2

  8. https://mirrormanager.fedoraproject.org/mirrors?page_size=500&page_number=1 ↩︎ ↩︎2

  9. https://mirrors.almalinux.org ↩︎ ↩︎2

  10. https://mirrors.opensuse.org ↩︎ ↩︎2

  11. https://www.redhat.com/en/about/company-details ↩︎

  12. https://fortune.com/2024/07/26/suse-software-ceo-championing-open-source-drives-innovation-purpose/ ↩︎

  13. https://pages.community.sap.com/topics/linux/supported-platforms ↩︎

Image

home infra weekly recap: pre holidays 2025

Posted by Kevin Fenzi on 2025-12-20 23:22:31 UTC
Scrye into the crystal ball

Time for another weekly recap, but since I am on vacation for the holidays already, this one is things I've done at home. 🙂

There are often things I just don't have the time or energy for in my home infrastructure, so I add those things to a list and try to do them over the holidays. Of course I often don't get to them all, or even most of them, but it's a nice list to look at for things to do.

This week:

== December of docs progress

I've kept up on my 'december of docs' plan. I've done a pull request and/or processed some docs tickets every day. When we moved the infra docs to pagure a number of years ago, we opened tickets on all our standard operating procedures to review and update them, so I have been slowly working through them. So far about 30ish tickets closed and 20ish prs (some tickets I just closed because the sop was moot or didn't need any changes).

== iptables to nftables

I switched my home networks to use nftables directly. I just never got around to it before and kept using iptables. The conversion was pretty simple with iptables-restore-translate / ip6tables-restore-translate. I also went through all my rules and dropped a bunch of old ones I no longer needed. You might wonder why I don't just move to firewalld? I could have, but my home network is a bit complicated and firewalld just seemed like overhead/complexity. I got everything working, then the next day I happened to reboot my main home server and... my wireguard tunnels wouldn't work. I couldn't see why; the rules all looked fine. Finally I noticed that firewalld was starting and stepping all over my rules. It must have been enabled on install; the iptables service started before it, so firewalld just failed, but nftables loaded later, and firewalld messed it up.
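For anyone doing the same conversion: the workflow is roughly to dump the old rules (`iptables-save > rules.v4`), translate them (`iptables-restore-translate -f rules.v4`), review, and load the result with `nft -f`. The translated output is a plain nftables ruleset along these lines (a deliberately tiny made-up example, not my actual rules):

```
# Sketch of what iptables-restore-translate emits for a minimal
# accept-ssh/drop-everything-else INPUT policy; a real home-network
# ruleset is considerably larger.
table ip filter {
    chain INPUT {
        type filter hook input priority 0; policy drop;
        ct state established,related accept
        iifname "lo" accept
        tcp dport 22 accept
    }
}
```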

== Framework firmware day / fixing my media pc.

I have 4(!) framework motherboards. They were all behind on firmware, and my media pc (11th gen intel) had stopped working for some reason.

The 11th gen intel board was the original one I got with my first framework laptop, several years ago now. When I upgraded to the 12th gen intel one, I moved this motherboard to a coolermaster external case and repurposed it for my media pc. Things worked for a while, but then I couldn't get it to boot, and because it was in the external case it was hard to tell why. So, I pulled it out and stuck it into a laptop case and it booted fine, but I realized the firmware was so old it didn't handle the "I am not in a laptop" mode very well at all. This one needed me to enable lvfs-testing and update firmware, then download and use a usb to upgrade it again. The first update was needed in order to add support for upgrading firmware from the usb.

Next up was the 12th gen intel one I had gotten to replace the 11th gen. This one I also moved to a coolermaster case after upgrading to the ryzen board, and this one also wasn't booting. I swapped it into the laptop chassis, upgraded it to the latest firmware, and then left it in that chassis/laptop.

The first ryzen one, which I got to replace the 12th gen intel one, I decided to swap over to being the media center pc as it's faster/nicer. I got it updated in the laptop and swapped it into the coolermaster case, but then... it wouldn't boot. Red and blue flashing lights and no booting. Poking around on the net I found that you can get past this by pressing and holding the case open switch 10 times in a row. Indeed this worked to get it booting up. It's still a bit annoying though, because the ryzen board has a slightly different layout around the power switch and the coolermaster case doesn't work quite right on those boards like it does on the intel ones. I did manage to get it booting, but the power switch could use a bit of rework to avoid this problem. ;(

The last board is in my 'hot spare' laptop and was already up to date on firmware. Thanks lvfs!

== Some fun with 'health connect'

I played around with health connect on my grapheneos phone. It notified me that I could connect devices to it. I am not sure if this support is new or I just never noticed it before now.

My understanding of the way this works is that you can approve specific sensors to write data and applications that have permission to read that data. Everything stays on your phone unless you approve some application that syncs it off elsewhere.

In my case I enabled the 'number of steps' sensor (which currently is the only thing I have to write data into health connect) and then enabled the android home assistant app to read it. So, I now have a nice home assistant sensor that lets me see how many steps I walked each day. Kinda nice to have the historical data in home assistant.

I'm looking into getting a CGM (continuous glucose monitor) sensor, which I could also share with home assistant to keep historical data.

I'm a bit surprised that this setup is so reasonable.

comments? additions? reactions?

As always, comment on mastodon: https://fosstodon.org/@nirik/115754591222815006

Image

🛡️ PHP version 8.1.34, 8.2.30, 8.3.29, 8.4.16, and 8.5.1

Posted by Remi Collet on 2025-12-19 09:39:00 UTC

RPMs of PHP version 8.5.1 are available in the remi-modular repository for Fedora ≥ 41 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

RPMs of PHP version 8.4.16 are available in the remi-modular repository for Fedora ≥ 41 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

RPMs of PHP version 8.3.29 are available in the remi-modular repository for Fedora ≥ 41 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

RPMs of PHP version 8.2.30 are available in the remi-modular repository for Fedora ≥ 41 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

RPMs of PHP version 8.1.34 are available in the remi-modular repository for Fedora ≥ 41 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

ℹ️ These versions are also available as Software Collections in the remi-safe repository.

ℹ️ The packages are available for x86_64 and aarch64.

🛡️ These Versions fix 4 security bugs (CVE-2025-14177, CVE-2025-14178, CVE-2025-14180), so the update is strongly recommended.

Version announcements:

ℹ️ Installation: Use the Configuration Wizard and choose your version and installation mode.

Replacement of default PHP by version 8.5 installation (simplest):

On Enterprise Linux (dnf 4)

dnf module switch-to php:remi-8.5/common

On Fedora (dnf 5)

dnf module reset php
dnf module enable php:remi-8.5
dnf update

Parallel installation of version 8.5 as Software Collection

yum install php85

Replacement of default PHP by version 8.4 installation (simplest):

On Enterprise Linux (dnf 4)

dnf module switch-to php:remi-8.4/common

On Fedora (dnf 5)

dnf module reset php
dnf module enable php:remi-8.4
dnf update

Parallel installation of version 8.4 as Software Collection

yum install php84

Replacement of default PHP by version 8.3 installation (simplest):

On Enterprise Linux (dnf 4)

dnf module switch-to php:remi-8.3/common

On Fedora (dnf 5)

dnf module reset php
dnf module enable php:remi-8.3
dnf update

Parallel installation of version 8.3 as Software Collection

yum install php83

And soon in the official updates:

⚠️ To be noticed :

  • EL-10 RPMs are built using RHEL-10.1
  • EL-9 RPMs are built using RHEL-9.7
  • EL-8 RPMs are built using RHEL-8.10
  • intl extension now uses libicu74 (version 74.2)
  • mbstring extension (EL builds) now uses oniguruma5php (version 6.9.10, instead of the outdated system library)
  • oci8 extension now uses the RPM of Oracle Instant Client version 23.9 on x86_64 and aarch64
  • A lot of extensions are also available; see the PHP extensions RPM status (from PECL and other sources) page

ℹ️ Information:

Base packages (php)

Image

Image

Image

Image

Image

Software Collections (php83 / php84 / php85)

Image

Image

Image

Image

Image

Image

Friday Links 25-29

Posted by Christof Damian on 2025-12-19 12:33:00 UTC

A white city model in the middle of a dark room illuminated from above
Clearly, I have been slacking for the last weeks.

Just look at all the links from other people at the bottom! 

This might also be the last post for the year, as I will be enjoying the festive season. 

Nonetheless, a lot of great content this week. I discovered a few new blogs that will keep me busy for the future.    

Leadership

Are women less convincing or perceivers biased? Understanding differential reactions towards men and women’s intentions to exert influence  [Paper] - No.  "In the context of our study, we thus do not find evidence that women and men differ in their ability to exert influence, or that others are biased towards women when evaluating their speeches."

Futurespectives: learning from failures that haven’t happened yet - nice idea, similar to pre-mortems. 

Stop Looking for Silver Bullets and Start Looking at Your Context - I must confess I sometimes still fall for them.  

Most Technical Problems Are Really People Problems  - 100% agree

The Mathematics of the Christmas Rush: Why ‘One Last Push’ Guarantees You’ll Be Late - yep. 

40 questions to ask yourself every year - I'll try that. 

Engineering

Turbo Vision -  A modern port of Turbo Vision 2.0

Automatically merging dependabot PRs - I quite like to do that manually, but with many projects, this is probably a good idea.

AIs Exploiting Smart Contracts - I never liked them either. 

Linux kernel version numbers - short version: everything is stable and higher is newer 

Strategic re-architecture, moving beyond the “black hole” fear - this makes a lot of assumptions about the team :-) 

It’s The End Of Observability As We Know It (And I Feel Fine) - frankly, I would appreciate any observability. 

Programming peaked - maybe, or perhaps JavaScript peaked. 

Level 9 code archive is now open source - if you want to look at some ancient code

Urbanism

How London Built A Utopia [YouTube] - The Barbican 

Spain to launch €60 monthly public transport ticket for buses, Rodalies, and medium-distance rail network - Spanish Deutschlandticket! 

You Need a Train to Get to this Hotel [YouTube] - this is a bit specific

Almost 2,000 homeless people in Barcelona, 43% increase from 2023 - there is enough space for them. Instead, we are evicting more. 

AI vs. Human Drivers - we don't know yet if it is good or bad, probably bad. 

The Data on Self-Driving Cars Is Clear. We Have to Change Course. - another angle. 

Bad Cyclists? or… Bad Infrastructure?  [YouTube] - nice to see that the channel is back. Some of the infrastructure in Barcelona is a mess, and this crossing is pretty new. 

Access City Award - Spain is well represented! 

Car Culture, Part 1: The Battle for Disneyland - Autopia going electric. 

Mapping Diversity - embarrassing. 

Random Art

Latest painting (acrylic on canvas) - Lee Madgwick's latest

Forget the far right. The kids want a 'United States of Europe.' - I am not sure if these kids are actually alright. 

The original Mozilla "Dinosaur" logo artwork - I love the design, now I can also follow the artist. 

Kevin McCloud: ‘We measure the value of a home by the number of toilets it has – which is bonkers’ - toilets, guestrooms, garages, …

RIP John Varley - more for my reading list 

Size of Life - pretty visualisation - annoyingly not completely ordered by size 

Energy Predictions 2025 - solar and batteries will win

The Pulse: Could a 5-day RTO be around the corner for Big Tech? - thankfully, there isn't just big tech. 

Mythic Maps - more stylised Strava maps. I actually like these. 

After the Bubble - "there will be a crash and a hangover" 

Readers reply: What are the greatest life lessons? - "This, too, shall pass. Leoned"

A Remarkable Assertion from A16Z - by Neal Stephenson 

Trees Are So Weird  [YouTube] - WTF?! 

Other Links

I am away for two weeks and look what happens!  

Friday Links Disclaimer
Inclusion of links does not imply that I agree with the content of linked articles or podcasts. I am just interested in all kinds of perspectives. If you follow the link posts over time, you might notice common themes, though.
More about the links in a separate post: About Friday Links.
Image

Community Update – Week 51 2025

Posted by Fedora Community Blog on 2025-12-19 10:00:00 UTC

This is a report created by the CLE Team, a team of community members working in various Fedora groups, for example Infrastructure, Release Engineering, Quality, etc. This team also moves forward some initiatives inside the Fedora project.

Week: 15 – 19 December 2025

Forgejo

This team is working on deployment of forge.fedoraproject.org.
Ticket tracker

Fedora Infrastructure

This team is taking care of day to day business regarding Fedora Infrastructure.
It’s responsible for services running in Fedora infrastructure.
Ticket tracker

  • DC Move (rdu-cc to rdu3/rdu3-iso) outage completed.
    • Some hosts still having issues, see ticket
  • Zabbix moving forward as our main monitoring tool.
  • Updates+uptimes tool now removes hosts as they are removed from ansible inventory.
  • Dealt with HW warranties that need/not-need renewing for next year.
  • Docs are now migrated to https://forge.fedoraproject.org/infra/docs

CentOS Infra including CentOS CI

This team is taking care of day to day business regarding CentOS Infrastructure and CentOS Stream Infrastructure.
It’s responsible for services running in CentOS Infrastructure and CentOS Stream.
CentOS ticket tracker
CentOS Stream ticket tracker

Release Engineering

This team is taking care of day to day business regarding Fedora releases.
It’s responsible for releases, retirement process of packages and package builds.
Ticket tracker

  • Fedora 41 is now END OF LIFE.

QE

This team is working on day to day business regarding Fedora CI and testing.

If you have any questions or feedback, please respond to this report or contact us on #admin:fedoraproject.org channel on matrix.

The post Community Update – Week 51 2025 appeared first on Fedora Community Blog.

Image

🎲 PHP on the road to the 8.5.0 release

Posted by Remi Collet on 2025-09-26 05:04:00 UTC

Version 8.5.0 Release Candidate 1 is released. It now enters the stabilisation phase for the developers, and the test phase for the users.

RPMs are available in the php:remi-8.5 stream for Fedora ≥ 41 and Enterprise Linux ≥ 8 (RHEL, CentOS, Alma, Rocky...) and as a Software Collection in the remi-safe repository (or remi for Fedora).

 

⚠️ The repository provides development versions which are not suitable for production usage.

Also read: PHP 8.5 as Software Collection

ℹ️ Installation : follow the Wizard instructions.

Replacement of default PHP by version 8.5 installation, module way (simplest way):

Using dnf 4 on Enterprise Linux

dnf module switch-to php:remi-8.5/common

Using dnf 5 on Fedora

dnf module reset php
dnf module enable php:remi-8.5
dnf update

Parallel installation of version 8.5 as Software Collection (recommended for tests):

dnf install php85

⚠️ To be noticed :

  • EL-10 RPMs are built using RHEL-10.0
  • EL-9 RPMs are built using RHEL-9.6
  • EL-8 RPMs are built using RHEL-8.10
  • A lot of extensions are also available; see the PHP extension RPM status page and the PHP version 8.5 tracker
  • Follow the comments on this page for updates until the final version
  • Proposed as a Fedora 44 change

ℹ️ Information, read:

Base packages (php)
Image

Software Collections (php85)
Image

Image

⚙️ PHP version 8.3.28 and 8.4.15

Posted by Remi Collet on 2025-11-20 14:21:00 UTC

RPMs of PHP version 8.4.15 are available in the remi-modular repository for Fedora ≥ 41 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

RPMs of PHP version 8.3.28 are available in the remi-modular repository for Fedora ≥ 41 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).

ℹ️ The packages are available for x86_64 and aarch64.

ℹ️ There is no security fix this month, so no update for versions 8.1.33 and 8.2.29.

These versions are also available as Software Collections in the remi-safe repository.

⚠️ These versions introduce a regression in MySQL connection when using an IPv6 address enclosed in square brackets. See the report #20528. A fix is under review and will be released soon.

Version announcements:

ℹ️ Installation: use the Configuration Wizard and choose your version and installation mode.

Replacement of default PHP by version 8.4 installation (simplest):

dnf module switch-to php:remi-8.4/common

Parallel installation of version 8.4 as Software Collection

yum install php84

Replacement of default PHP by version 8.3 installation (simplest):

dnf module switch-to php:remi-8.3/common

Parallel installation of version 8.3 as Software Collection

yum install php83

And soon in the official updates:

⚠️ To be noticed :

  • EL-10 RPMs are built using RHEL-10.0 (next build will use 10.1)
  • EL-9 RPMs are built using RHEL-9.6 (next build will use 9.7)
  • EL-8 RPMs are built using RHEL-8.10
  • intl extension now uses libicu74 (version 74.2)
  • mbstring extension (EL builds) now uses oniguruma5php (version 6.9.10, instead of the outdated system library)
  • oci8 extension now uses the RPM of Oracle Instant Client version 23.8 on x86_64 and aarch64
  • a lot of extensions are also available; see the PHP extensions RPM status (from PECL and other sources) page

ℹ️ Information:

Base packages (php)

Image

Image

Software Collections (php83 / php84)

Image

Image

Image

💎 PHP version 8.5 is released!

Posted by Remi Collet on 2025-11-21 07:52:00 UTC

RC5 was GOLD, so version 8.5.0 GA was just released, on the planned date.

A great thanks to Volker Dusch, Daniel Scherzer and Pierrick Charron, our Release Managers, to all developers who have contributed to this new, long-awaited version of PHP, and to all testers of the RC versions who have allowed us to deliver a good-quality version.

RPMs are available in the php:remi-8.5 module for Fedora and Enterprise Linux ≥ 8 and as Software Collection in the remi-safe repository.

Read the PHP 8.5.0 Release Announcement and its Addendum for new features and detailed description.

For the record, providing these packages was the result of 6 months of work for me, starting in July with Software Collections of the alpha versions, then in September with module streams of the RC versions, plus a lot of work on extensions to provide a mostly complete PHP 8.5 stack.

ℹ️ Installation: read the Repository configuration and choose an installation mode, or follow the Configuration Wizard instructions.

Replacement of default PHP by version 8.5 installation (simplest):

Fedora (dnf 5):

dnf install https://rpms.remirepo.net/enterprise/remi-release-$(rpm -E %fedora).rpm
dnf module reset php
dnf module enable php:remi-8.5
dnf install php

Enterprise Linux (dnf 4):

dnf install https://rpms.remirepo.net/enterprise/remi-release-$(rpm -E %rhel).rpm
dnf module switch-to php:remi-8.5/common

Parallel installation of version 8.5 as Software Collection (recommended for tests):

yum install php85
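After a parallel installation, the system PHP (if any) is untouched. A quick sanity check, sketched here under the assumption that the collection ships a php85 wrapper command (the usual layout for these SCL packages):

```shell
# Quick sanity check after a parallel install; the php85 wrapper name
# is an assumption based on the usual SCL layout, not verified here.
for cmd in php php85; do
    if command -v "$cmd" >/dev/null 2>&1; then
        "$cmd" -v | head -n 1    # print the version banner
    else
        echo "$cmd: not installed"
    fi
done
```

Once the collection is installed, a shell with its environment loaded can usually be opened with scl enable php85 bash.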

⚠️ To be noted:

  • EL-10 RPMs are built using RHEL-10.0
  • EL-9 RPMs are built using RHEL-9.6
  • EL-8 RPMs are built using RHEL-8.10
  • This version will also be the default version in Fedora 44
  • Many extensions are already available; see the PECL extension RPM status page.

ℹ️ Information, read:

Base packages (php)
Image

Software Collections (php85)
Image

Image

Blank lock screen in Hyprland

Posted by Major Hayden on 2025-12-18 00:00:00 UTC

I moved over to Hyprland as my primary desktop environment several months ago after wrestling with some other Wayland desktop environments. It does plenty of things well and finally allowed me to do screen sharing during meetings without much hassle.

A couple of small utilities, hypridle and hyprlock, handle idle time and locking the screen when I step away from my desk. However, I kept coming back after lunch to find that both of my displays were unresponsive, with nothing but a blank screen after unlocking.

Diagnosing the issue #

I ran into this issue when I’d come back from lunch, hit the spacebar, and both monitors remained in power save mode. The power lights on both monitors were blinking, indicating that they were still in a low-power state.

If I turned off each monitor and turned it back on, the displays would come back on about 80% of the time. Power cycling the displays was annoying and it became more annoying when I found that my workspaces had migrated between the monitors. Nothing was in the right place any longer! 😭

I finally got into a situation where one monitor powered up and the other stayed off! Time to run some diagnostic commands!

Digging in #

You can list the monitors in hyprland with hyprctl monitors all. Narrow that down by specifically looking for the DPMS (Display Power Management Signaling) status with this command:

$ hyprctl monitors | grep -E "(Monitor|dpms|disabled)"

Monitor DP-1 (ID 0):
	dpmsStatus: 1
	disabled: false
Monitor DP-2 (ID 1):
	dpmsStatus: 1
	disabled: false

In my case, the DPMS status for both monitors was 1, which means both monitors are on. Neither monitor is disabled. However, the monitor connected to DP-1 was still blank!

Even ddcutil said the same thing:

$ ddcutil detect

Display 1 
 I2C bus: /dev/i2c-9
 DRM_connector: card1-DP-1
 EDID synopsis:
 Mfg id: DEL - Dell Inc.
 Model: DELL U2723QE
 Product code: 17016 (0x4278)
 Serial number: 85P0F34
 Binary serial number: 1128482124 (0x4343454c)
 Manufacture year: 2024, Week: 38
 VCP version: 2.1

Display 2
 I2C bus: /dev/i2c-10
 DRM_connector: card1-DP-2
 EDID synopsis:
 Mfg id: DEL - Dell Inc.
 Model: DELL U2723QE
 Product code: 17016 (0x4278)
 Serial number: 55P0F34
 Binary serial number: 1128481356 (0x4343424c)
 Manufacture year: 2024, Week: 38
 VCP version: 2.1

Then I wondered if I could just cycle the DPMS and bring them both back:

$ hyprctl dispatch dpms off; sleep 1; hyprctl dispatch dpms on

Both monitors turned on and displayed my desktop! But why?

Could it be amdgpu? #

Checking the system journal with journalctl revealed an interesting message:

kernel: amdgpu 0000:03:00.0: [drm] REG_WAIT timeout 1us * 100 tries - dcn32_program_compbuf_size line:139

This suggests there’s some kind of DRM timeout when the AMD GPU driver is trying to do something.

The Arch Linux wiki suggests that disabling AMD’s low power state, GFXOFF, might help with similar issues. You can set a kernel parameter such as amdgpu.ppfeaturemask=0xfffd7fff to disable it. I’ve had bad luck in the past with these amdgpu parameters, so I wanted a workaround for now until I could test it more.
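For reference, here’s a sketch of how that kernel parameter could be applied and checked on Fedora using grubby; the mask value is the Arch wiki’s suggestion, and I haven’t verified that it actually fixes the timeout:

```shell
# Sketch: disable the amdgpu low-power feature via a kernel parameter.
# The mask value is the Arch wiki suggestion; untested as a fix here.
# Apply to all installed kernels (requires root):
#   grubby --update-kernel=ALL --args="amdgpu.ppfeaturemask=0xfffd7fff"
#
# After a reboot, confirm whether the override is active:
if grep -q 'amdgpu.ppfeaturemask' /proc/cmdline; then
    echo "ppfeaturemask override is active"
else
    echo "ppfeaturemask override is not set"
fi
```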

A (sorta) elegant workaround #

Hyprland has a key binding system that allows you to execute certain key combinations even when the screen is locked. I was already using some of these key bindings so that I could adjust my music even with the screen locked1:

> grep bindl ~/.config/hypr/hyprland.conf

bindl = , XF86AudioNext, exec, playerctl next
bindl = , XF86AudioPause, exec, playerctl play-pause
bindl = , XF86AudioPlay, exec, playerctl play-pause
bindl = , XF86AudioPrev, exec, playerctl previous

The normal key bindings in hyprland use bind, but bindl works even when the screen is locked. Here’s what I added:

> grep bindl ~/.config/hypr/hyprland.conf

bindl = , XF86AudioNext, exec, playerctl next
bindl = , XF86AudioPause, exec, playerctl play-pause
bindl = , XF86AudioPlay, exec, playerctl play-pause
bindl = , XF86AudioPrev, exec, playerctl previous
bindl = $mainMod SHIFT, D, exec, hyprctl dispatch dpms off && sleep 1 && hyprctl dispatch dpms on

Now I can hold down Mod + Shift + D when I return to my desk after lunch and both monitors come back on instantly!

I’ll let you know if I get around to messing with amdgpu.ppfeaturemask to see if that resolves the underlying issue. 🤓


  1. This was a family request after I went for a run and left some slightly-too-aggressive music playing by accident. 😅 ↩︎

Image

Invalid bug reports are sometimes documentation bugs

Posted by Ben Cotton on 2025-12-17 21:24:41 UTC

Most open source maintainers know the pain of dealing with invalid bugs. These are bugs that are already listed as known issues, that describe intended behavior, that aren’t reproducible, that affect unsupported versions, or that are invalid for any number of other reasons. They waste maintainer time in triage, investigation, and response. And they waste submitter time, too. Everyone loses. While it’s frustrating to deal with invalid bug reports, almost no one files them on purpose.

Researchers (including Muhammad Laiq et al) have investigated invalid bug reports. One of the recommendations is to improve system documentation. This makes perfect sense. When there’s a difference between the expected and actual behavior of software, that’s a software bug. When there’s a difference between the user-expected behavior and the developer-expected behavior, that’s a documentation bug.

There will always be some people who don’t read the documentation. But those who do will file better bugs if your documentation is accurate, easy to find, and understandable. As you notice patterns in invalid bug reports, look for places to improve your documentation. Just like the dirt trails through a grassy area can tell you where the sidewalks should have been, the invalid bugs can show you where your documentation needs to get better. (Note that this applies to process documentation as well as software documentation.)

As with all interactions in your project, a little bit of grace goes a long way. It’s frustrating to deal with invalid bug reports, but keep in mind that the person who filed it is trying to help make your project better. And often their bug report represents a real bug — just not the one they think.

This post’s featured photo by Neringa Hünnefeld on Unsplash.

The post Invalid bug reports are sometimes documentation bugs appeared first on Duck Alignment Academy.

Image

21 years of blogging

Posted by Jonathan McDowell on 2025-12-17 17:06:00 UTC

21 years ago today I wrote my first blog post. Did I think I’d still be writing all this time later? I’ve no idea, to be honest. I’ve always had the impression my readership is small, mostly people who know me in some manner, and I post to let them know what I’m up to in more detail than snippets of IRC conversation can capture. Or I write to make notes for myself (I frequently refer back to things I’ve documented here). I write less about my personal life than I used to, but I still occasionally feel the need to mark some event.

From a software PoV I started out with Blosxom, migrated to MovableType in 2008, then ditched that for Jekyll in 2015, when the Open Source variant disappeared (which is also when I started putting it all in git). And I have stuck there since. The static generator format works well for me, and I outsource comments to Disqus - I don’t get a lot, I can’t be bothered with the effort of trying to protect against spammers, and folk who don’t want to use it can easily email or poke me on the Fediverse. If I ever feel the need to move from Jekyll I’ll probably take a look at Hugo, but thankfully at present there’s no push factor to switch.

It’s interesting to look at my writing patterns over time. I obviously started keen, and peaked with 81 posts in 2006 (I’ve no idea how on earth that happened), while 2013 had only 2. Generally I write less when I’m busy, or stressed, or unhappy, so it’s kinda interesting to see how that lines up with various life events.

Blog posts over time

During that period I’ve lived in 10 different places (well, 10 different houses/flats, I think it’s only 6 different towns/cities), on 2 different continents, working at 6 different employers, as well as a period where I was doing my Masters in law. I’ve travelled around the world, made new friends, lost contact with folk, started a family. In short, I have lived, even if lots of it hasn’t made it to these pages.

At this point, do I see myself stopping? No, not really. I plan to still be around, like Flameeyes, to the end. Even if my posts are unlikely to hit the frequency from back when I started out.

Image

Using OpenSearch data streams in syslog-ng

Posted by Peter Czanik on 2025-12-17 13:01:57 UTC

Recently, one of our power users contributed OpenSearch data streams support to syslog-ng, which reminded me to also do some minimal testing on the latest OpenSearch release with syslog-ng. TL;DR: both worked just fine.

Read more at https://www.syslog-ng.com/community/b/blog/posts/using-opensearch-data-streams-in-syslog-ng

Image

syslog-ng logo

Image

Building Bridges: Microsoft’s Participation in the Fedora Linux Community

Posted by Brian (bex) Exelbierd on 2025-12-17 09:30:00 UTC

While I was at Flock 2025, I had the opportunity to share what Microsoft has been contributing to Fedora over the last year. I finally got a blog post written for the Microsoft Tech Community Linux and Open Source Blog.

Read the full blog over at the Microsoft Tech Community where this was originally posted.

Image

F43 FESCo Elections: Interview with Máirín Duffy (duffy/mizmo)

Posted by Fedora Community Blog on 2025-12-17 08:06:00 UTC

This is a part of the Fedora Linux 43 FESCo Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts today, Wednesday 17th December and closes promptly at 23:59:59 UTC on Wednesday, 7th January 2026.

Interview with Máirín Duffy

  • FAS ID: duffy
  • Matrix Rooms: My long-term home has been Fedora Design, but I also hang out in Podman, Fedora Marketing, and Fedora AI/ML.

Questions

Why do you want to be a member of FESCo and how do you expect to help steer the direction of Fedora?

I have used Fedora as my daily driver since 2003 and have actively contributed to Fedora since 2004. (Example: I designed the current Fedora logo and website design.) I am very passionate about the open source approach to technology. I first started using Linux as a high school student (my first Linux was Red Hat 5.1) and being able to use free software tools like Gimp when I couldn’t afford Photoshop made an outsized impact on my life. (I explain my background in Linux and open source in-depth in this interview with Malcolm Gladwell: https://youtu.be/SkXgG6ksKTA?si=RMXNzyzH9Tr6AuwN )

Technology has an increasingly large impact on society. We should have agency over the technology that impacts our lives, and open source is how we provide that agency. We’re now in a time period with a new disruptive technology (generative AI) that, regardless of whether you think it is real or not, is having a real impact on computing. Fedora and other open source projects need to be able to provide the benefits of this new technology the open source way, using open source software. Small, local models that are easy for our users to deploy on their own systems using open source tooling will give them the ability to benefit from AI’s strengths without having to sacrifice the privacy of their data.

There is a lot of hype around AI, and a lot of very legitimate concerns around its usage, including the intellectual property status of the pre-training data, not having enough visibility into what is in the pre-training data sets, the working conditions under which some of the data is labeled, the environmental impact of the training process, and the ethics of its usage. Open source projects in particular are getting pummeled by scraping bots hungry to feed coding models. There are folks in the tech industry who share these legitimate concerns and prefer to avoid AI, hoping that the bubble will just pop and it will go away. This strategy carries significant risks, however, and we need a more proactive approach. The technology has legitimate uses, and the hype is masking them. When the hype dies down and the real value of this new technology is more visible, it will be important for the type of community members we have in Fedora, with their commitment to open source principles and genuinely helping people, to have had a seat at the table to shape this technology.

(You can see a short video where I talk a bit more indepth about the pragmatic, privacy and open source-focused approach I take to AI here: https://youtu.be/oZ7EflyAPUw?si=HSbNhq_3NelXoX2J)

In the past I have been quite skeptical about generative AI and worried about its implications for open source. (I continue to be skeptical and annoyed by the hype surrounding it.) I’ve spent the past couple of years looking at open source licensed models and building open source generative AI tooling – getting hands on, deep experience to understand it – and as a result I have seen first hand the parts of this technology that have real value. I want FESCo to be able to make informed decisions when AI issues come up.

My background is in user experience engineering, and I am so excited about what this technology will mean for improving usability and accessibility for users of open source software. For example, we never have enough funding or interest to solve serious a11y problems; now we could generate text summaries of images & describe the screen out loud with high-quality audio from text-to-voice models for low vision users! I want open source to benefit from these and even more possibilities to reach and help more people so they can enjoy software freedom as well.

I have served in multiple governance roles in Fedora including time on the Fedora Council, the Mindshare Committee, lead of various Fedora Outreachy rounds (I have mentored dozens of interns in Fedora), and founder / lead of the Design team over many years. More importantly, I have deep Linux OS expertise, I have deep expertise in user experience, and I have a depth in AI technology to offer to FESCo. I believe my background and skills will enable FESCo to make responsible decisions in the best interest of open source and user agency, particularly around the usage of AI in Fedora and in the Fedora community. We will absolutely need to make decisions as a governing group in the AI space, and they should be informed by that specific expertise.

How do you currently contribute to Fedora? How does that contribution benefit the community?

I founded and ran the Fedora Design Team for 17 years. It was the first major Linux distribution community-led design team, and often as a team we’ve been asked by other distros and open source projects for help (so we expanded to call ourselves the “Community Design Team.”) Over the years I’ve designed the user experience and user interfaces for many components in Fedora including our background wallpapers, anaconda, virt-manager, the GNOME font-chooser, and a bunch of other stuff. I moved on from the Fedora Design role to lead design for Podman Desktop and to work more with the Podman team (who are also part of the Fedora community) for a couple of years, and I also led the InstructLab open source LLM fine-tuning project and corresponding Linux product from Red Hat (RHEL AI.) For the past year or so I have returned to working on core Linux on the Red Hat Enterprise Linux Lightspeed team, and my focus is on building AI enhancements to the Linux user experience. My team is part of the Fedora AI/ML SIG and we’re working on packaging user-facing components and tooling for AI/ML for Fedora, so folks who would like to work with LLMs can do so and the libraries and tools they need will be available. This includes building and packaging the linux-mcp-server and packaging goose, a popular open source AI agent, and all of their dependencies.

My career has focused on benefiting Fedora users by improving the user experience of using open source technology, and being collaborative and inclusive while doing so.

How do you handle disagreements when working as part of a team?

Data is the best way to handle disagreements when working as part of a team. Opinions are wonderful and everyone has them, but decisions are best made with real data. Qualitative data is just as important as quantitative data, by the way. It can be gathered by talking directly to the people most impacted by the decision (not necessarily those who are loudest about it) and learning their perspective, then informing the decision at hand with that perspective.

A methodology I like to follow in the face of disagreements is “disagree and let’s see.” (This was coined by Molly Graham, a leadership expert.) A decision has to be made, so let’s treat it like an experiment. I’ll agree to run an experiment, and track the results (“let’s see”) and advocate for a pivot if it turns out that the results point to another way (and quickly.) Being responsible to track the decision and its outcomes and bringing it back to the table, over time, helps build trust in teams like FESCo so folks who disagree know that if the decision ended up being the wrong one, that it can and will be revisited based on actual outcomes.

Another framework I like to use in disagreements is called 10-10-10, created by Suzy Welch. It involves thinking through: how will this decision matter in 10 minutes? How about 10 months? How about 10 years? This frame of thought can defuse some of the chargedness of a disagreement when all of the involved people realize the short- or long-term nature of the issue together at the same time.

Acknowledging legitimate concerns and facing them head on instead of questioning or sidelining others’ lived experience and sincerely-held beliefs and perspectives is also incredibly important. Listening and building bridges between community members with different perspectives, and aligning them to the overall project’s goals – which we all have in common as we work in this community – really helps folks look above the fray and be a little more open-minded.

What else should community members know about you or your positions?

I understand there is a campaign against my running for FESCo because a colleague and I wrote an article that walked through real, undoctored debugging sessions with a locally-hosted, open source model in order to demonstrate the linux-mcp-server project.

I want to make it clear that I believe any AI enhancements that are considered for Fedora need a simple opt-in button, and no AI-based solutions should be the default. (I’ve spoken about this before, recently on the Destination Linux Podcast: https://youtu.be/EJZkJi8qF-M?t=3020) The user base of Fedora and other open source operating systems come to their usage in part due to wanting agency over the technology they use and having ownership and control over their data. The privacy-focused aspects of Fedora have spanned the project’s existence and that must be respected. We cannot ignore AI completely, but we must engage with it thoughtfully and in a way that is respectful of our contributors and user base.

To that end, should you elect to grant me the privilege of a seat on FESCo this term:

  • I intend to vote in opposition to proposals that involve bundling proprietary model weights in Fedora.
  • I intend to vote in opposition to proposals that involve sending Fedora user data to third party AI services.
  • I intend to vote in opposition to proposals to turn AI-powered features on by default in any Fedora release.
  • I intend to vote in favor of proposals to enact AI scraper mitigation strategies and to partner with other open source projects to fight this nuisance.

My core software engineering background is in user experience and usability, and I believe in the potential of small, local models to improve our experience with software without compromising our privacy and agency. I welcome ongoing community input on these principles and other boundaries you’d like to see around emerging technologies in Fedora.

The post F43 FESCo Elections: Interview with Máirín Duffy (duffy/mizmo) appeared first on Fedora Community Blog.

Image

F43 FESCo Elections: Interview with Timothée Ravier (siosm/travier)

Posted by Fedora Community Blog on 2025-12-17 08:05:00 UTC

This is a part of the Fedora Linux 43 FESCo Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts today, Wednesday 17th December and closes promptly at 23:59:59 UTC on Wednesday, 7th January 2026.

Interview with Timothée Ravier

  • FAS ID: siosm
  • Matrix Rooms: Fedora Atomic Desktops, Fedora CoreOS, Fedora bootc, Fedora KDE, Fedora Kinoite, Fedora Silverblue

Questions

Why do you want to be a member of FESCo and how do you expect to help steer the direction of Fedora?

I want to be a member of FESCo to represent the interests of users, developers and maintainers of what we call Atomic, Bootable Container, Image Based or Immutable variants of Fedora (CoreOS, Atomic Desktops, IoT, bootc, etc.).

I think that what we can build around those variants of Fedora is the best path forward for broader adoption of Fedora and Linux in the general public and not just in developer circles.

I thus want to push for better consideration of the challenges specific to Atomic systems in all parts of Fedora: change process, infrastructure, release engineering, etc.

I also want to act as a bridge with other important communities built around this ecosystem such as Flathub, downstream projects such as Universal Blue, Bazzite, Bluefin, Aurora, and other distributions such as Flatcar Linux, GNOME OS, KDE Linux, openSUSE MicroOS, Aeon or ParticleOS.

How do you currently contribute to Fedora? How does that contribution benefit the community?

I primarily contribute to Fedora as a maintainer for the Fedora Atomic Desktops and Fedora CoreOS. I am also part of the KDE SIG and involved in the Bootable Containers (bootc) initiative.

My contributions are focused on making sure that those systems become the most reliable platform for users, developers and contributors. This includes both day to day maintenance work, development such as enabling safe bootloader updates or automatic system updates and coordination of changes across Fedora (switching to zstd compressed initrds as an example).

While my focus is on the Atomic variants of Fedora, I also make sure that the improvements I work on benefit the entire Fedora project as much as possible.

I’ve listed the Fedora Changes I contributed to on my Wiki profile: https://fedoraproject.org/wiki/User:Siosm.

How do you handle disagreements when working as part of a team?

Disagreements are a normal part of the course of a discussion. It’s important to give everyone involved the time to express their positions and share their context. Limiting the scope of a change or splitting it into multiple phases may also help.

Reaching a consensus should always be the preferred route but sometimes this does not happen organically. Thus we have to be careful to not let disagreements linger on unresolved and a vote is often needed to reach a final decision. Not everyone may agree with the outcome of the vote but it’s OK, we respect it and move on.

Most decisions are not set in stone indefinitely and it’s possible to revisit one if the circumstances changed. A change being denied at one point may be accepted later when improved or clarified.

This is mostly how the current Fedora Change process works and I think it’s one of the strengths of the Fedora community.

What else should community members know about you or your positions?

I’ve been a long time Fedora user. I started contributing more around 2018 and joined Red Hat in 2020 where I’ve been working on systems such as Fedora CoreOS and RHEL CoreOS as part of OpenShift. I am also part of other open source communities such as Flathub and KDE and I am committed to the upstream first, open source and community decided principles.

The post F43 FESCo Elections: Interview with Timothée Ravier (siosm/travier) appeared first on Fedora Community Blog.

Image

F43 FESCo Elections: Interview with Daniel Mellado (dmellado)

Posted by Fedora Community Blog on 2025-12-17 08:04:00 UTC

This is a part of the Fedora Linux 43 FESCo Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts today, Wednesday 17th December and closes promptly at 23:59:59 UTC on Wednesday, 7th January 2026.

Interview with Daniel Mellado

  • FAS ID: dmellado
  • Matrix Rooms: #ebpf, #fedora-devel, #rust, #fedora-releng, and a lot of #fedora-* 😉

Questions

Why do you want to be a member of FESCo and how do you expect to help steer the direction of Fedora?

I accepted this nomination because I believe FESCo would benefit from fresh perspectives, and I think that these new perspectives will also help to lower the entrance barriers for Fedora.

Governance bodies stay healthy when they welcome new voices alongside experienced members, and I want to be part of that renewal.

Technologies like eBPF are redefining what’s possible in Linux–observability, security, networking–but they also bring packaging challenges that we haven’t fully solved, such as kernel version dependencies, CO-RE relocations, BTF requirements, and SELinux implications.

On FESCo, I want to help Fedora stay ahead of these challenges rather than merely reacting to them. I want to advocate for tooling and guidelines that will help make complex kernel-dependent software easier to package.

How do you currently contribute to Fedora? How does that contribution benefit the community?

I founded and currently lead the Fedora eBPF Special Interest Group. Our goal is to make eBPF a first-class citizen in Fedora, improving the experience for the developers who are building observability, security, and networking tools and figuring out how to package software that has deep kernel dependencies.

On the packaging side, I maintain bpfman (an eBPF program manager) and several Rust crates that support eBPF and container tooling. I’ve also learned the hard way that Rust dependency vendoring is… an adventure. 😅

Before Fedora, I spent years in the OpenStack community. I served as PTL (Project Team Lead) for the Kuryr project, the bridge between container and OpenStack networking and was active in the Kubernetes SIG. That experience taught me a lot about running open source projects: building consensus across companies, mentoring contributors, managing release cycles, and navigating the politics of large upstream communities.

I try to bring that same upstream, community-first mindset to Fedora. My hope is that the patterns we establish in the eBPF SIG become useful templates for other packagers facing similar challenges.

How do you handle disagreements when working as part of a team?

I start by assuming good intent. If someone is in the discussion, it’s because they also care about the outcome, even though they may have another point of view.

I also try not to speculate about why someone holds a particular view. Assigning motives derails technical conversations fast. Instead, I focus on keeping things facts-driven: what does the code actually do, what do users need, what are the real constraints? Egos don’t ship software, and sticking to concrete data keeps discussions productive.

When disagreements persist, I find it helps to identify what everyone does agree on and use that as a new starting point. You’d be surprised how often this unblocks a stalled conversation.

Also, I think that it’s important to step back. It’s tempting to want the final word, but that can drag things on forever without real progress. Miscommunication happens and not every discussion needs a winner.

What else should community members know about you or your positions?

I believe in Fedora’s Four Foundations: Freedom, Friends, Features, First. What draws me to this community is the “Friends” part: there’s a place in Fedora for anyone who wants to help, regardless of background or technical skill level. Open source is at its best when it’s genuinely welcoming, and I want FESCo to reflect that.

From my time in the OpenStack community, I learned that healthy projects focus on protecting, empowering, and promoting: protecting the open development process and the values that make the community work; empowering contributors to do great work without painful barriers; and promoting not just the software, but the people who build and use it. I try to bring that mindset to everything I do.

I also believe strongly in working upstream. The changes we make should benefit not just Fedora users, but the broader open source ecosystem. When we solve a hard problem here, that knowledge should flow back to upstream projects and other distributions.

I’ll be at FOSDEM 2026. FOSDEM embodies what I love about open source: a non-commercial space where communities meet to share knowledge freely. If you’re there, come say hi.

The post F43 FESCo Elections: Interview with Daniel Mellado (dmellado) appeared first on Fedora Community Blog.

Image

F43 FESCo Elections: Interview with Kevin Fenzi (kevin/nirik)

Posted by Fedora Community Blog on 2025-12-17 08:03:00 UTC

This is a part of the Fedora Linux 43 FESCo Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts today, Wednesday 17th December and closes promptly at 23:59:59 UTC on Wednesday, 7th January 2026.

Interview with Kevin Fenzi

  • FAS ID: kevin
  • Matrix Rooms: I’m probably most active in the following rooms. I’m available, answer notifications, and watch many other channels as well, but these three are the most active for me:
    • noc -> day to day infra stuff, handling alerts, talking with other infra folks
    • admin -> answering questions, helping fix issues, some team discussions
    • releng -> release engineering team discussions, answering questions, handling issues, etc.

Questions

Why do you want to be a member of FESCo and how do you expect to help steer the direction of Fedora?

I think I still provide useful historical context. I can draw on that long history to know when things are good or bad, or have been tried before and have lessons to teach us.

FESCo steers things through the proposals we approve or reject. I do think we should be deliberate, try to reach consensus, and accept any input we can get in order to come to good decisions. Sometimes things won’t work out that way, but that should really be the exception instead of the rule.

How do you currently contribute to Fedora? How does that contribution benefit the community?

I’m lucky to be paid by Red Hat to work on infrastructure, which I like to hope is useful to the community. In my spare time I work on packages, answer questions where I can, unblock people, do release engineering work, and moderate Matrix rooms and mailing lists.

I really hope my work contributes to a happier and more productive community.

How do you handle disagreements when working as part of a team?

I try to reach consensus where possible. Sometimes that means taking more time or involving more people, but if it can be reached I think it’s really the best way to go.

Sometimes, of course, you cannot reach a consensus and someone has to make a call. If that’s something I am heavily involved in or in charge of, I do so. I’m happy that we have a council as an override of last resort in case folks want to appeal some particularly acrimonious decision. Also, as part of a team you sometimes have to delegate something to someone and trust their judgement in how it’s done.

What else should community members know about you or your positions?

I think there have been a number of big debates recently, and probably more to come. We need to remember we are all friends and try to see things from other people’s points of view.

My hero these days seems to be Treebeard: “Don’t be hasty.”

My Matrix/email is always open for questions from anyone.

The post F43 FESCo Elections: Interview with Kevin Fenzi (kevin/nirik) appeared first on Fedora Community Blog.


F43 FESCo Elections: Interview with Fabio Alessandro Locati (fale)

Posted by Fedora Community Blog on 2025-12-17 08:02:00 UTC

This is a part of the Fedora Linux 43 FESCo Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts today, Wednesday 17th December and closes promptly at 23:59:59 UTC on Wednesday, 7th January 2026.

Interview with Fabio Alessandro Locati

  • FAS ID: fale
  • Matrix Rooms: I can be easily found in #atomic-desktops:fedoraproject.org, #bootc:fedoraproject.org, #coreos:fedoraproject.org, #devel:fedoraproject.org, #epel:fedoraproject.org, #event-devconf-cz:fedoraproject.org, #fedora:fedoraproject.org, #fedora-arm:matrix.org, #fedora-forgejo:fedoraproject.org, #fosdem:fedoraproject.org, #flock:fedoraproject.org, #golang:fedoraproject.org, #iot:fedoraproject.org, #meeting:fedoraproject.org, #meeting-1:fedoraproject.org, #mobility:fedoraproject.org, #python:fedoraproject.org, #rust:fedoraproject.org, #silverblue:fedoraproject.org, #sway:fedoraproject.org, #websites:fedoraproject.org

Questions

Why do you want to be a member of FESCo and how do you expect to help steer the direction of Fedora?

I have been part of the Fedora community for many years now: my FAS account dates back to January 2010 (over 15 years ago!), and I’ve contributed in many different roles to the Fedora project. I started as an ambassador, then became a packager and packaging mentor, and joined multiple SIGs, including Golang, Sway, and Atomic Desktop. For many years, I’ve been interested in immutable Linux desktops, Mobile Linux, and packaging challenges for “new” languages (such as Go), which are also becoming more relevant in the Fedora community now. Having contributed to the Fedora Project for a long time in many different areas, and given my experience and interest in other projects, I can bring those perspectives to FESCo.

How do you currently contribute to Fedora? How does that contribution benefit the community?

Currently, many of my contributions fall in the packaging area: I keep updating the packages I administer and exploring different solutions for packaging new languages and maintaining the Sway artifacts.
My current contributions are important to keeping Fedora first, not only in terms of package versions but also in terms of best practices and ways to reach our users.

Additionally, I served for the last two cycles (F41/F42) as a FESCo member, steering the community toward engineering decisions that were sensible in both the short and long term.

How do you handle disagreements when working as part of a team?

I think disagreements are normal in communities. I have a few beliefs that guide me in entering and during any disagreement:

  1. I always separate the person from their argument: this allows me to discuss the topic without being influenced by the person making the points.
  2. I keep in mind that everyone involved probably agrees on far more than they disagree on (otherwise, they would not be part of the conversation in the first place): this lets me see the two sides of a disagreement as having much more in common than not.
  3. During a discussion, I hold to the belief that the people on the other side of the disagreement are trying to make what they believe is right become a reality: this pushes me to look for aspects of their point of view that I had not considered or had not weighted appropriately.

Thanks to my beliefs, I always manage to keep disagreements civil and productive, which often leads to a consensus. It is not always possible to agree on everything, but it is always possible to disagree in a civil, productive way.

What else should community members know about you or your positions?

Let’s start with the fact that I’m a Red Hat employee, though what I do in my day job has nothing to do with Fedora (I’m an Ansible specialist, so I have nothing to do with RHEL either), so I have no ulterior motives for my contributions. I use Fedora on many devices (starting from my laptop) and have done so for many years. I contribute to the Fedora Project because I found in it and its community the best way to create the best operating system :).

I’ve been using Sway exclusively on my Fedora desktop since I brought it into Fedora in 2016. On my other systems, I use Fedora Server, Fedora CoreOS, or Fedora IoT, though lately I prefer the latter for all new non-desktop systems.

I see the Fedora Community as one community within a sea of communities (upstream, downstream, similarly located ones, etc.). I think the only way for all those communities to be successful is to collaborate, creating a higher-level community where open-source communities collaborate for the greater good, which, in my opinion, would be a more open-source world.

The post F43 FESCo Elections: Interview with Fabio Alessandro Locati (fale) appeared first on Fedora Community Blog.


F43 FESCo Elections: Interview with Dave Cantrell (dcantrell)

Posted by Fedora Community Blog on 2025-12-17 08:01:00 UTC

This is a part of the Fedora Linux 43 FESCo Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts today, Wednesday 17th December and closes promptly at 23:59:59 UTC on Wednesday, 7th January 2026.

Interview with Dave Cantrell

  • FAS ID: dcantrell
  • Matrix Rooms: Looking right now, it appears I’m in Fedora Council, FRCL, Introductions, Announcements, Fedora Meeting, and Fedora Meeting 1. I tend to join rooms that people ask me to join. I also use Matrix a lot for DMs, and people find me that way. Primarily, I rely on Matrix for our online meetings and DMs. Email continues to be the most reliable way to reach me and have a conversation.

Questions

Why do you want to be a member of FESCo and how do you expect to help steer the direction of Fedora?

I have been a member of FESCo for a while now and enjoy doing it. Fedora is really good at bringing in new technologies and ensuring that we minimize disruption for users. I enjoy the technical discussions and working together to ensure that changes account for everything before we bring them in. Making and having a plan is often difficult and requires a lot of coordination.

I am also interested in mentoring people interested in running for FESCo and introducing some changes to how we staff FESCo. There are discussions going on right now for that, but an important thing for me is ensuring we have a succession plan for FESCo that keeps Fedora going without burning people out. If you are interested in being on FESCo, please reach out to me!

Lastly, I feel very strongly about open source software and the licenses we have around it. I believe it has fundamentally changed our industry and made it a better place. We continue to see changes come into Fedora that challenge those ideas, and I want to ensure that Fedora’s positions on open source, creator rights, and licensing are not lost or eroded.

How do you currently contribute to Fedora? How does that contribution benefit the community?

My job at Red Hat is working on the Software Management team. The two big projects on that team are dnf and rpm. But we also have a lot of dnf and rpm adjacent software. I am upstream for or contribute to numerous other projects. I also maintain a variety of packages in Fedora and EPEL as well as RHEL (and by extension CentOS Stream).

I am a sponsor for new contributors and I help mentor new developers in both the community and at Red Hat (that is, developers at Red Hat wanting to participate more in Fedora).

I am a member of the Fedora Council where I focus on engineering issues when we discuss large topics and strategy.

How do you handle disagreements when working as part of a team?

Communication has always been a challenge in our industry and community. We have language differences, cultural differences, and communication medium differences. One thing I notice a lot is that some discussions lead to people taking things personally. Often the root cause of that is people feeling like they are not being heard. A solution I have found is to suggest changing the communication medium. I am perfectly fine communicating over email, or chat, or other online methods. But talking in person can go a long way. We know the value of having in-person events and a lot of people find that their interactions with people in the community improve simply because they finally met someone in person at an event. While that is not always possible, we do have video conference capabilities these days. I do use that in Fedora and it helps quite a bit.

For everyone, if you find yourself in a frustrating situation, I recommend first stepping away and collecting your thoughts. Then remind yourself why everyone is involved in the first place. We all want to achieve the same things, so let’s try to work towards that and find common ground. And if necessary, suggest an alternate communication mechanism.

What else should community members know about you or your positions?

Most people are surprised to learn that I support protons more than electrons. I like being positive in everything I pursue. It’s ok for us to disagree. It’s ok to have a position, learn something new, and then change that position. The important thing to me is that Fedora ultimately remains a fun project.

My favorite color is orange. I use an Android mobile phone. I do not use current Apple hardware, but I am a big fan of the Apple II series and 68k Macintosh series. If you corner me, I will likely talk your ear off about the Apple IIgs or any Macintosh Quadra (particularly the various crazy and horrible operating systems Apple made for the platform).

The post F43 FESCo Elections: Interview with Dave Cantrell (dcantrell) appeared first on Fedora Community Blog.