SSCG 4.0.0 Release Announcement

We are excited to announce the release of SSCG 4.0.0! This major release brings significant new features, modernization improvements, and important breaking changes.

🎉 Highlights

Post-Quantum Cryptography Support

SSCG now supports ML-DSA (Module-Lattice-Based Digital Signature Algorithm) key generation, bringing post-quantum cryptography capabilities to the tool. This ensures future-readiness against quantum computing threats.

ECDSA Key Support

In addition, SSCG now supports ECDSA (Elliptic Curve Digital Signature Algorithm) key generation, providing modern cryptographic options with smaller key sizes and improved performance.

Enhanced Command-Line Interface

The help output has been completely reorganized into logical groups, making it significantly easier to discover and use the various options available.

✨ New Features

  • ML-DSA Key Generation: Generate post-quantum cryptographic keys with OpenSSL 3.5+
    • New command-line arguments for ML-DSA configuration
    • Proper handling of ML-DSA signing semantics (digest-less operation)
  • ECDSA Key Generation: Generate elliptic curve keys (see the example commands after this list)
    • Support for various EC curves
    • Enhanced CLI arguments for ECDSA configuration
  • Enhanced Security: Minimum RSA key strength for the private CA raised to 4096 bits (or matched to the service certificate’s key strength, if that is set higher)
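
For reference, the new key types correspond roughly to the following standalone OpenSSL commands. These are plain openssl invocations, not sscg’s own arguments (sscg drives the equivalent operations through the OpenSSL library; see sscg --help for the exact new argument names):

# ML-DSA key generation (requires OpenSSL 3.5 or later)
openssl genpkey -algorithm ML-DSA-65 -out service-key-mldsa.pem

# ECDSA key generation on the NIST P-256 curve
openssl genpkey -algorithm EC -pkeyopt ec_paramgen_curve:P-256 -out service-key-ecdsa.pem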

🔧 Internal Improvements

  • Refactored Key Creation: Separated key creation logic from certificate creation for better modularity and multi-algorithm support
  • Enhanced Testing:
    • Separate validity tests for RSA, ECDSA, and ML-DSA certificates
    • Extended test coverage for CA and certificate creation with new key types
  • Improved Code Organization: Logging functionality split into its own header and implementation files
  • Better Code Formatting: Updated clang-format configuration for improved consistency

🚨 Breaking Changes

DH Parameters Changes

  • No longer generates a DH parameters file by default (Fixes #91)
    • DH parameters had been generated by default purely for backwards compatibility; this was never the intended behavior
    • Use the --dhparams-file argument if you explicitly need DH parameters (see the example below)
  • Custom DH parameter generation deprecated (Fixes #88)
    • The --dhparams-prime-len argument still works for now, but it is hidden from the documentation
    • This option will be removed in SSCG 5.0
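
If your deployment still consumes the DH parameters file, opting back in looks something like the following (the output path is just an illustration):

# SSCG 4.0.0 only writes DH parameters when explicitly requested
sscg --dhparams-file ./dhparams.pem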

Removed Options

  • Dropped --package argument: This option was deprecated in SSCG 3.0 and has been completely removed in 4.0 as it has been meaningless for years

Build Requirements

  • Minimum OpenSSL version: 3.x: Dropped compatibility with OpenSSL 1.1.x and earlier
  • Updated C standard: Now requires a compiler supporting C17 + GNU extensions (gcc 11+, clang 6+)
  • Removed pkgconfig dependency: Unused dependency has been dropped

🔍 Bug Fixes

  • Fixed NULL pointer dereference issues in tests (Coverity #1648023)
  • Fixed formatting issues throughout the codebase
  • Various code quality improvements

🏗️ Infrastructure

  • CI now tests on Fedora ELN in addition to other platforms
  • CI runs are no longer restricted to the main branch
  • Updated GitHub Actions checkout action to v5
  • Build and test processes improved for container environments

📝 Requirements

  • OpenSSL 3.x or later
  • C compiler with C17 + GNU extensions standard support
  • Meson build system

📥 Getting SSCG 4.0.0

Source tarballs and additional information are available at:


For bug reports and feature requests, please visit our issue tracker.

For information on contributing to SSCG, please see our CONTRIBUTING.md guide.

Full Changelog: sscg-3.0.8…sscg-4.0.0


SSCG 3.0.8: Brought to you in part by Claude

This past week, as part of a series of experiments that I’ve been running to evaluate the viability of AI-supported coding, I decided that I would spend some time with the Cursor IDE, powered by the Claude 4 Sonnet large language model. Specifically, I was looking to expand the test suite of my SSCG project (more detail in this blog entry) to exercise more of the functionality at the low level. I figured that this would be a good way to play around with AI code generation, since the output wouldn’t be impacting the actual execution of the program (and thus shouldn’t be able to introduce any new bugs into it).

The first thing I asked the AI tool to do was to create a unit test for sscg_init_bignum(). I gave it no more instruction than that, in order to see what Claude made of that very basic prompt (which, you may note, also did not include any information about where that function was defined, where I wanted the test to be written, or how it should be executed). I was actually quite surprised and impressed by the results. The first thing it did was inspect the code of the rest of the project to familiarize itself with the layout and coding style of SSCG, and then it proceeded to generate an impressively comprehensive set of tests, expanding on the single, very basic test I had in place previously. It created tests to verify initialization to zero, initialization to non-zero numeric values, and initialization to the largest possible valid value, and it verified that none of these calls resulted in memory leaks or other memory errors. In fact, I only needed to make two minor changes to the resulting code:

  1. The whitespace usage and line length were inconsistent and rather ugly. A quick pass through clang-format resolved that trivially.
  2. There was a small ordering problem at the beginning of execution that resulted in the memory-usage check believing it was successfully freeing more memory than it had allocated, which was concerning until I realized that Claude had initialized the root memory context before starting the memory-usage recording functions.

All in all, this first experiment was decidedly eye-opening. My initial expectation was that I was going to need to write a much more descriptive prompt in order to get a decent output. I had mostly written the abbreviated one to get a baseline for how poorly it would perform. (I considered making popcorn while I watched it puzzle through.) It was quite a shock to see it come up with such a satisfactory answer from so little direction.

Naturally, I couldn’t resist going a bit further, so I decided to see if it could write some more complicated tests. Specifically, I had a test for create_ca() that I had long been meaning to extend to thoroughly examine the subjectAlternativeName (aka SAN) handling, but I kept putting it off because it was complicated to interact with through the OpenSSL API calls. Sounds like a perfect job for a robot, no? Claude, you’re up!

Unfortunately, I don’t have a record of the original prompt I used here, but it was something on the order of “Extend the test in create_ca_test.c to include a comprehensive test of all values that could be provided to subjectAlternativeName.” Again, quite a vague prompt, but it merrily started turning the crank, analyzing the create_ca() function in SSCG and manufacturing a test that provided at least one SAN in each of the possible formats, then verified both that the produced certificate contained the appropriate subjectAlternativeName fields and that the CA certificate provided the appropriate basicConstraints fields, as required by SSCG’s patented algorithm.

However, as I watched it “think”, I noticed something peculiar. It went and ran its test… and it failed. At first, I guessed that it had just made a mistake (it seemed to think so, as it re-evaluated its code), but when it didn’t see a logic problem, it expanded its search to the original function it was testing. It turned out that there was a bug in create_ca(): it was improperly stripping out the slash from URI: type SANs, where it should only have been doing that for IP: types. The AI had discovered this… but it made a crucial mistake here. It went back and attempted to rewrite the test in such a way as to work around the bug, rather than to properly fail on the error and flag it up. I probably would have noticed this when I reviewed the code it produced, but I’m still glad I saw it making that incorrect decision in the chatbot output and interrupted to amend the prompt to tell it not to work around test failures.

That inadvertently led to its second mistake, and I think this one is probably an intentional design choice: when writing tests, Claude wants to write tests that pass, because that way it has a simple mechanism to determine whether what it wrote actually works. Since there was a known failure, when I told Claude not to work around it, the AI then attempted to rewrite the test so that it would store its failures until the end of execution and then print a summary of all of the failed tests rather than fail midway through. I assume it did this so that it could use the expected output text (rather than the error code) as a mechanism to identify if the tests it wrote were wrong, but it still resulted in a much more (needlessly) complicated test function. Initially, I was going to keep it anyway, since the summary output was fairly nice, but unfortunately, it turned out there were also bugs in it; in several places, the “saved” failures were being dropped on the proverbial floor and never reported. A Coverity scan reported these defects and I amended the prompt yet again to instruct it not to try to save failures for later reporting and the final set of test code was much cleaner and easier to review.

All in all, I think this experiment was a success. I won’t go into exhaustive detail on the rest of the 3.0.8 changes, but they followed much the same pattern as the above two examples: the AI provided output surprisingly close to what I needed, and I then examined everything carefully and tweaked it to make sure it worked properly. It vastly shortened the amount of time I needed to spend on generating unit tests and it helped me get much closer to 100% test coverage than I was a week ago. It even helped me identify a real bug, in however roundabout a way. I will definitely be continuing to explore this and I will try to blog on it some more in the future.

Trip Report: Flock to Fedora 2025

Another year, another Fedora contributor conference! This year, Flock to Fedora returned to Prague, Czechia. It’s a beautiful city and always worth taking a long walk around, which is what many of the conference attendees did the day before the conference started officially. Unfortunately, my flight didn’t get in until far too late to attend, but I’m told it was a good time.

Day One: The Dawn of a New Era

After going through the usual conference details, including reminders of the Code of Conduct and the ritual Sharing of the WiFi Password, Flock got into full swing. To start things off, we had the FPL Exchange. Once a frequent occurrence, sometimes only a few short years apart, the exchange this year saw the passing of the torch from Matthew Miller, who had held the position for over eleven years (also known as “roughly as long as all of his predecessors, combined”), to his successor, Jef Spaleta.

In a deeply solemn ceremony… okay, I can’t say that with a straight face. Our new Fedora Project Leader made his entrance wearing a large hotdog costume, eliciting laughter and applause. Matthew then proceeded to anoint the new FPL by dubbing him with a large, Fedora Logo-shaped scepter. Our new JefPL gave a brief overview of his career and credentials and we got to know him a bit.

After that, the other members of FESCo and I (except Michel Lind, who was unable to make it this year) settled in for a Q&A panel with the Fedora community, as we do every year. Some years in the past, we’ve had difficulty filling an hour with questions, but this time was an exception. There were quite a few important topics on people’s minds this time around, so it was a lively discussion. In particular, the attendees wanted to know our stances on the use of generative AI in Fedora. I’ll briefly reiterate what I said in person and during my FESCo election interview this year: My stance is that AI should be used to help create choices. It should never be used to make decisions. I’ll go into that in greater detail in a future blog post.

After a brief refreshment break, the conference launched into a presentation on Forgejo (pronounced For-jay-oh, I discovered). The talk was given by a combination of Fedora and upstream developers, which was fantastic to see. That alone tells me that the right choice was made in selecting Forgejo as our Pagure replacement in Fedora. We got a bit of history around the early development and the fork from Gitea.

Next up was a talk I had been very excited for. The developers of Bazzite, a downstream Fedora Remix focused on video gaming, gave an excellent talk about the Bootc tools underpinning it and how Fedora provided them with a great platform to work with. Bazzite takes a lot of design cues from Valve Software’s SteamOS and is an excellent replacement OS for the sub-par Windows experience on some of the SteamDeck’s competitors, like the Asus Rog Ally series. It also works great on a desktop for gamers and I’ve recommended it to several friends and colleagues.

After lunch, I attended the Log Detective presentation, given by Tomas Tomecek and Jiri Podivin. (Full disclosure: this is the project I’m currently working on.) They talked about how we are developing a tool to help package maintainers quickly process the logs of build failures to save time and get fixes implemented rapidly. They made sure to note that Log Detective is available as part of the contribution pipeline for CentOS Stream now and support for Fedora is coming in the near future.

After that, I spent most of the remainder of the day involved in the “Hallway Track”. I sat down with quite a few Fedora Friends and colleagues to discuss Log Detective, AI in general and various other FESCo topics. I’ll freely admit that, after a long journey from the US that had only gotten me in at 1am that day, I was quite jet-lagged and have only my notes to remember this part of the day by. I went back to my room to grab a quick nap before heading out to dinner at a nearby Ukrainian restaurant with a few old friends.

That evening, Flock held a small social event at an unusual nearby pub. GEEKÁRNA was quite entertaining, with some impressive murals of science fiction, fantasy and videogame characters around the walls. Flock had its annual International Candy Swap event there, and I engaged in my annual tradition of exchanging book recommendations with Kevin Fenzi.

Day Two: To Serve Man

Despite my increasing exhaustion from jet lag, I found the second day of the conference to be exceedingly useful, though I again did not attend a high number of talks. One talk that I made a particular effort to attend was the Fedora Server Edition talk. I was quite interested to hear from Peter Boy and Emmanuel Seyman about the results of the Fedora Server user survey that they conducted over the past year. The big takeaway there was that a large percentage of Fedorans use Fedora Server as a “home lab server” and that this is a constituency that we are under-serving today.

After the session, I sat down with Peter, Emmanuel and Aleksandra Fedorova and we spent a long while discussing some things that we would like to see in this space. In particular, we suggested that we want to see more Cockpit extensions for installing and managing common services. What I pitched would be something like an “App Store” for server applications running in containers/quadlets, with Cockpit providing a simple configuration interface for it. In some ways, this was a resurrection of an old idea. Simplifying the install experience for popular home lab applications could be a good way to differentiate Fedora Server from the other Editions and bring some fresh interest to the project.

After lunch, I spent most of the early afternoon drafting a speech that I would be giving at the evening event, with some help from Aoife Moloney and a few others. As a result, I didn’t see many of the talks, though I did make sure to attend the Fedora Council AMA (Ask Me Anything) session.

The social event that evening was a boat cruise along the Vltava River, which offered some stunning views of the architecture of Prague. As part of this cruise, I also gave a speech to honor Matthew Miller’s time as Fedora Project Leader and wish him well on his next endeavors at Red Hat. Unfortunately, due to technical issues with the A/V system, the audio did not broadcast throughout the ship. We provided Matthew with a graduation cap and gown and Aoife bestowed upon him a rubber duck in lieu of a diploma.

Day Three: Work It!

The final day of the conference was filled with workshops and hacking sessions. I participated in three of these, all of which were extremely valuable.

The first workshop of the day was for Log Detective. Several of the attendees were interested in working with the project and we spent most of the session discussing the API, as well as collecting some feedback around recommendations to improve and secure it.

After lunch, I attended the Forgejo workshop. We had a lengthy (and at times, heated) discussion on how to replace our current Pagure implementation of dist-git with a Forgejo implementation. I spent a fair bit of the workshop advocating for using the migration to Forgejo as an opportunity to modernize our build pipeline, with a process built around merge requests, draft builds and CI pipelines. Not everyone was convinced, with a fair number of people arguing that we should just reimplement what we have today with Forgejo. We’ll see how things go a little further down the line, I suppose.

The last workshop of the day was a session that Zbigniew Jędrzejewski-Szmek and I ran on eliminating RPM scriptlets from packages. In an effort to simplify life for Image Mode and virtualization (as well as keep updates more deterministic), Zbigniew and I have been on a multi-year campaign to remove all scriptlets from Fedora’s shipped RPMs. Our efforts have borne fruit and we are now finally nearing the end of our journey. Zbigniew presented on how systemd and RPM now have native support for creating users and groups, which was one of the last big usages of scriptlets. In this workshop, we solicited help and suggestions on how to clean up the remaining ones, such as the use of the alternatives system and updates for SELinux policies. Hopefully by next Flock, we’ll be able to announce that we’re finished!

With the end of that session came the end of Flock. We packed up our things and I headed off to dinner with several of the Fedora QA folks, then headed back to my room to sleep and depart for the US in the morning. I’d call it time well spent, though in the future I think I’ll plan to arrive a day earlier so I’m not so tired on the first day of sessions.


“No one remembers ‘that one day I spent in front of the TV'”

Yesterday, as I was scrolling on social media I happened across a post. It was one of those basic “nostalgic picture with an excessively generic quote” posts that seems to be “recommended” every seventh entry or so. This one caught my eye, though for reasons I couldn’t immediately describe. It was a picture of kids riding their bicycles down a suburban street with the quote (paraphrased, since I don’t recall the exact wording): “No one remembers ‘that one day I spent in front of the TV'”. Obviously, the message was “go outside and do stuff”, but the arrogance and condescension implicit in the phrasing bothered me. It kept running through my mind all day, and I couldn’t quite put it down.

It wasn’t until the middle of the night last night, as I awoke from some dream or another that it hit me why it bothered me so much. It isn’t just that I’ve spent so much of my life in front of screens (I am a professional software engineer, after all). It’s that subconsciously I was realizing just how many “days in front of the TV” I really do have fond memories of. I think my mind was trying to clue me in to how insulting that meme was. The implication was that my memories are somehow lesser than those taking place outside my home.

I woke up in the middle of the night and couldn’t let it go. I decided that I was going to write this up today, because I feel like I need to chronicle my experience — for my own reasons, yes, but also for anyone else who might feel the same way. I’ve known plenty of people who were belittled or mocked while we were growing up for enjoying videogames and TV more than sports and wandering. So I decided I was going to write down some of the great memories that I have had in my life “in front of the TV”. Some of these dates might be approximate (or flat-out wrong!), but that’s the nature of memory: the feelings persist, even when the details fade.

Christmas, 1993: Under the Christmas tree, my brother and I tore open a package to discover Super Mario Kart. We were ecstatic. We convinced my parents to let us bring the Super Nintendo with us to my grandparents’ house so we could play it. We plugged it into a tiny TV in my grandparents’ bedroom and raced and raced while the grown-ups talked and drank and made merry in the other room. It would be weeks before we ever saw the final track of any of the cups. Months longer before we discovered the Special Cup. We spent hours and hours in front of that game. I’m not sure I’ve ever been closer to my brother than we were then.

Fall, 1998: I have just begun college and I have made some great friends. We share a common interest in Action Quake 2. We form a four-person team named (loosely) after the dorm hall most of them lived in. In the era before ubiquitous laptops, I dragged my desktop computer down to the dorm when we played. Laughing and jeering, we spent many an evening like that. I miss those friends (one in particular who is no longer with us).

Winter, 2001: My parents bought us an Xbox and Halo for Christmas, then took us up to New Hampshire to go skiing. We spent all day on the slopes and then unwound by sipping cocoa and playing Halo co-op well into the night every day of that school vacation. I think we even finished the campaign before I went back to school for the spring semester.

The latter half of the 2000s: Once a week, I gather with some of my closest friends for videogames. We order takeout, complain about nothing, and play Warcraft 2 and 3, Left4Dead 2, Moonbase Commander and more. This tradition continues, waxing and waning, to this day. Participants have come and gone, but the joy endures. Tonight I’ll be playing Baldur’s Gate 3 with some of the original members of this tribe.

February 23rd, 2011: My wife and I are playing Tiger Woods Golf on the Nintendo Wii. She is more than nine months pregnant. The daughter we have tried so hard for years to bring into the world is being stubborn and is a week past her due date. The movement of the golf swing induces her labor and we finally welcome my firstborn into the world the next day. When her little sister is similarly stubborn two and a half years later, the same golf game helps bring her into the world too.

Winter and Spring, 2022: I am sitting in the living room, playing the wonderful cooperative game It Takes Two from beginning to end with each of my daughters. We’re working together and helping each other through it and loving every minute of it. It brings me back to my own childhood.

Last night, after dinner, I asked my eldest if she wanted to do something together. She asked if I wanted to design a theme park together in Planet Coaster. You’re damned right I did.

In the end, it’s not the activity that matters. It’s the people you spend time with that do.

One Week With KDE Plasma Workspaces 6 on Fedora 40 Beta (Vol. 2)

Checking In

It’s been a few days since my first entry in this series. For the most part, things have been going quite smoothly. I have to say, I am liking KDE Plasma Workspaces 6 much better than previous releases (which I dabbled with but admittedly did not spend a significant amount of time using). The majority of what I want to do here Just Works. This should probably not come as a surprise to me, but I’ve been burned before when jumping desktops.

I suppose that should really be my first distinct note here: the transition from GNOME Desktop to KDE Plasma Workspaces has been minimally painful. No matter what, there will always be some degree of muscle memory that needs to be relearned when changing working environments. It’s as true going from GNOME to KDE as it is from Windows to Mac, Mac to ChromeOS and any other major shift. That said, the Fedora Change that prompted this investigation is specifically about the possibility of changing the desktop environment of Fedora Workstation over to using KDE Plasma Workspaces and away from GNOME. As such, I will be keeping in mind some of the larger differences that users would face in such a transition.

Getting fully hooked up

The first few days of this experience, I spent all of my time directly at my laptop, rather than at my usual monitor-and-keyboard setup. This was because I didn’t want to taint my initial experience with potential hardware-specific headaches. My main setup involves a very large 21:9 aspect monitor, an HDMI surround sound receiver and a USB stereo/mic headset connected via a temperamental USB 3.2/Thunderbolt hub and the cheapest USB A/B switch imaginable (I share these peripherals with an overpowered gaming PC). So when I put aside my usual daily driver and plugged my Thinkpad into the USB-C hub, I was prepared for the worst. At the best of times, Fedora has been… touchy about working with these devices.

Let’s start with the good bits: When I first connected the laptop to my docking station, I was immediately greeted by an on-screen display asking me how I wanted to handle the new monitor. Rather than just making a guess between cloning or spanning the desktop, it gave me an easy and visual prompt to do so. Unfortunately, I don’t have a screenshot of this, as after the first time it seems that the system “remembers” the devices and puts them back the way I had them. This is absolutely desirable for the user, but as a reviewer it makes it harder to show it off. (EDIT: After initial publication, I was informed of the meta-P shortcut which allowed me to grab this screenshot)

[Screenshot: the display-configuration prompt shown when connecting a new monitor]

Something else that I liked about the multi-monitor support was the way that the virtual desktop space on the taskbar automatically expanded to include the contents from both screens. It’s a simple thing, but I found that it made it really easy to tell at a glance which desktop I had particular applications running on.

All in all, I want to be clear here: the majority of my experience with KDE Plasma Workspaces has been absolutely fine. So many things work the same (or close enough) to how they work in GNOME that the transition has actually been much easier than I expected. The biggest workflow changes I’ve encountered are related to keyboard shortcuts, but I’m not going to belabor that, having discussed it in the first entry. The one additional keyboard-shortcut complaint I will make is this: using the “meta” key and typing an application name has a strange behavior that gets in my way. It almost behaves identically to GNOME; I tap “meta”, start typing and then hit enter to proceed. But the issue I have with KDE is this: I’m a fast typist and the KDE prompt doesn’t accept <enter> until the visual effect of opening the menu completes. This baffles me, as it accepts all of the other keys. So my muscle memory to launch a terminal by quickly tapping “meta”, typing “term” and hitting enter doesn’t actually launch the terminal. It leaves me at the menu with Konsole sitting there. When I hit enter after the animation completes, it works fine. So while the behavior isn’t wrong, per se, it’s frustrating. The fact that it accepts the other characters makes me think this was a deliberate choice that I don’t understand.

There have been a few other issues, mostly around hardware support. I want to be clear: I’m fully aware that hardware is hard. One issue in particular that has gotten in the way is support for USB and HDMI sound devices in KDE Plasma. I don’t know if it’s specifically my esoteric hardware or a more general problem, but it has been very hard to get KDE to use the correct inputs and outputs. In the case of the HDMI audio receiver, I still haven’t been able to get KDE to present it as an output option in the control panel. It connects to the receiver and treats it as a very basic 720p video output device, but it just won’t recognize it as an audio output device. My USB stereo headset with mic has also been more headache than headset: after much trial and error, I’ve managed to identify the right output to send stereo output to it, but no matter what I have fiddled with, it does not recognize the microphone.

More issues on the hardware front are related to having two webcam devices available. KDE properly detects both the built-in camera on the laptop as well as the external webcam I have clipped to the top of my main monitor, but it seems to have difficulty switching between them. I’m not yet 100% sure how much of this is a KDE problem and how much a Firefox problem, but it is frustrating. Sometimes I’ll select my external webcam and it will still be taking input from the built-in camera. Also, it seems to always show two entries for both devices. I need to do more digging here, but I anticipate that I’ll be filing a bug report once I gather enough data.

Odds and Ends

I have mixed feelings about KDE’s clipboard applet in the toolbar. On the one hand, I can certainly see the convenience of digging into the clipboard history, particularly if you accidentally drag-select something and replace the clipboard copy you intended to keep. On the other hand, as a heavy user of Bitwarden who regularly copies passwords1 out of the wallet and into other applications, the fact that all of the clipboard contents are easily viewable in plaintext to anyone walking by if I forget to lock my screen for a few seconds is quite alarming. I’m pretty sure I’ll either have to disable this applet or build a habit of clearing it any time I copy a password. Probably the former, as I don’t like the fact that I have to call up and make the plaintext visible first in order to delete it without clearing the entire history anyway.

Conclusion

This will probably seem odd after a post that mostly contained complaints and nitpicks, but I want to reiterate: my experience over the last several days has actually been quite good. When dealing with a computer, I consider “it was boring” to be the highest of praise. Using KDE has not been a life-altering experience. It has been a stable, comfortable environment in which to get work done. Have I experienced some issues? Absolutely. None of them are deal-breakers, though the audio issues are fairly annoying. My time in the Fedora Project has shown me that hardware issues inevitably get fixed once they are noticed, so I’m not overly worried.

As for me? I’m going to stick around in KDE for a while and see how things play out. If you’re reading this and you’re curious, I’ll happily direct you to the Fedora KDE Spin for the Live ISO or the Kinoite installer if, like me, you enjoy an atomic update environment. Make sure to select “Show Beta downloads” to get Plasma 6!

  1. I generate high-entropy, unique random passwords for everything. Don’t you? ↩︎

One Week With KDE Plasma Workspaces 6 on Fedora 40 Beta (Vol. 1)

Why am I doing this?

As my readers may be aware, I have been a member of the Fedora Engineering Steering Committee (FESCo) for over a decade. One of the primary responsibilities of this nine-person body is to review the Fedora Change Proposals submitted by contributors and provide feedback as well as being the final authority as to whether those Changes will go forth. I take this responsibility very seriously, so when this week the Fedora KDE community brought forth a Change Proposal to replace GNOME Desktop with KDE Plasma Workspaces as the official desktop environment in the Fedora Workstation Edition, I decided that I would be remiss in my duties if I didn’t spend some serious time considering the decision.

As long-time readers of this blog may recall, I was a user of the KDE desktop environment for many years, right up until KDE 4.0 arrived. At that time, (partly because I had recently become employed by Red Hat), I opted to switch to GNOME 2. I’ve subsequently continued to stay with GNOME, even through some of its rougher years, partly through inertia and partly out of a self-imposed responsibility to always be running the Fedora/Red Hat premier offering so that I could help catch and fix issues before they got into users’ and customers’ hands. Among other things, this led to my (fairly well-received) series of blog posts on GNOME 3 Classic. As it has now been over ten years and twenty(!) Fedora releases, I felt like it was time to give KDE Plasma Workspaces another chance with the release of the highly-awaited version 6.0.

How will I do this?

I’ve committed to spending at least a week using KDE Plasma Workspaces 6 as my sole working environment. This afternoon, I downloaded the latest Fedora Kinoite installer image and wrote it to a USB drive.1 I pulled out a ThinkPad I had lying around and went ahead with the install process. I’ll describe my setup process a bit below, but (spoiler alert) it went smoothly and I am typing up this blog entry from within KDE Plasma.

What does my setup look like?

I’m working from a Red Hat-issued ThinkPad T490s, a four-core Intel “Whiskey Lake” x86_64 system with 32 GiB of RAM and embedded Intel UHD 620 graphics. Not a powerhouse by any means, but only about three or four years old. I’ve wiped the system completely and done a fresh install rather than install the KDE packages by hand onto my usual Fedora Workstation system. This is partly to ensure that I get a pristine environment for this experiment and partly so I don’t worry about breaking my existing system.

Thoughts on the install process

I have very little to say about the install process. It was functionally identical to installing Fedora Silverblue, with the minimalist Anaconda environment providing me some basic choices around storage (I just wiped the disk and told it to repartition it however it recommends) and networking (I picked a pithy hostname: kuriosity). That done, I hit the “install” button, rebooted and here we are.

First login

Upon logging in, I was met with the KDE Welcome Center (Hi Konqi!), which I opted to proceed through very thoroughly, hoping that it would provide me enough information to get moving ahead. I have a few nitpicks here:

First, the second page of the Welcome Center (the first with content beyond “this is KDE and Fedora”) was very sparse, saying basically “KDE is simple and usable out of the box!” and then using up MOST of its available screen real estate with a giant button directing users to the Settings app. I am not sure what the goal is here: it’s not super-obvious that it is a button, but if you click on it, you launch an app that is about as far from “welcoming” as you can get (more on that later). I think it might be better to just have a little video or image here that just points at the settings app on the taskbar rather than providing an immediate launcher. It both disrupts the “Welcome” workflow and can make less-technical users feel like they may be in over their heads.

[Screenshot: the Welcome Center page with the large button leading to the Settings app]

I actually think the next page is a much better difficulty ramp; it presents some advanced topics that they might be interested in, but it doesn’t look quite as demanding of them and it doesn’t completely take the user out of the workflow.

[Screenshot: the following Welcome Center page, introducing some more advanced topics]

Next up on the Welcome Center was something very welcome: an introduction to Discover (the “app store”). I very much like this (and other desktop environments could absolutely learn from it). It immediately provides the user with an opportunity to install some very popular add-ons.2

[Screenshot: the Welcome Center page introducing Discover and popular add-ons]

The next page was a bit of a mixed bag for me. I like that the user is given the option to opt-in to sharing anonymous user information, but I feel like the slider and the associated details it provided are probably a bit too much for most users to reasonably parse. I think this can probably be simplified to make it more approachable (or at least bury the extra details behind a button; I had to extend the window from its default size to get a screenshot).

[Screenshot: the anonymous usage-data opt-in page with its details slider]

At the end of the Welcome Center was a page that gave me pause: a request for donations to the KDE project. I’m not sure this is a great place for it, since the user hasn’t even spent any time with the environment at all yet. It seems a bit too forward with asking for donations. I’m not sure where a better place is, but getting begged for spare change minutes after installing the OS doesn’t feel right. I think that if we were to make KDE the flagship desktop behind Fedora Workstation, this would absolutely have to come out. It gives a bad first impression. A far better place to leave things would be the preceding page:

[Screenshot: the Welcome Center page that precedes the donation request]

OK, so let’s use it a bit!

With that out of the way, I proceeded to do a bit of setup for personal preferences. I installed my preferred shell (zsh) and some assorted CLI customizations for the shell, vi, git, etc. This was identical to the process I would have followed for Silverblue/GNOME, so I won’t go into any details here. I also have a preference for touchpad scrolling to move the page (like I’m swiping a touch-screen), so I set that as well. I was confused for a bit as it seemed that wasn’t having an effect, but I realized I had missed that “touchpad” was a separate settings page from “mouse” and had flipped the switch on the wrong devices. Whoops!

In the process of setting things up to my liking, I did notice one more potential hurdle for newcomers: the default keyboard shortcuts for working with desktop workspaces are different from GNOME, MacOS and Windows 11. No matter which major competitor you are coming from, this will cause muscle-memory stumbles. It’s not that any one approach is better than another, but the fact that they are all completely different makes me sigh and forces me to think about how I’m interacting with the system instead of what I want to do with it. Unfortunately, KDE did not make figuring this out easy on me; even when I used the excellent desktop search feature to find the keyboard shortcut settings, I was presented with a list of applications that did not clearly identify which one might contain the system-wide shortcuts. By virtue of past experience with KDE, I was able to surmise that the KWin application was the most likely place, but the settings app really didn’t seem to want to help me figure that out. Then, when I selected KWin, I was presented with dozens of pages of potential shortcuts, many of which were named similarly to the ones I wanted to identify. This was simply too many options with no clear way to sort them. I ended up resorting to trying random combinations of ctrl, alt, meta and shift with arrow keys until I eventually stumbled upon the correct set.

Next, I played around a bit with Discover, installing a pending firmware update for my laptop (which hadn’t been turned on in months). I also enabled Flathub and installed Visual Studio Code to see how well Flatpak integration works and also to have an app that I know doesn’t natively use Wayland. That was how I discovered that my system had defaulted to a 125% fractional scaling setup. In Visual Studio Code, everything looked very slightly “off” compared to the rest of the system. Not in any way I could easily put my finger on, until I remembered how badly fractional scaling behaved on my GNOME system. I looked into the display settings and, sure enough, I wasn’t at an integer scaling value. Out of curiosity, I played around with the toggle for whether to have X11 apps scale themselves or for the system to do it and found that the default “Apply scaling themselves” was FAR better looking in Visual Studio Code. At the end of the day, however, I decided that I preferred the smaller text and larger available working area afforded me by setting the scaling back to 100%. That said, if my eyesight were poorer or I needed to sit further away from the screen, I can definitely see the advantages to the fractional scaling and I was very impressed by how sharp it managed to be. Full marks on that one!

I next went to play around in Visual Studio Code with one of my projects, but when I tried to git clone it, I hit an issue where it refused my SSH key. Digging in, I realized that KDE does not automatically check for keys in the default user location (~/.ssh) and prompt for their passphrases. I went ahead and used ssh-add to manually import them into the SSH keyring and moved along. I find myself going back and forth on this; on the one hand, there’s a definite security tradeoff inherent in allowing the desktop to prompt (and offer to save) the passphrase in the desktop keyring (encrypted by your login password). I decline to save mine persistently, preferring to enter it each time. However, there’s a usability tradeoff to not automatically at least launching an askpass prompt. In any case, it’s not really an issue for me to make this part of my usual toolbox entry process, but I’m a technical user. Newbies might be a bit confused if they’re coming from another environment.
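
For anyone hitting the same thing, manually loading your keys into the agent for the session is a one-liner (this only covers the default key locations under ~/.ssh):

# Load the default SSH identities into the running agent, prompting for passphrases as needed
ssh-add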

I then went through the motions of getting myself signed in to the various messaging services that I use on a daily basis, including Fedora’s Matrix. Once signed in there via Firefox, I was prompted to enable notifications, which I did. I then discovered the first truly sublime moment I’ve had with Plasma Workspaces: the ephemeral notifications provided by the desktop. The way they present themselves off to the side, with a vibrant preview window and a progress countdown until they vanish, is just *chef’s kiss*. If I take nothing else away from this experience, it’s that it is possible for desktop notifications to be beautiful. Other desktops need to take note here.

I think this is where I’m going to leave things for today, so I’ll end with a short summary: As a desktop environment, it seems to do just about everything I need it to do. It’s customizable to a fault: it’s got so many knobs to twist that it desperately needs a map (or perhaps a beginner vs. expert view of the settings app). Also, the desktop notifications are like a glass of icy lemonade after two days lost in the desert.

  1. This was actually my first hiccough: I have dozens of 4 GiB thumbdrives lying around, but the Kinoite installer was 4.2 GiB, so I had to go buy a new drive. I’m not going to ding KDE for my lack of preparedness, though! ↩︎
  2. Unfortunately I hit a bug here; it turns out that all of those app buttons will just link to the updates page in Discover if there is an update waiting. I’m not sure if this is specific to Kinoite yet. I’ll be investigating and filing a ticket about it in the appropriate place. ↩︎

Sausage Factory: Fedora ELN Rebuild Strategy

The Rebuild Algorithm (2023 Edition)

Slow and Steady Wins the Race

The Fedora ELN SIG maintains a tool called ELNBuildSync1 (or EBS) which is responsible for monitoring traffic on the Fedora Messaging Bus and listening for Koji tagging events. When a package is tagged into Rawhide (meaning it has passed Fedora QA Gating and is headed to the official repositories), EBS checks whether it’s on the list of packages targeted for Fedora ELN or ELN Extras and enqueues it for the next batch of builds.

A batch begins when there are one or more enqueued builds and at least five wallclock seconds have passed since the most recent build was enqueued. This allows EBS to capture events such as a complete side-tag being merged into Rawhide at once; it will always rebuild those together in a batch. Once a batch begins, all other messages are enqueued for the following batch. When the current batch is complete, a new batch will begin.

The first thing that is done when processing a batch is to create a new side-tag derived from the ELN buildroot. Into this new target, EBS will tag most of the Rawhide builds. It will then wait until Koji has regenerated the buildroot for the batch tag before triggering the rebuild of the batched packages. This strategy avoids most of the ordering issues (particularly bootstrap loops) inherent in rebuilding a side-tag, because we can rely on the Rawhide builds having already succeeded.

Once the rebuild is ready to begin, EBS interrogates Koji for the original git commit used to build each Rawhide package (in case git has seen subsequent, unbuilt changes). The builds are then triggered in the side-tag concurrently. EBS monitors these builds for completion. If one or more builds in a batch fail, EBS will re-queue them for another rebuild attempt. This repeats until the same set of failures occurs twice in a row. Once all of the rebuild attempts have concluded, EBS tags all successful builds back to ELN and removes the side-tag. Then it moves on to preparing another batch, if there are packages waiting.
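
As a rough illustration, a single EBS batch is loosely equivalent to the following manual Koji workflow. This is only a sketch of the idea, not what EBS literally runs, and the side-tag, package, NVR and commit names here are invented:

# Create a side-tag derived from the ELN buildroot
fedpkg request-side-tag --base-tag eln-build

# Reuse the already-gated Rawhide build in the new buildroot
koji tag-build eln-build-side-12345 examplepkg-1.2-3.fc43

# Wait for Koji to regenerate the buildroot before rebuilding
koji wait-repo eln-build-side-12345 --build=examplepkg-1.2-3.fc43

# Rebuild the package from the same git commit that Rawhide used
koji build --nowait eln-build-side-12345 git+https://src.fedoraproject.org/rpms/examplepkg.git#deadbeef

# Once the whole batch has concluded, tag the successful builds back into ELN
koji tag-build eln examplepkg-1.2-3.eln146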

History

In its first incarnation, ELNBuildSync (at the time known as DistroBuildSync) was very simplistic. It listened for tag events on Rawhide, checked them against its list and then triggered a build in the ELN target. Very quickly, the ELN SIG realized that this had significant limitations, particularly in the case of packages building in side-tags (which was becoming more common as the era of on-demand side-tags began). One of the main benefits of side-tags is the ability to rebuild packages that depend on one another in the proper order; this was lost in the BuildSync process and many times builds were happening out of order, resulting in packages with the same NVR as Rawhide but incorrectly built against older versions of their dependencies.

Initially, the ELN SIG tried to design a way to exactly mirror the build process in the side-tags, but that resulted in its own new set of problems. First of all, it would be very slow; the only way to guarantee that side-tags are built against the same version of their dependencies as the Rawhide version would be to perform all of those builds serially. Secondly, even determining the order of operations in a side-tag after it already happened turned out to be prohibitively difficult.

Instead, the ELN SIG recognized that the Fedora Rawhide packagers had already done the hardest part. Rather than trying to replicate their work in an overly-complicated manner, the tool would simply take advantage of the existing builds. Now, prior to triggering a build for ELN, the tool would first tag the current Rawhide builds into ELN and wait for them to be added to the Koji buildroot. This solved about 90% of the problems in a generic manner without engineering an excessively complicated side-tag approach. Naturally, it wasn’t a perfect solution, but it got a lot further. (See “Why are some packages not tagged into the batch side-tag?” below for more details.)

The most recent modification to this strategy came about as CentOS Stream 10 started to come into the picture. With the intent to bootstrap CS 10 initially from ELN, tagging Rawhide packages to the ELN tag suddenly became a problem, as CS 10 needs to use that tag event as its trigger. The solution here was not to tag Rawhide builds into Fedora ELN directly, but instead to create a new ELN side-tag target where we could tag them, build the ELN packages there and then tag the successful builds into ELN. As a result, CS 10 builds are only triggered on ELN successes.

Frequently Asked Questions

Why does it sometimes take a long time for my package to be rebuilt?

Not all batches are created equal. Sometimes there will be an ongoing batch with one or more packages whose builds take a very long time to complete (e.g. gcc, firefox, chromium). This can lead to up to a day’s lag in your package even getting enqueued. Even if your package was part of the same batch, it will still wait for all packages in the batch to complete before the tag occurs.

Why do batches not run in parallel?

Simply put, until the previous batch is complete, there’s no way to know if a further batch relies on one or more changes from the previous batch. This is a problem we’re hoping might have a solution down the line, if it becomes possible to create “nested” side-tags (side-tags derived from another side-tag instead of a base tag). Today however, serialization is the only safe approach.

Why are some packages not tagged into the batch side-tag?

Some packages have known incompatibilities, such as libllvm and OCaml. The libraries produced by the ELN build and the Rawhide build are API- or ABI-incompatible and therefore cannot be tagged in safely. We have to rely on the previous ELN version of the build in the buildroot.

Why do you not tag successes back into ELN immediately?

Not all ELN packages are built by the auto-rebuilder. Several are maintained individually for various reasons (the kernel, ceph, crypto-policies, etc.). We don’t want to tag a partial batch in out of concern that this could break these other builds.

  1. Technically, the repository is called DistroBuildSync because originally it was meant to serve multiple purposes of rebuilding ELN from Rawhide and also syncing builds for CentOS Stream and RHEL. However, the latter two ended up forking off very significantly, so we renamed ours to ELNBuildSync to reduce confusion between them. It unfortunately retains the old name for the repo at the moment due to deployment-related reasons. ↩︎

Sausage Factory: Modules – Fake it till you make it

Module Masquerade

Last week during Flock to Fedora, we had a discussion about what is needed to build a module outside of the Fedora infrastructure (such as through COPR or OBS). I had some thoughts on this and so I decided to perform a few experiments to see if I could write up a set of instructions for building standalone modules.

To be clear, the following is not a supported way to build modules, but it does work and covers most of the bases.

Step 1: Creating module-compatible RPMs

RPMs built as part of a module within Fedora’s Module Build Service are slightly different from RPMs built traditionally. In MBS, all RPMs built have an extra header injected into them: ModularityLabel. This header contains information about what module the RPM belongs to and is intended to help DNF avoid situations where an update transaction would attempt to replace a modular RPM with a non-modular one (due to a transient unavailability of the module metadata). This step may not be absolutely necessary in many cases. If you are trying to create a module from RPMs that you didn’t build, you can probably get away with skipping this step, provided you can tolerate potentially unpredictable behavior if you encounter a broken repo mirror.

To create a module-compatible RPM, add the following line to your spec file for each binary RPM you are producing:

ModularityLabel: <arbitrary string>

Other than that new RPM label, you don’t need to do anything else. Just build your RPMs and then create a yum repository using the createrepo_c tool, as shown below. The ModularityLabel can be any string at all. In Fedora, we have a convention of using name:stream:version:context to indicate from which build the RPM originally came, but this is not to be relied upon. It may change at any time and it also may not accurately reflect the module in which the RPM currently resides, due to component-reuse in the Module Build System.
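
Creating the repository itself is a single command (the directory path here is just a placeholder):

# Turn a directory of built RPMs into a yum repository
createrepo_c /path/to/repo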

Step 2: Converting the repo into a module

Now comes the complicated part: we need to construct the module metadata that matches the content you want in your module and then inject it into the yum repo you created above. This means that we need to generate the appropriate module metadata YAML for this repository first.

Fortunately, for this simple approach, we really only need to focus on a few bits of the module metadata specification. First, of course, we need to specify all of the required attributes: name, stream, version, context, summary, description and licenses. Then we need to look at what we need for the artifacts, profiles and api sections.

Artifacts are fairly straightforward: you need to include the NEVRA of every package in the repository that you want to be exposed as part of the module stream. The NEVRA format is of the form examplepackage-0:0.1-5.x86_64.

Once the artifacts are all listed, you can decide if you want to create one or more profiles and if you want to identify the public API of the module.

It is always recommended to check your work with the modulemd-validator binary included in the libmodulemd package. It will let you know if you have missed anything that will break the format.
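
Pulling those pieces together, a minimal modules.yaml for a single-package module might look roughly like this. All of the names, the version, the context and the NEVRA below are invented for illustration; consult the modulemd specification for the full set of available fields:

cat > modules.yaml << 'EOF'
document: modulemd
version: 2
data:
  name: examplemodule
  stream: mystream
  version: 20190816000000
  context: deadbeef
  summary: An example standalone module
  description: >-
    A module stream assembled outside of the Fedora Module Build Service.
  license:
    module:
      - MIT
  artifacts:
    rpms:
      - examplepackage-0:0.1-5.x86_64
  profiles:
    everything:
      rpms:
        - examplepackage
  api:
    rpms:
      - examplepackage
EOF

# Sanity-check the metadata before injecting it into the repository
modulemd-validator modules.yaml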

Shortcut

While drafting this walkthrough, I ended up writing a fairly simple python3 tool called repo2module. You can run this tool against a repository created as in Step 1 and it will output most of what you need for module metadata. It defaults to including everything in the api section and also creating a default profile called everything that includes all of the RPMs in the module.

Step 3: Injecting the module metadata into the repository

Once the module metadata is ready for inclusion, it can be copied into the repository from Step 1 using the following command:

modifyrepo_c --mdtype=modules modules.yaml /path/to/repodata

With that done, add your repository to your DNF/Yum configuration (or merge it into a bigger repository with mergerepo_c, provided you have version 0.13.2 or later), run dnf module list, and you should see your new module there!
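
For a quick local test, a throwaway repo definition is enough (the repo ID, name and path below are placeholders):

cat > /etc/yum.repos.d/mymodule.repo << 'EOF'
[mymodule]
name=My standalone module repo
baseurl=file:///path/to/repo
enabled=1
gpgcheck=0
EOF

# List module streams from just this repository
dnf --repoid=mymodule module list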

 

Edit 2019-08-16: Modified the section on ModularityLabel to recognize that there is no defined syntax and that any string may be used.

Flock 2019 Trip Report

Just flew back from Flock to Fedora in Budapest, Hungary and boy are my arms tired! As always, it was an excellent meeting of the minds in Fedora. I even had the opportunity to meet my Outreachy intern, Niharika Shrivastava!

Day One – Thursday

As usual, the conference began with Matthew Miller’s traditional “State of Fedora” address wherein he uses pretty graphs to confound and amaze us. Oh, and reminds us that we’ve come a long way in Fedora and we have much further to go together, still.

Next was a keynote by Cate Huston of Automattic (now the proud owners of both WordPress and Tumblr, apparently!). She talked to us about the importance of understanding when a team has become dysfunctional and some techniques for getting back on track.

After lunch, Adam Samalik gave his talk, “Modularity: to modularize or not to modularize?”, describing for the audience some of the cases where Fedora Modularity makes sense… and some cases where other packaging techniques are a better choice. This was one of the more useful sessions for me. Once Adam gave his prepared talk, the two of us took a series of great questions from the audience. I hope that we did a good job of disambiguating some things, but time will tell how that works out. We also got some suggestions for improvements we could make, which were translated into Modularity Team tickets: here and here.

Next, Merlin Mathesius, our official Modularity Wizard, gave his talk on “Tools for Making Modules in Fedora”, focusing on various resources that he and others have created for simplifying the module packaging process.

Next, I rushed off to my annual “State of the Fedora Server” talk. This was a difficult one for me. Fedora Server has, for some time now, been operating as a largely one-man (me) effort of just making sure that the installation media continues to function properly. It has seen very little innovation and is failing in its primary mission: to provide a development ground for the next generation of open-source servers. I gave what amounted to an obituary speech and then opened the floor to discussion. The majority of the discussion came down to this: projects can only survive if people want to work on them and there really isn’t a clear idea of what that would be in the server space. Fedora Server is going to need to adapt or dissipate. More on that in a future update.

Later that afternoon, I attended Brendan Conoboy’s talk “Just in Time Transformation” where he discussed the internal process changes that Red Hat went through in order to take Fedora and deliver Red Hat Enterprise Linux 8. Little of this was new to me, naturally, having lived through it (with scars to show), but it was interesting to hear how the non-Red Hat attendees perceived it.

For the last event of the first day, we had a round of Slideshow Karaoke. This was a lot of fun and quite hilarious. It was a great way to round out the start of Flock.

Day Two – Friday

The second day of Flock opened with Denise Dumas, VP of Platform Engineering at Red Hat, giving a talk about “Fedora, Red Hat and IBM”. Specifically: How will the IBM acquisition affect Fedora? Short answer: it won’t. Best line of this talk: “If you want to go fast, go alone. If you want to go far, go together.”

After that came a lively panel discussion where Denise Dumas, Aleksandra Fedorova, Brendan Conoboy and Paul Frields talked to us about the relationship between Fedora and Red Hat Enterprise Linux 8, particularly where it diverged and a little of what is coming next for that relationship.

After lunch, I attended Pierre-Yves Chibon’s talk on Gating rawhide packages. Now that gating is live and in production, interest was very high; many attendees were unable to find seats and stood around the walls. The short lecture described the plans to add more tests and to support multi-package gating.

Next up, I attended Alexander Bokovoy’s talk on the “State of Authentication and Identity Management in Fedora”. Alexander discussed a lot of deep technical topics, including the removal of old, insecure protocols from Fedora and the status of authentication tools like SSSD and Kerberos in the distribution.

I went to yet another of Brendan Conoboy’s talks after that, this time on “What Stability Means and How to Do Better”. The focus of this talk was that “stability” means many different things to different people. Engineers tend to focus on stability meaning “it doesn’t crash”, but stability can mean everything from that, through “backwards-compatibility of ABIs”, all the way to “the user experience remains consistent”. This was quite informative and I think the attendees got a lot out of it. I did.

The next talk I attended was given by Niharika Shrivastava (my aforementioned Outreachy intern) and Manas Mangaonkar on “Students in developing nations and FOSS contribution limitation”. It provided a very interesting (and, at times, disturbing) perspective on how open-source contribution is neglected and even dismissed by many Indian universities and businesses. Clearly we (the FOSS community) need to expend more resources in this area.

Friday concluded with a river cruise along the Danube, which was a nice chance to unwind and hobnob with my fellow Fedorans. I got a few pictures, chatted with some folks I hadn’t seen in a long time, and was introduced to several new faces (always wonderful to see!).

Day Three – Saturday

By the time Saturday rolled around, jet-lag was catching up to me, as well as some very long days, so I was somewhat tired and zombie-like. I’ve been told that I participated in a panel during the “Fedora Summer Coding 2019 Project Showcase and Meetup”, but I have few memories of the event. Kidding aside, it was a wonderful experience. Each of the interns from Google Summer of Code, Google Code-In and Outreachy gave a short presentation of the work they had been doing over the summer. I was extremely proud of my intern, Niharika, who gave an excellent overview of the translation work that she’s been working on for the last two months. The other projects were exciting as well and I look forward to their completion. The panel went quite well and we got some excellent questions. All in all, this year was one of my most positive experiences with internships and I hope very much that it’s setting the stage for the future as well.

After lunch came the headsman… I mean the “Modularity & Packager Experience Birds-Of-A-Feather” session. We started the session by spending fifteen minutes listing all of our gripes with the current state of Modularity packaging. These were captured on a poster board and later by Langdon White into a Google Doc. We then voted, unconference-style, on the issues that people most wanted to see addressed. The top four subjects were selected and we allocated a quarter of the remaining session time for each of them.

I personally missed the first topic as I ended up in a sidebar discussing internationalization plans with one of our Fedora Translation Team members, who had been following the work that Niharika and I have been doing in that space.

The other topics that were discussed at length were how to perform offline local module builds, how to create documentation and tooling that let non-MBS services like COPR and OBS build modules, and how to deal with rolling defaults and rolling dependencies. Langdon White took additional notes and is, I believe, planning to present a report on them as well, which I will link to once it becomes available.

This was unquestionably the most useful session at Flock for me. We were able, in a fairly short period of time, to enumerate the problems before us and work together to come up with some concrete steps for addressing them. If this had been the only session I attended at Flock, it would still have been worth the price of travel.

Day Four – Sunday

Due to a slight SNAFU scheduling my return flight, I had to leave at 11:00 in the morning to catch my plane. I did, however, spend a while that morning playing around with some ideas on how to offer simple module creation to OBS and COPR. I think I made some decent progress, which I’ll follow up on in a future blog post.

Conclusion

As always, Flock to Fedora was an excellent conference. Every year, I find that it revitalizes me and inspires me to get back to work and turn the ideas we brainstormed there into reality. It’s going to be an interesting year!

Sausage Factory: Advanced module building in Fedora

First off, let me be very clear up-front: normally, I write my blog articles to be approachable by readers of varying levels of technical background (or none at all). This will not be one of those. This will be a deep dive into the very bowels of the sausage factory.

This blog post is a continuation of the Introduction to building modules in Fedora entry I wrote last month. It will assume a familiarity with all of the concepts discussed there.

Analyzing a more complicated module

Last time, we picked an extremely simple package to create. The talloc module needed to contain only a single RPM, since all the dependencies necessary both at build-time and runtime were available from the existing base-runtime, shared-userspace and common-build-dependencies modules.

This time, we will pick a slightly more complicated example that will require exploring some of the concepts around building with package dependencies. For this purpose, I am selecting the sscg package (one of my own and discussed previously on this blog in the article “Self-Signed SSL/TLS Certificates: Why they are terrible and a better alternative”).

We will start by analyzing sscg’s dependencies. As you probably recall from the earlier post, we can do this with dnf repoquery:

dnf repoquery --requires sscg.x86_64 --resolve

Which returns with:

glibc-0:2.25-6.fc26.i686
glibc-0:2.25-6.fc26.x86_64
libpath_utils-0:0.2.1-30.fc26.x86_64
libtalloc-0:2.1.9-1.fc26.x86_64
openssl-libs-1:1.1.0f-4.fc26.x86_64
popt-0:1.16-8.fc26.x86_64

and then also get the build-time dependencies with:

dnf repoquery --requires --enablerepo=fedora-source --enablerepo=updates-source sscg.src --resolve

Which returns with:

gcc-0:7.1.1-3.fc26.i686
gcc-0:7.1.1-3.fc26.x86_64
libpath_utils-devel-0:0.2.1-30.fc26.i686
libpath_utils-devel-0:0.2.1-30.fc26.x86_64
libtalloc-devel-0:2.1.9-1.fc26.i686
libtalloc-devel-0:2.1.9-1.fc26.x86_64
openssl-devel-1:1.1.0f-4.fc26.i686
openssl-devel-1:1.1.0f-4.fc26.x86_64
popt-devel-0:1.16-8.fc26.i686
popt-devel-0:1.16-8.fc26.x86_64

So let’s start by narrowing down the set of dependencies we already have by comparing them to the three foundational modules. The base-runtime module provides gcc, glibc, openssl-libs, openssl-devel, popt, and popt-devel. The shared-userspace module provides libpath_utils and libpath_utils-devel as well, which leaves us with only libtalloc as an unsatisfied dependency. Wow, what a convenient and totally unexpected outcome when I chose this package at random! Kidding aside, in most real-world situations this would be the point at which we would start recursively going through the leftover packages and seeing what their dependencies are. In this particular case, we know from the previous article that libtalloc is self-contained, so we will only need to include sscg and libtalloc in the module.
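
If we did need to recurse, it would just be a matter of repeating the same pair of queries for each leftover package; for libtalloc, that would look like:

dnf repoquery --requires libtalloc.x86_64 --resolve
dnf repoquery --requires --enablerepo=fedora-source --enablerepo=updates-source libtalloc.src --resolve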

As with the libtalloc example, we now need to clone the dist-git repositories of both packages and determine the git hashes that we intend to use for building the sscg module. See the previous blog post for details on this.
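
As a quick refresher, that boils down to something like the following (the f26 branch here is an assumption; use whichever branch you are actually building from):

fedpkg clone --anonymous libtalloc
fedpkg clone --anonymous sscg
cd sscg
git rev-parse origin/f26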

Creating a module with internal dependencies

Now let’s set up our git repository for our new module:

mkdir sscg && cd sscg
touch sscg.yaml
git init
git add sscg.yaml
git commit -m "Initial setup of the module"

And then we’ll edit the sscg.yaml the same way we did for the libtalloc module:

document: modulemd
version: 1
data:
  summary: Simple SSL certificate generator
  description: A utility to aid in the creation of more secure "self-signed" certificates. The certificates created by this tool are generated in a way so as to create a CA certificate that can be safely imported into a client machine to trust the service certificate without needing to set up a full PKI environment and without exposing the machine to a risk of false signatures from the service certificate.
  stream: ''
  version: 0
  license:
    module:
    - GPLv3+
  references:
    community: https://github.com/sgallagher/sscg
    documentation: https://github.com/sgallagher/sscg/blob/master/README.md
    tracker: https://github.com/sgallagher/sscg/issues
  dependencies:
    buildrequires:
      base-runtime: f26
      shared-userspace: f26
      common-build-dependencies: f26
      perl: f26
    requires:
      base-runtime: f26
      shared-userspace: f26
  api:
    rpms:
    - sscg
  profiles:
    default:
      rpms:
      - sscg
  components:
    rpms:
      libtalloc:
        rationale: Provides a hierarchical memory allocator with destructors. Dependency of sscg.
        ref: f284a27d9aad2c16ba357aaebfd127e4f47e3eff
        buildorder: 0
      sscg:
        rationale: Purpose of this module. Provides certificate generation helpers.
        ref: d09681020cf3fd33caea33fef5a8139ec5515f7b
        buildorder: 1

There are several changes from the libtalloc example in this modulemd, so let’s go through them one at a time.

The first you may notice is the addition of perl in the buildrequires: dependencies. This is actually a workaround at the moment for a bug in the module-build-service where not all of the runtime requirements of the modules specified as buildrequires: are properly installed into the buildroot. It’s unfortunate, but it should be fixed in the near future and I will try to remember to update this blog post when it happens.

You may also notice that the api section only includes sscg and not the packages from the libtalloc component. This is intentional. For the purposes of this module, libtalloc satisfies some dependencies for sscg, but as the module owner I do not want to treat libtalloc as a feature of this module (nor, by extension, to support its use for anything other than the portions of the library used by sscg). It remains possible for consumers of the module to link against it and use it for their own purposes, but they are doing so without any guarantee that the interfaces will remain stable or even be present in the next release of the module.

Next on the list is the addition of the entirely-new profiles section. Profiles are a way to indicate to the package manager (DNF) that some packages from this module should be installed automatically when the module is activated, if a certain system profile is enabled. The ‘default’ profile takes effect if no other profile is explicitly set. So in this case, if a user ran dnf module install sscg, the expectation would be that this module is activated and the sscg package (along with its runtime dependencies) is installed immediately.
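
For example, if we later wanted to offer a second profile alongside the default one, the profiles section would simply grow another named entry. The cli-tools profile below is purely hypothetical and is shown only to illustrate the structure:

  profiles:
    default:
      rpms:
      - sscg
    # hypothetical extra profile, for illustration only
    cli-tools:
      rpms:
      - sscg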

Lastly, under the RPM components there is a new option, buildorder. This is used to inform the MBS that some packages are dependent upon others in the module when building. In our case, we need libtalloc to be built and added into the buildroot before we can build sscg or else the build will fail and we will be sad. By adding buildorder, we tell the MBS: it’s okay to build any of the packages with the same buildorder value concurrently, but we should not attempt to build anything with a higher buildorder value until all of those lower have completed. Once all packages in a buildorder level are complete, the MBS will generate a private buildroot repository for the next buildorder to use which includes these packages. If the buildorder value is left out of the modulemd file, it is treated as being buildorder: 0.

At this point, you should be able to go ahead and commit this modulemd file to git and run mbs-build local successfully. Enjoy!
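
Concretely, that last step is just the following (the commit message is only an example):

# commit the updated modulemd and kick off a local build
git add sscg.yaml
git commit -m "Add libtalloc and sscg components to the module"
mbs-build local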