I ride a Specialized Vado SL 4.0. It has a TCU (Turbo Connect Unit) that broadcasts telemetry over Bluetooth Low Energy — battery level, speed, motor power, cadence, temperature, the works. Specialized has their own Mission Control app for this, but I wanted the data in Home Assistant.
So I built a custom integration.
The protocol
Specialized’s BLE protocol isn’t documented anywhere official. Fortunately, Sepp62 had already reverse-engineered it for an ESP32 project. The protocol is called “TURBOHMI2017”, and the UUID base literally encodes this string backwards in ASCII, which I find weirdly charming. Messages are dead simple: one byte for sender, one byte for channel, and 1-4 bytes of little-endian data.
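The framing is simple enough that a decoder fits in a few lines. Here's a minimal Python sketch of that layout — the function name and the example sender/channel bytes are my own illustrations, not values taken from the real protocol:

```python
def parse_message(payload: bytes) -> tuple[int, int, int]:
    """Decode one TURBOHMI2017-style message.

    Layout: one sender byte, one channel byte,
    then 1-4 bytes of little-endian data.
    """
    if not 3 <= len(payload) <= 6:
        raise ValueError(f"unexpected message length: {len(payload)}")
    sender, channel = payload[0], payload[1]
    value = int.from_bytes(payload[2:], byteorder="little")
    return sender, channel, value

# Hypothetical battery message: sender 0x01, channel 0x0C, value 73 (%)
print(parse_message(bytes([0x01, 0x0C, 73])))  # (1, 12, 73)
```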
I ported the protocol to Python as specialized-turbo, an async library built on bleak. It handles scanning for nearby bikes, pairing (the bike shows a 6-digit PIN on its display), and streaming telemetry. There’s also a CLI if you want to poke around without writing code:
```shell
$ specialized-turbo scan
Found: My Vado SL (AA:BB:CC:DD:EE:FF)

$ specialized-turbo telemetry --address AA:BB:CC:DD:EE:FF
battery_charge_pct: 73
speed_kmh: 24.5
rider_power_w: 185
motor_power_w: 120
cadence_rpm: 82
```
The Home Assistant integration
ha-specialized-turbo uses Home Assistant’s Bluetooth auto-discovery. Turn on your bike near your HA instance and it shows up. Enter the pairing PIN from the TCU screen and you’re done. The integration then exposes the bike’s data as sensors:
Motor: speed, rider power, motor power, cadence, odometer, motor temperature
Settings: assist level (Off/Eco/Trail/Turbo), plus the tuning percentages for each level
All data is pushed locally over BLE. No cloud, no polling after the initial connection.
Installation
Install via HACS as a custom repository, or copy the custom_components/specialized_turbo folder into your HA config directory. You’ll need a Bluetooth adapter on your HA host and a Specialized Turbo bike from 2017 or later (Vado, Levo, Creo, or other models with a TCU).
What’s next
The library supports write commands too, like changing the assist level from HA. I haven’t wired that up to the integration yet, but it’s on the list.
Back in January 2020, Google open sourced Wombat Dressing Room, an npm proxy that solved a real problem: how do you maintain two-factor authentication for npm packages while still using automation? For teams managing dozens or hundreds of packages, manually entering 2FA codes for every publish wasn’t just inconvenient, it was a dealbreaker. Fast forward to 2025, and npm has finally introduced native support for trusted publishing using OIDC. That raises the question: is Wombat Dressing Room still necessary?
The problem Wombat Dressing Room solved
npm has supported two-factor authentication for a long time. But 2FA presents a challenge for automation. You need “something you know” (a password) and “something you have” (a code from an authenticator app). That second factor is difficult to automate, leading many teams to simply disable 2FA in their CI/CD pipelines.
Wombat Dressing Room took a different approach. Rather than bypassing 2FA, it created a shared proxy server that managed 2FA centrally and provided three security features:
Per-package tokens
The proxy could generate authentication tokens tied to specific GitHub repositories. If a token leaked, an attacker could only compromise the single package associated with that token—not your entire npm account.
Limited lifetime tokens
Tokens could be configured with a 24-hour lifespan. Even if compromised, the window of vulnerability was limited.
GitHub Releases as 2FA
This was the clever bit. Packages could only be published if a corresponding GitHub release with a matching tag existed. This introduced a true “second factor”, proving access to both the proxy and the GitHub repository.
Enter npm trusted publishing
In 2025, npm rolled out trusted publishing, implementing the OpenSSF trusted publishers standard. It uses OpenID Connect (OIDC) to create a trust relationship between npm and your CI/CD provider. When configured, npm accepts publishes from authorized workflows without requiring long-lived tokens.
The security benefits are clear. Traditional npm tokens can be accidentally exposed in CI logs, require manual rotation, and provide persistent access until revoked. Trusted publishing eliminates all of this by using short-lived tokens that are specific to your workflow and can’t be extracted or reused.
Setting it up
The setup process is straightforward. First, you configure a trusted publisher on npmjs.com by specifying your GitHub organization, repository, and workflow filename. Then you update your workflow to request the necessary OIDC permissions:
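A minimal sketch of such a workflow — the file name, job layout, and action versions here are illustrative, not prescriptive:

```yaml
# .github/workflows/publish.yml
name: Publish

on:
  release:
    types: [published]

permissions:
  id-token: write  # required for OIDC trusted publishing
  contents: read

jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 22
          registry-url: https://registry.npmjs.org
      - run: npm ci
      - run: npm publish  # the npm CLI detects the OIDC environment itself
```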
That’s it. No more NPM_TOKEN secrets, no more token rotation, no more worrying about accidentally leaking credentials. The npm CLI automatically detects the OIDC environment and handles authentication.
There’s one requirement worth noting: you need npm CLI version 11.5.1 or later. But if you’re using a reasonably recent version of Node.js, you’re already covered.
Automatic provenance generation
Trusted publishing also enables automatic provenance generation. When you publish using OIDC from a public repository, npm automatically generates and publishes provenance attestations for your package. You don’t need to add the --provenance flag—it just happens.
Provenance provides cryptographic proof of where and how your package was built, allowing users to verify its authenticity. This transparency is becoming increasingly important as supply chain attacks grow more sophisticated.
Wrapping up
Trusted publishing is a sign that the npm ecosystem is maturing. By implementing the OpenSSF standard, npm joins PyPI, RubyGems, and other major package registries in offering OIDC-based publishing. This standardisation makes it easier for developers working across multiple ecosystems to apply consistent security practices.
Wombat Dressing Room served the community well for over five years, bridging the gap between security requirements and automation needs. Now that npm has addressed those needs natively, the project can retire gracefully. Good infrastructure tools often become unnecessary when the platform absorbs their functionality.
If you’re still using long-lived npm tokens in your CI/CD pipelines, trusted publishing is worth the upgrade. The setup is straightforward, the security is better, and you’ll never have to worry about token rotation again.
When you create a new .NET project and start writing code, you might find yourself using classes like System.Text.Json.JsonSerializer without ever explicitly adding a reference to System.Text.Json in your .csproj file. This isn’t magic—it’s because these Base Class Libraries (BCLs) are shipped as part of the .NET runtime itself, making them implicit references that are automatically available to your application.
But this convenience comes with a security implication that’s easy to miss: when a vulnerability is discovered in one of these implicit dependencies, patching it isn’t as straightforward as updating a NuGet package reference.
The invisible dependency problem
Let’s start with a real-world example. In October 2024, Microsoft disclosed CVE-2024-43485, a high-severity denial of service vulnerability in System.Text.Json. The vulnerability affects applications that deserialize input to a model with a [JsonExtensionData] property.
Here’s the catch: if you look at your .csproj file, you probably won’t see any explicit reference to System.Text.Json. Yet your application might still be vulnerable. This is because System.Text.Json is part of the .NET runtime’s Base Class Library, making it an implicit dependency that’s automatically available to all .NET applications.
```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>net8.0</TargetFramework>
  </PropertyGroup>

  <!-- No explicit System.Text.Json reference, but you can still use it -->
</Project>
```
Two paths to patching
When facing a vulnerability in an implicit dependency like this, you have two main options to ensure your application is secure:
Option 1: Add an explicit reference
The most obvious solution is to add an explicit package reference to the vulnerable library:
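For the System.Text.Json example above, that looks something like this in your .csproj (8.0.5 is the version listed as patched for CVE-2024-43485 — check the advisory for the right version for your target framework):

```xml
<ItemGroup>
  <!-- Explicit reference that overrides the runtime's bundled copy -->
  <PackageReference Include="System.Text.Json" Version="8.0.5" />
</ItemGroup>
```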
This approach leverages NuGet’s “direct dependency wins” rule. When your application has both an implicit dependency (from the runtime) and an explicit dependency (from your .csproj), the explicit one takes precedence, but only if the explicit version is equal to or higher than the BCL version in the runtime.
There’s a wrinkle, though: if you specify a lower version than what’s bundled with the runtime, .NET will still use the runtime version. For example, if your runtime includes System.Text.Json version 8.0.4 and you explicitly reference version 8.0.2, the runtime version (8.0.4) will be used. This means you can’t accidentally downgrade to a vulnerable version, but it also means your explicit reference must be at least as recent as the runtime version to take effect.
While this works, it’s not a scalable long-term solution. The .NET runtime includes hundreds of libraries, and making all implicit references explicit would clutter your project files and create a maintenance burden.
Option 2: Set a minimum SDK version with global.json
A better approach is to use global.json to specify a minimum .NET SDK version that includes the patched libraries:
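A sketch of such a global.json — the version number is illustrative; use the lowest SDK version that contains the patches you need:

```json
{
  "sdk": {
    "version": "8.0.403",
    "rollForward": "latestFeature"
  }
}
```

The `rollForward` policy lets builds use a newer SDK when one is available, while `version` sets the floor.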
This ensures that anyone building your project—whether locally, in CI/CD, or when deploying—uses at least the specified SDK version, which includes the security patches for all Base Class Libraries.
Self-contained deployments
This matters even more if you’re shipping self-contained applications. When you publish a self-contained app, the .NET runtime is bundled with your application, including all the Base Class Libraries. If you build your self-contained app with an older SDK that contains vulnerable libraries, those vulnerabilities get shipped with your application.
For self-contained deployments, building with an up-to-date SDK isn’t just about development convenience—it’s a security requirement. A global.json file becomes necessary for maintaining a security baseline across your entire deployment pipeline.
The current developer experience pain
Currently, if you don’t have the correct SDK version installed and try to build a project with a global.json requirement, you’ll encounter an often inscrutable error message.
The good news is that the .NET team is aware of the problem. There’s an ongoing effort tracked in dotnet/cli-lab#390 to create a .NET bootstrapper that will improve SDK acquisition and provide better error messages. This work aims to make .NET 10 much more user-friendly when dealing with SDK version mismatches.
What about other runtimes?
This pattern of implicit runtime dependencies isn’t unique to .NET. Java developers face a remarkably similar challenge with the Java standard library. When a vulnerability is discovered in a core Java package like java.util or java.security, the remediation path typically involves updating the entire Java Runtime Environment (JRE) or Java Development Kit (JDK).
For example, when CVE-2022-21449 was discovered in Java’s elliptic curve signature verification, applications using ECDSA signatures were vulnerable regardless of whether they explicitly imported the affected classes. The fix required updating to a patched version of the JRE.
However, Java’s ecosystem has traditionally been more rigid in this regard. While .NET allows you to override BCL versions with explicit NuGet references (leveraging the “direct dependency wins” rule), Java applications are typically bound to whatever version of the standard library comes with their runtime. This makes the global.json approach even more critical in the .NET world, as it’s often your most practical option.
The key difference is that .NET’s package management system provides more flexibility—you can sometimes work around runtime library issues with explicit package references, whereas Java applications usually have no choice but to update their entire runtime environment.
Wrapping up
.NET’s Base Class Libraries make development faster, but they also create a security challenge that’s different from normal NuGet dependencies. BCL vulnerabilities don’t show up in your project file, so they need a different fix.
Using global.json as part of your security strategy helps keep your applications protected against vulnerabilities in both explicit and implicit dependencies. The improvements coming in .NET 10 should make this easier to manage.
Have you ever wondered what lies beneath the surface of an npm package? At its heart, it’s nothing more than a gzipped tarball. In software development, source code and binary artifacts are nearly always shipped as .tar.gz or .tgz files. And gzip compression is supported by every HTTP server and web browser out there. caniuse.com doesn’t even give statistics for support, it just says “supported in effectively all browsers”. But here’s the kicker: gzip is starting to show its age, making way for newer, more modern compression algorithms like Brotli and ZStandard. Now, imagine a world where npm embraces one of these new algorithms. In this blog post, I’ll dive into the realm of compression and explore the possibilities of modernising npm’s compression strategy.
What’s the competition?
The two major players in this space are Brotli and ZStandard (or zstd for short). Brotli was released by Google in 2013 and zstd was released by Facebook in 2016. They’ve since been standardised, in RFC 7932 and RFC 8478 respectively, and have seen widespread use all over the software industry. It was actually the announcement by Arch Linux that they were going to start compressing their packages with zstd by default that made me think about this in the first place. Arch Linux was by no means the first project, nor is it the only one. But to find out if it makes sense for the Node ecosystem, I need to do some benchmarks. And that means breaking out tar.
Benchmarking part 1
https://xkcd.com/1168
I’m going to start with tar and see what sort of comparisons I can get by switching gzip, Brotli, and zstd. I’ll test with the npm package of npm itself as it’s a pretty popular package, averaging over 4 million downloads a week, while also being quite large at around 11MB unpacked.
```shell
$ curl --remote-name https://registry.npmjs.org/npm/-/npm-9.7.1.tgz
$ ls -l --human npm-9.7.1.tgz
-rw-r--r-- 1 jamie users 2.6M Jun 16 20:30 npm-9.7.1.tgz
$ tar --extract --gzip --file npm-9.7.1.tgz
$ du --summarize --human --apparent-size package
11M	package
```
gzip is already giving good results, compressing 11MB to 2.6MB for a compression ratio of around 0.24. But what can the contenders do? I’m going to stick with the default options for now:
```shell
$ brotli --version
brotli 1.0.9
$ tar --use-compress-program brotli --create --file npm-9.7.1.tar.br package
$ zstd --version
*** Zstandard CLI (64-bit) v1.5.5, by Yann Collet ***
$ tar --use-compress-program zstd --create --file npm-9.7.1.tar.zst package
$ ls -l --human npm-9.7.1.tgz npm-9.7.1.tar.br npm-9.7.1.tar.zst
-rw-r--r-- 1 jamie users 1.6M Jun 16 21:14 npm-9.7.1.tar.br
-rw-r--r-- 1 jamie users 2.3M Jun 16 21:14 npm-9.7.1.tar.zst
-rw-r--r-- 1 jamie users 2.6M Jun 16 20:30 npm-9.7.1.tgz
```
Wow! With no configuration both Brotli and zstd come out ahead of gzip, but Brotli is the clear winner here. It manages a compression ratio of 0.15 versus zstd’s 0.21. In real terms that means a saving of around 1MB. That doesn’t sound like much, but at 4 million weekly downloads, that would save 4TB of bandwidth per week.
Benchmarking part 2: Electric boogaloo
The compression ratio only tells half of the story. Actually, it’s a third of the story, but compression speed isn’t really a concern: a package is compressed only once, when it’s published, while decompression happens every time you run npm install. So any time saved decompressing packages means quicker install or build steps.
To test this, I’m going to use hyperfine, a command-line benchmarking tool. Decompressing each of the packages I created earlier 100 times should give me a good idea of the relative decompression speed.
| Command | Mean [ms] | Min [ms] | Max [ms] | Relative |
|---|---|---|---|---|
| `tar --use-compress-program brotli --extract --file npm-9.7.1.tar.br --overwrite` | 51.6 ± 3.0 | 47.9 | 57.3 | 1.31 ± 0.12 |
| `tar --use-compress-program zstd --extract --file npm-9.7.1.tar.zst --overwrite` | 39.5 ± 3.0 | 33.5 | 51.8 | 1.00 |
| `tar --use-compress-program gzip --extract --file npm-9.7.1.tgz --overwrite` | 47.0 ± 1.7 | 44.0 | 54.9 | 1.19 ± 0.10 |
This time zstd comes out in front, followed by gzip and Brotli. This makes sense, as “real-time compression” is one of the big features touted in zstd’s documentation. While Brotli is 31% slower than zstd, in real terms that’s only 12ms, and it’s only 5ms slower than gzip. To put that into context, you’d need a connection faster than 1Gbps before the 5ms Brotli loses in decompression outweighs the 1MB it saves in package size.
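That 1Gbps figure is just arithmetic on the numbers above: downloading the extra ~1MB that gzip costs has to take longer than the extra ~5ms Brotli spends decompressing. A quick back-of-the-envelope check:

```python
# Break-even bandwidth: how fast must the network be before Brotli's
# ~5 ms decompression penalty outweighs the ~1 MB it saves over gzip?
size_saved_bytes = 1 * 1024**2      # ~1 MB smaller tarball with Brotli
extra_decompress_s = 0.005          # ~5 ms slower decompression vs gzip
breakeven_bps = size_saved_bytes * 8 / extra_decompress_s
print(f"{breakeven_bps / 1e9:.1f} Gbps")  # 1.7 Gbps
```

Below that speed, the smaller download wins overall.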
Benchmarking part 3: This time it’s serious
Up until now I’ve just been looking at Brotli and zstd’s default settings, but both have a lot of knobs and dials that you can adjust to change the compression ratio and compression or decompression speed. Thankfully, the industry standard lzbench has got me covered. It can run through all of the different quality levels for each compressor, and spit out a nice table with all the data at the end.
But before I dive in, there are a few caveats I should point out. The first is that lzbench isn’t able to compress an entire directory the way tar can, so I opted to use lib/npm.js for this test. The second is that lzbench doesn’t include the gzip tool; instead it uses zlib, the underlying gzip library. The last is that the bundled compressor versions aren’t quite current: the latest version of zstd is 1.5.5, released on April 4th 2023, whereas lzbench uses version 1.4.5, released on May 22nd 2020; and the latest version of Brotli is 1.0.9, released on August 27th 2020, whereas lzbench uses a build released on October 1st 2019.
This pretty much confirms what I’ve shown up to now. zstd is able to provide faster decompression speed than either gzip or Brotli, and slightly edge out gzip in compression ratio. Brotli, on the other hand, has comparable decompression speeds and compression ratio with gzip at lower quality levels, but at levels 10 and 11 it’s able to edge out both gzip and zstd’s compression ratio.
Everything is derivative
Now that I’ve finished with benchmarking, I need to step back and look at my original idea of replacing gzip as npm’s compression standard. As it turns out, Evan Hahn had a similar idea in 2022 and proposed an npm RFC. He proposed using Zopfli, a backwards-compatible gzip compression library, and Brotli’s older (and cooler 😎) sibling. Zopfli is able to produce smaller artifacts with the trade-off of a much slower compression speed. In theory an easy win for the npm ecosystem. And if you watch the RFC meeting recording or read the meeting notes, everyone seems hugely in favour of the proposal. However, the one big roadblock that prevents this RFC from being immediately accepted, and ultimately results in it being abandoned, is the lack of a native JavaScript implementation.
Learning from this earlier RFC and my results from benchmarking Brotli and zstd, what would it take to build a strong RFC of my own?
Putting it all together
Both Brotli and zstd’s reference implementations are written in C. And while there are a lot of ports on the npm registry using Emscripten or WASM, Brotli has an implementation in Node.js’s zlib module, and has done since Node.js 10.16.0, released in May 2019. I opened an issue in Node.js’s GitHub repo to add support for zstd, but it’ll take a long time to make its way into an LTS release, never mind the rest of npm’s dependency chain. I was already leaning towards Brotli, but this just seals the deal.
Deciding on an algorithm is one thing, but implementing it is another. npm’s current support for gzip compression ultimately comes from Node.js itself. But the dependency chain between npm and Node.js is long and slightly different depending on if you’re packing or unpacking a package.
The dependency chain for packing, as in npm pack or npm publish, is:
That’s quite a few packages that need to be updated, but thankfully the first steps have already been taken. Support for Brotli was added to minizlib 1.3.0 back in September 2019. I built on top of that and contributed Brotli support to tar, which is now available in version 6.2.0. It may take a while, but I can see a clear path forward.
The final issue is backwards compatibility. This wasn’t a concern with Evan Hahn’s RFC, as Zopfli generates backwards-compatible gzip files. However, Brotli is an entirely new compression format, so I’ll need to propose a very careful adoption plan. The process I can see is:
Support for packing and unpacking is added in a minor release of the current version of npm
Unpacking using Brotli is handled transparently
Packing using Brotli is disabled by default and only enabled if one of the following are true:
The engines field in package.json is set to a version of npm that supports Brotli
The engines field in package.json is set to a version of node that bundles a version of npm that supports Brotli
Brotli support is explicitly enabled in .npmrc
Packing using Brotli is enabled by default in the next major release of npm after the LTS version of Node.js that bundles it goes out of support
Let’s say that Node.js 22 comes with npm 10, which has Brotli support. Node.js 22 will stop getting LTS updates in April 2027. Then, the next major version of npm after that date should enable Brotli packing by default.
I admit that this is an incredibly long transition period. However, it will guarantee that if you’re using a version of Node.js that is still being supported, there will be no visible impact to you. And it still allows early adopters to opt-in to Brotli support. But if anyone has other ideas about how to do this transition, I am open to suggestions.
What’s next?
As I wrap up my exploration into npm compression, I must admit that my journey has only just begun. To push the boundaries further, there are a lot more steps. First and foremost, I need to do some more extensive benchmarking with the top 250 most downloaded npm packages, instead of focusing on a single package. Once that’s complete, I need to draft an npm RFC and seek feedback from the wider community. If you’re interested in helping out, or just want to see how it’s going, you can follow me on Mastodon at @[email protected], or on Bluesky at @jamiemagee.bsky.social.
When it comes to Linux containers, there are plenty of tools out there that can scan container images, generate Software Bill of Materials (SBOM), or list vulnerabilities. However, Windows container images are more like the forgotten stepchild in the container ecosystem. And that means we’re forgetting the countless developers using Windows containers, too.
Instead of allowing this gap to widen further, container tool authors—especially SBOM tools and vulnerability scanners—need to add support for Windows container images.
In my presentation at Container Plumbing Days 2023 I showed how to extract version information from Windows containers images that can be used to generate SBOMs, as well as how to integrate with the Microsoft Security Updates API which can provide detailed vulnerability information.