Self hosting Renovate on GitHub

Early last year I wrote a few blog posts about hosting Renovate Bot in Azure DevOps to automatically create Pull Requests when dependencies update, how to configure it to group updates and how to share configurations. Recently I have been looking at using Renovate Bot on GitHub as well, since I have some projects where Dependabot simply does not work well out of the box and would require me to write my own GitHub Action around their CLI tool. While this would likely work, I would still have to script all the PR creation and so on, which is not something I want to deal with, especially when I am really happy with how Renovate Bot works for me in Azure DevOps.

To run Renovate Bot in GitHub you have a few options.

  1. You can add and run Mend Renovate GitHub App
  2. You can self host using GitHub Actions

The first option is great if you trust a third party scanning your repositories and creating a dashboard of all your dependencies and more. It does not seem to cost anything to use the default Renovate features, but Mend offers more than just dependency updates.

The second option, self-hosting, is great too, and you are in control: you choose whether to run on GitHub-hosted runners or your own agents. Under the hood it works in a similar fashion as described in my first blog post, with a Docker container spinning up and running Renovate. The main difference is that the Renovate Bot team has wrapped it up nicely in a GitHub Action ready to be used.

Authentication in GitHub

Using secrets.GITHUB_TOKEN is really nice in GitHub Actions. However, this token has a lot of limitations and is scoped to only work in the repository that hosts your Renovate GitHub Action. This means the token cannot create Pull Requests or Issues, or comment, in any repository other than its own. If you plan to run Renovate for only one repository this may work. However, if you want to run it for multiple repositories on your user or organization, then you have to use one of GitHub's other options.

Personal Access Tokens

You have probably already tried Personal Access Tokens (PAT) for some features in GitHub. If not, a PAT is simply a token tied to a specific user, with a limited set of scopes it has access to, and with an expiration date or optionally no expiration at all.

The downside is that it is personal: it is tied to you. If you leave an organization, the PAT will stop working. If you opt to use a PAT, make sure to read which scopes you need in the Renovate documentation.

If you plan to use this PAT for repositories in an organization, which is only possible with classic tokens, remember to click the Configure SSO button next to your generated token and authorize it for that organization. This looks something like this:

(Screenshot: the Configure SSO button next to a generated personal access token)

GitHub App

Instead of using a PAT, you can create your own GitHub App, which you install to your organization or to selected repositories you want it to run on. When running Renovate in GitHub Actions, you then create a token for this App, and the App holds the permissions to operate on the repositories. Instead of creating Pull Requests on your behalf as with a PAT, they will show up as coming from the Application. And if you happen to leave the organization, Renovate will keep working.

I opted for this solution. It requires a little more setup, but it is not that hard to do.

First you need to create your GitHub App. I did this on my organization going to https://github.com/organizations/MYORG/settings/apps/new

Here you just need to give it a name, such as MYORG Renovate, and fill in any URL. I opted to link to the repository where Renovate is going to live in my organization. You can disable Webhooks.

Then the important step is to provide the GitHub App the correct permissions for Renovate to do its job. Make sure to read which ones to check off in the Renovate docs. As of writing you will need:

Permission          Scope
Checks              Read + Write
Commit statuses     Read + Write
Contents            Read + Write
Dependabot alerts   Read
Issues              Read + Write
Pull requests       Read + Write
Workflows           Read + Write
Administration      Read
Members             Read

You can always add more permissions later, but they would need to be authorized again for each repo or organization you added the App to, so it is better to get this right from the start.

If you want to restore private packages from your organization using the GitHub App's token, I recommend also adding the Read scope for Organization Private Registries.

Once you have created the App, you will see an App ID at the top of its General tab. You will need this ID later for authentication.

There should be a prompt at the top of the page that you need to generate a Private key. Save this file, as we need it later for authentication too. The contents of the file should start with something like -----BEGIN RSA PRIVATE KEY-----; you need to include both this line and the matching end line when storing the secret.

I opted to store these two pieces of information in GitHub Actions Secrets, so I can refer to them as secrets.RENOVATE_APP_ID and secrets.RENOVATE_PRIVATE_KEY.

Last but not least, you need to install the application to your organization and figure out whether you want to give it access to all repositories or only selected ones. If you opt for the latter, you can always add more repositories or change your mind later on the installation page of the App.

Running Renovate in GitHub Actions

For running Renovate in my organization, I opted to create a repository in the organization called renovate-bot which is where I run the GitHub Action and have the default configuration stored.

For the action, create a new workflow in the .github/workflows folder; I called mine renovate.yml.
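
The Renovate steps need a workflow around them with a trigger. A minimal skeleton, assuming you are happy running on a schedule (the cron expression and job name below are just examples), could look like:

name: Renovate

on:
  workflow_dispatch:
  schedule:
    # example: run every 6 hours
    - cron: '0 */6 * * *'

jobs:
  renovate:
    runs-on: ubuntu-latest
    steps:
      # the checkout, token and Renovate steps described below go here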

If you are using a PAT you only need two steps: checkout and the Renovate action. This would look something like:

- name: Checkout
  uses: actions/checkout@v5

- name: Self-hosted Renovate
  uses: renovatebot/github-action@v43
  with:
    configurationFile: renovate-config.js
    token: '${{ secrets.RENOVATE_TOKEN }}'

However, if you are using the GitHub App authentication approach, you need a little more configuration. First you need to create a token for the GitHub App:

- name: Get token
  id: get_token
  uses: actions/create-github-app-token@v2
  with:
    private-key: ${{ secrets.RENOVATE_PRIVATE_KEY }}
    app-id: ${{ secrets.RENOVATE_APP_ID }}
    owner: ${{ github.repository_owner }}
    repositories: |
      repo1
      repo2

So here you use the App ID and Private Key you generated for the GitHub App earlier.

For repositories, you specify which repositories you want the token authenticated for. You can also remove this argument entirely and just keep owner, which authenticates the token for all repositories the GitHub App has access to under that owner.

Then, instead of the PAT, you authenticate using the output of this step:

- name: Self-hosted Renovate
  uses: renovatebot/github-action@v43
  with:
    configurationFile: renovate-config.js
    token: '${{ steps.get_token.outputs.token }}'

Configuration

I store the renovate-config.js in the root of the repository and I have something like this in there:

module.exports = {
  branchPrefix: 'renovate/',
  username: 'renovate-release',
  gitAuthor: 'Renovate Bot <[email protected]>',
  onboarding: false,
  platform: 'github',
  repositories: [
    'owner/repo1',
    'owner/repo2'
  ]
};

This works great for most repositories that only require fetching package updates from public sources.
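
As an aside, if you would rather not maintain an explicit list, Renovate also has an autodiscover option that picks up all repositories the token has access to. A minimal sketch:

module.exports = {
  // ...same settings as above, but without the repositories list
  autodiscover: true
};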

Authenticating Private GitHub Packages

I spent a bunch of time figuring out how to get GitHub Packages to work for one of the projects I work on. It uses Maven packages stored in GitHub Packages on a private repo.

A couple of things I struggled with, which I wish I had known beforehand:

  1. The renovatebot/github-action does not forward all environment variables to the Docker container running under the hood. A regex only lets certain environment variables through, so prefix the environment variables you need in your config with RENOVATE_
  2. The system-provided secrets.GITHUB_TOKEN, even when given the packages:read permission, does not get access to packages in any repository other than the one the workflow runs in
  3. Using RENOVATE_X_GITHUB_HOST_RULES does not work, as it uses secrets.GITHUB_TOKEN under the hood and is by design broken for this scenario, so do not chase this option

With that in mind, authenticating GitHub Packages, whether Maven, npm or NuGet, is fairly straightforward. You need to add a hostRule specifying how to authenticate. Since we already have either a PAT or a GitHub App token with read permission on packages (given you added the Read scope for Organization Private Registries), we can just use that token in the rule. So in your renovate-config.js you can add:

hostRules: [
  {
    hostType: 'maven',
    matchHost: 'maven.pkg.github.com',
    username: 'x-access-token',
    password: process.env.RENOVATE_TOKEN,
  },
],

This should be similar for other hostTypes too when used with GitHub Packages.
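
For example, a sketch of an equivalent rule for npm packages on GitHub Packages, assuming the same token has read access to those packages:

hostRules: [
  {
    hostType: 'npm',
    matchHost: 'npm.pkg.github.com',
    token: process.env.RENOVATE_TOKEN,
  },
],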

If you want to provide your own token, or you are authenticating some other source, just make sure you prefix the environment variable with RENOVATE_. So something like this:

- name: Self-hosted Renovate
  uses: renovatebot/github-action@v43
  with:
    configurationFile: renovate-config.js
    token: '${{ steps.get_token.outputs.token }}'
  env:
    RENOVATE_MY_TOKEN: '${{ secrets.MY_TOKEN }}'

Then in the config you can use it as process.env.RENOVATE_MY_TOKEN.

Troubleshooting

When troubleshooting Renovate, you can run it dry, so it won’t open any Pull Requests by adding dryRun: 'full' in the config:

module.exports = {
  dryRun: 'full',
  ...

You can also increase the verbosity of the logs by adding the LOG_LEVEL environment variable with a supported log level, e.g.:

- name: Self-hosted Renovate
  uses: renovatebot/github-action@v43
  with:
    configurationFile: renovate-config.js
    token: '${{ steps.get_token.outputs.token }}'
  env:
    LOG_LEVEL: debug

This should spit out much more information on what Renovate is doing, and you can attempt to deduce what went wrong.

Eventually you should see pull requests flowing in on your repos looking something like this:

(Screenshot: an example Renovate pull request)

Note: you will see warnings on your pull requests, like in the example above, if something goes wrong while Renovate is running. This is your cue to troubleshoot.

Otherwise, the material in my previous Renovate posts still applies and can be used on GitHub as well.

Microsoft Testing Platform is cool!

I love xUnit and I use it for most of my testing. With xUnit v3 came the ability to run tests without any other external tools like console runners, dotnet test or VSTest; an xUnit test suite is now simply an executable in itself. Not long after xUnit v3 was released, I discovered Microsoft Testing Platform, which achieves something very similar. According to its own docs, it “[..] is a lightweight and portable alternative to VSTest for running tests in all contexts”.

The cool thing about Microsoft Testing Platform is that it provides this for all kinds of test libraries, such as xUnit, NUnit, Expecto, MSTest and TUnit. So Microsoft Testing Platform will work for you regardless of which unit testing library you use, and regardless of whether that library supports running standalone, since Microsoft Testing Platform provides the runner as well.

I still use Microsoft Testing Platform with xUnit since they work very well together, and combining them simplifies a few things that previously required additional setup and libraries. Recently I have been converting a few test suites to leverage Microsoft Testing Platform, so let me share a few cool things I like about it.

If you already have a unit test suite, you can opt into the Microsoft Testing Platform bits by adding the following to a PropertyGroup in your csproj file:

<UseMicrosoftTestingPlatformRunner>true</UseMicrosoftTestingPlatformRunner>

This way you can run the tests with dotnet run. xUnit v3 already has this built in, but opting into the Testing Platform means the Testing Platform runner is used instead. If you still want to support running with dotnet test, you can also add:

<TestingPlatformDotnetTestSupport>true</TestingPlatformDotnetTestSupport>

At this point you can also remove package references such as the following, because Microsoft Testing Platform also handles integration with your IDE:

<PackageReference Include="xunit.runner.visualstudio">

Report formats

One cool thing about using Microsoft Testing Platform with xUnit is the wide variety of output formats you get for reports. For instance, some tools I use, such as SonarCloud, only support xUnit, VSTest or NUnit report types, while other tools I use require a CTRF report. This is no issue, as it can easily be configured with command-line arguments when executing the tests. To export both a TRX (VSTest) and a CTRF report I do:

dotnet run --project UnitTest/UnitTest.csproj -- \
  --report-xunit-trx --report-xunit-trx-filename UnitTestReport.xml \
  --report-ctrf --report-ctrf-filename UnitTestReport.ctrf.json

This gives me both formats which I then can use for whatever other tools I use. Great!

You can also export NUnit-format reports with:

--report-nunit --report-nunit-filename UnitTestReport.xml

And for xUnit reports you can do:

--report-xunit --report-xunit-filename UnitTestReport.xml

So you have a lot of flexibility to export whatever you need.

Code coverage

Gathering code coverage with Microsoft Testing Platform is also made super easy. Previously I was relying on a package called coverlet.collector; however, that package is designed for VSTest and does not work with Microsoft Testing Platform. Instead you can use the dedicated extension Microsoft.Testing.Extensions.CodeCoverage, which can export code coverage reports in a VS binary format, XML and Cobertura.

<PackageReference Include="Microsoft.Testing.Extensions.CodeCoverage">

Adding code coverage, once the NuGet package is in place, works in a very similar fashion to the various test report formats above. Piecing everything together:

dotnet run --project UnitTest/UnitTest.csproj -- \
  --report-xunit-trx --report-xunit-trx-filename UnitTestReport.xml \
  --report-ctrf --report-ctrf-filename UnitTestReport.ctrf.json \
  --coverage --coverage-output CoverageReport.coverage --coverage-output-format cobertura

Super convenient. To generate a report locally you can still use the awesome .NET tool dotnet-reportgenerator-globaltool which can generate very nice HTML and markdown reports among many other exports.
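
A sketch of how that could look, assuming the Cobertura file produced by the run above (paths and report types are examples):

dotnet tool install --global dotnet-reportgenerator-globaltool
reportgenerator \
  -reports:"**/CoverageReport.coverage" \
  -targetdir:"coverage-report" \
  -reporttypes:"Html;MarkdownSummary"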

Reporting test summary with GitHub Actions

As I mentioned previously, being able to export to various formats is a very powerful feature. One thing I use the CTRF reports for is to post unit test summaries on GitHub Actions runs. The summary looks something like this:

(Screenshot: unit test summary in a GitHub Actions run)

This is powered by the ctrf-io/github-test-reporter action configured like this:

- name: Publish Test Report
  uses: ctrf-io/github-test-reporter@073c73100796cafcbfdc4722c7fa11c29730439e #v1.0.18
  with:
    report-path: ${{ github.workspace }}/ctrf/*.ctrf.json
    summary-report: true
    github-report: true
    pull-request: true
    update-comment: true
  env:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

Running builds with Cake

I use Cake Build in a lot of my build pipelines, since I can run all the steps locally and debug without needing to commit and trigger runs in a CI pipeline.

Currently there is no Settings class that knows all the argument combinations Microsoft Testing Platform can receive, but you can still use dotnet run to start your tests. For this I use a code block looking something like this:

var projectName = project.GetFilenameWithoutExtension();
var runSettings = new DotNetRunSettings
{
    NoBuild = true,
    Configuration = context.BuildConfiguration,
    Verbosity = context.VerbosityDotNet,
    // everything after "--" is handed to the Microsoft Testing Platform runner
    ArgumentCustomization = args => args
        .Append("-- ")
        .Append($"--report-xunit-trx --report-xunit-trx-filename {projectName}.trx")
        .Append($"--report-ctrf --report-ctrf-filename {projectName}.ctrf.json")
        .Append($"--coverage --coverage-output {projectName}.coverage --coverage-output-format cobertura")
};

context.DotNetRun(project.FullPath, runSettings);

There is a discussion going on requesting to add the Microsoft Testing Platform capabilities to Cake, so hopefully this will come in the future.

I really like the flexibility and capabilities that Microsoft Testing Platform with xUnit gives me. I was able to migrate to Microsoft Testing Platform pretty easily and simplify some of my CI setup. The fact that you can have tests using various frameworks and provide the same arguments to the runner is pretty interesting too, since Microsoft Testing Platform takes care of the running and argument processing. This allows someone to easily understand the setup, but also to swap the framework out for something else without having to mess with the arguments.

Another outcome of writing this article is getting to know TUnit, which takes a slightly different approach from xUnit and other frameworks by building fully on top of Microsoft Testing Platform instead of adopting it. I might have to check this one out in the near future and see what it can do! The promise of relying heavily on Source Generators, supporting NativeAOT and being fully trimmable sounds great.

Migrating Signing of NuGet packages to new sign tool

I maintain MvvmCross, which is a part of the .NET Foundation, and one of the services member projects get is signing of software using certificates issued by the .NET Foundation. This way the project does not have to manage its own signing certificates or spend money on them.

So when I publish NuGet packages for MvvmCross, whether when merging to develop or when shipping a stable release, these NuGet packages are signed. This way consumers of the packages can validate the authenticity of a package and know that it has not been tampered with.

Historically, signing worked using the SignClient .NET tool, which is now deprecated. The .NET Foundation has also moved over to using Azure Key Vault for their certificates, so new tooling is required for member projects to sign their packages.

With help from the .NET Foundation team, I have managed to get MvvmCross packages signed again after the deprecation of the old tool, which was surprisingly straightforward. MvvmCross uses GitHub Actions Windows runners, and signing is now done by:

  1. Download the new sign tool
  2. Sign in to Azure CLI
  3. Sign the packages in the ${{ github.workspace }}/output folder

- name: Install sign tool
  run: dotnet tool install --tool-path . sign --version 0.9.1-beta.25278.1

- name: 'Az CLI login'
  uses: azure/login@a457da9ea143d694b1b9c7c869ebb04ebe844ef5 #v2.3.0
  with:
    client-id: ${{ secrets.SIGN_AZURE_CLIENT_ID }}
    tenant-id: ${{ secrets.SIGN_AZURE_TENANT_ID }}
    subscription-id: ${{ secrets.SIGN_AZURE_SUBSCRIPTION_ID }}

- name: Sign NuGet packages
  shell: pwsh
  run: >
    ./sign code azure-key-vault
    **/*.nupkg
    --base-directory "${{ github.workspace }}/output"
    --publisher-name "MvvmCross"
    --description "MvvmCross is a cross platform MVVM framework"
    --description-url "https://mvvmcross.com"
    --azure-key-vault-url "${{ secrets.SIGN_AZURE_VAULT_URL }}"
    --azure-key-vault-certificate "${{ secrets.SIGN_AZURE_KEY_VAULT_CERTIFICATE_ID }}"

The secrets are provided to you by the .NET Foundation.

You can double check that the package has been signed by using NuGet Package Explorer and uploading your signed package there. On the left side it will show a “Digital Signatures” section with something like:

(Screenshot: NuGet Package Explorer showing the Digital Signatures pane)
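
Alternatively, you can check the signatures from the command line with the dotnet CLI; the package path here is just an example:

dotnet nuget verify output/MyPackage.1.2.3.nupkg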

Migrating AppCenter Analytics Events to Application Insights

With AppCenter closing on the 31st of March, I bet some people are scrambling to figure out what to use instead. In my organization we have moved crashes into Sentry. However, there is still the question of what to do about Analytics events, which Sentry does not have an offering for.

We had AppCenter analytics events being exported into Application Insights. To patch this functionality, it takes only a few lines of code to replicate more or less what AppCenter exported.

In your Mobile App, you can add the Microsoft.ApplicationInsights NuGet package and create a TelemetryClient like so:

var config = new TelemetryConfiguration
{
    ConnectionString = MyConnectionString
};
telemetryClient = new TelemetryClient(config);

To set global properties on events, similar to what AppCenter did, you add these properties:

// You have to come up with a device id yourself; I just persist a GUID.
// GetOrCreateDeviceId() stands in for your own helper here.
telemetryClient.Context.Device.Id = GetOrCreateDeviceId();
telemetryClient.Context.Device.OperatingSystem = GetOperatingSystem();
telemetryClient.Context.Device.Model = Microsoft.Maui.Devices.DeviceInfo.Manufacturer;
telemetryClient.Context.Device.Type = Microsoft.Maui.Devices.DeviceInfo.Model;
telemetryClient.Context.GlobalProperties.Add("AppBuild", Microsoft.Maui.ApplicationModel.VersionTracking.CurrentBuild);
telemetryClient.Context.GlobalProperties.Add("AppNamespace", Microsoft.Maui.ApplicationModel.AppInfo.PackageName);
telemetryClient.Context.GlobalProperties.Add("OsName", Microsoft.Maui.Devices.DeviceInfo.Platform.ToString());
telemetryClient.Context.GlobalProperties.Add("OsVersion", Microsoft.Maui.Devices.DeviceInfo.VersionString);
telemetryClient.Context.GlobalProperties.Add("OsBuild", Microsoft.Maui.Devices.DeviceInfo.Version.Build.ToString());
telemetryClient.Context.GlobalProperties.Add("ScreenSize",
    $"{Microsoft.Maui.Devices.DeviceDisplay.MainDisplayInfo.Width}x{Microsoft.Maui.Devices.DeviceDisplay.MainDisplayInfo.Height}");

static string GetOperatingSystem()
{
    var os = Microsoft.Maui.Devices.DeviceInfo.Platform.ToString();
    var version = Microsoft.Maui.Devices.DeviceInfo.VersionString;
    return $"{os} ({version})";
}

If you want to identify users you can do:

telemetryClient.Context.User.Id = userId;
telemetryClient.Context.GlobalProperties["UserId"] = userId;

Then to track events you can do:

public void TrackEvent(string eventName, IDictionary<string, string> properties = null)
{
    if (properties != null)
    {
        // AppCenter exported event properties in a nested way
        telemetryClient.TrackEvent(eventName,
            new Dictionary<string, string> { { "Properties", ToPropertiesValue(properties) } });
    }
    else
    {
        telemetryClient.TrackEvent(eventName);
    }
}

private static string ToPropertiesValue(IDictionary<string, string> dictionary) =>
    "{" + string.Join(",", dictionary.Select(kv => $"\"{kv.Key}\":\"{kv.Value}\"")) + "}";

This will get you most of the way and any dashboards you’ve made based on the data in customDimensions.Properties can be kept alive indefinitely or until you switch to something else.

Consuming private Swift Packages in GitHub Actions

I had a case on a native App we are working on where we already have some Swift Packages in Azure DevOps Repos, which we would like to consume in a project that lives on GitHub.

Locally, this setup is pretty easy to work with if you are using something like Git Credential Manager. You just install the manager, use HTTPS URLs, and it pops open a web browser to ask for your credentials when needed. This interactive way of authenticating is not really possible when running in CI.
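
For reference, the dependency in Package.swift simply points at the regular HTTPS clone URL of the Azure DevOps repo, something like this (names and version are placeholders):

dependencies: [
    .package(url: "https://dev.azure.com/orgname/projectname/_git/MyPackage", from: "1.0.0")
]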

Stuff that didn’t work

I tried using git credential store and added something like this in the beginning of my workflow:

- name: Add Azure DevOps repo authentication for SwiftPM
  run: |
    git config --global credential.helper store
    git config --global --add http.https://[email protected]/$ORG/$PROJECT/_git/.extraHeader "AUTHORIZATION: Basic $BASE64PAT"
  env:
    ORG: orgname
    PROJECT: projectname
    BASE64PAT: ${{ secrets.AZDO_PAT_BASE64 }} # assumed secret name; a base64-encoded Azure DevOps PAT

This didn’t work at all, even though multiple sources online say it should.

I also thought of using SSH keys, but I don’t want to do that, since Azure DevOps Repos does not support LFS over SSH, and that would open up another can of worms for me.

.netrc to the rescue

.netrc is known from the *nix world for letting you store credentials used to automatically log in to services over protocols like FTP and HTTP, saving you from entering your credentials every time you connect to these services.
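
The file format itself is very simple. For Azure DevOps the generated file would contain something like this, with a PAT as the password:

machine dev.azure.com
login orgname
password <your PAT>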

In GitHub Actions, there is a convenient action to create such file for you:

- uses: extractions/netrc@v1
  with:
    machine: dev.azure.com
    username: orgname
    password: ${{ secrets.AZDO_PAT }} # assumed secret name; your Azure DevOps PAT

Adding the .netrc now allows SwiftPM to resolve the package, and CI is happy.

While this works for Swift Packages, it will likely work for other tools as well.