A Weird Imagination

Avoid desktop switching for KeePassXC dialogs

The problem#

I make heavy use of virtual desktops, so I use browser windows on multiple desktops. When logging into a site that uses passkeys, KeePassXC pops up a dialog on the desktop its main window is on, which is usually not the desktop my browser window is on. Then I have to switch over to that desktop to interact with the dialog and then back to the desktop I was working on. There is a GitHub issue reporting this, so it may be fixed in a future release.

The solution#

Use the script keepassxc-dialogs-all-desktops.sh:

#!/usr/bin/bash
# Start KeePassXC if it isn't already running.
if ! pgrep 'keepassxc$' >/dev/null 2>/dev/null
then
    nohup keepassxc "$@" >/dev/null 2>/dev/null &
fi
pid="$(pgrep 'keepassxc$' | head -1)"

xprop -spy -root _NET_CLIENT_LIST |
    while read -r line
    do
        # If KeePassXC exited, then we're done.
        if ! ps -p "$pid" >/dev/null 2>/dev/null
        then
            exit
        fi

        # Only look at last window in list.
        winid="${line/#*, /}"
        if [[ $(xdotool getwindowname "$winid") == \
            "KeePassXC - "* ]] && [[ \
            $(xdotool getwindowpid "$winid") == "$pid" ]]
        then
            # Set visible on all desktops.
            xdotool set_desktop_for_window "$winid" -1
        fi
    done

The script starts KeePassXC if it isn't already running and then, until KeePassXC exits, watches for dialog windows it opens, setting them to be visible on all desktops so they appear on whichever desktop is currently active.
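The trickiest line is the parameter expansion that pulls the newest window ID out of the property list. A quick sketch with a sample xprop line (the window IDs here are made up):

```shell
# Sample output line from: xprop -spy -root _NET_CLIENT_LIST
line='_NET_CLIENT_LIST(WINDOW): window id # 0x1400006, 0x1800003, 0x2000002'
# Delete the longest prefix ending in ", ", leaving only the last window ID.
winid="${line/#*, /}"
echo "$winid"   # prints 0x2000002
```

Since _NET_CLIENT_LIST lists windows in the order they were created, the last entry is always the most recently opened window, which is the only one that could be a new dialog.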

The details#

Read more…

Isolating application under development

The problem#

I was working on a bugfix for a GUI application I use, but I didn't want my development to interfere with the instance I was actually running. The application stores state in a few different places (data, settings, cache) that I didn't want the in-development code to modify, and it also tries to be smart: it refuses to open a second instance if one is already running, instead deferring to the already open instance. So I wanted a way to test it completely separately so it wouldn't trample over the copy I was actually using.

The solution#

I'll discuss alternatives and why I didn't use them below, but I ended up setting up a dummy user named lifereatest that's used only for running the code being tested, and granting that user access to the X display so it can show GUI programs:

$ sudo adduser lifereatest
$ xhost +SI:localuser:lifereatest

Then after each change, I put the built binary somewhere that user can actually read it and used sudo to run it as that user:

$ cp build/liferea /path/lifereatest/can/access/
$ sudo -u lifereatest /path/lifereatest/can/access/liferea
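Those two steps quickly get repetitive, so they can be wrapped in a small script; a sketch using the same placeholder path as above:

```shell
#!/bin/bash
# Copy the fresh build somewhere the dummy user can read it, then run it there.
set -e
cp build/liferea /path/lifereatest/can/access/
exec sudo -u lifereatest /path/lifereatest/can/access/liferea "$@"
```

Any extra arguments to the script get passed along to the liferea binary.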

The details#

Read more…

How *NOT* to set up a makeshift webcam stream

The problem#

Last time, I described how I set up a way to stream a webcam attached to my desktop to a Home Assistant dashboard on my smartphone, so I could view it from anywhere on my local network. More specifically, I described the good solution I eventually settled on, not the detours I took through other streaming setups that don't work as well.

The solution#

This is a post about what not to do. See my previous post for what to do.

Since I had heard of RTSP, my first thought was that setting up nginx to serve the stream that way would work. I ended up going down a rabbit hole of setting up nginx's RTMP module for DASH and HLS streaming. It did end up sort of working, but it was rather flaky and laggy. I don't mean to say these are bad or obsolete technologies, but they were definitely not the right tool for what I wanted to do.

The details#

Read more…

How to set up a makeshift webcam stream

The problem#

I wanted to set up a way to get a live video stream of one part of my living space no matter where I was at home, and I wanted that stream to be easily viewable by any member of the household, but not by anyone else. Since where I wanted the camera pointed was near my desktop, which has a USB webcam attached, I figured it should be easy to set up streaming from that webcam to my phone over Wi-Fi. While specialized hardware for this purpose is sold as security cameras or baby monitors, I wanted to see what I could do with the hardware I already had.

The solution#

Install go2rtc via the Home Assistant WebRTC Camera package through HACS. Once set up, the camera stream can be shown in the Home Assistant UI. If you don't already have Home Assistant set up for some other reason, it probably makes more sense to install go2rtc directly.

To configure streaming a webcam through go2rtc, open its web UI at http://localhost:1984/ and look at the "Add" tab to see which streams are available. A local webcam will be listed under "V4L2 (USB)". Note that if go2rtc is inside a Docker container (as my Home Assistant setup is), then you will need to grant the container access to the relevant device, probably /dev/video0 or similar. The stream URL from the "Add" tab can then be added to the streams section under the "Config" tab:

streams:
  mycamera:
    - v4l2:device?video=/dev/video0&input_format=mjpeg&video_size=1920x1080&framerate=5

Use the URL you found under the "Add" tab, not the one here, which is for my setup, and adjust the video_size and framerate as desired. You can name the stream (here mycamera) anything you want. Then after you click the "Save & Restart" button on the "Config" tab, the camera should appear on the "Streams" tab. You can click the associated "stream" link to watch the stream.
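As noted above, when go2rtc runs inside a Docker container it needs access to the video device; one way to grant that is the devices section in docker-compose (the service name here is an assumption):

services:
  homeassistant:
    devices:
      - /dev/video0:/dev/video0

The equivalent for plain docker run is the --device /dev/video0 flag.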

In Home Assistant, you can make a dashboard with a card of the "WebRTC Camera" type and the URL set to mycamera (or whatever you named it):

type: custom:webrtc-camera
url: mycamera

The details#

Read more…

Linting dates in post filenames

The problem#

The filename of each of my blog posts contains the publishing date, while the publishing date actually used by Pelican is in a header line in the file starting with Date:. For example, this file is content/2025/1026-lint-blog-dates.md and contains the header line

Date: 2025-10-26 02:00

Since I write posts to be published in the future, sometimes I reschedule future posts, which requires both renaming the file and updating the Date: line. And any time you violate DRY, there's an opportunity for one of the copies of information to be out of date. In this case, that means that I could think I had rescheduled a post but in fact only renamed the file without updating the Date: line.

The solution#

Put the following script in check_filenames.sh in the root of your blog repository. You will likely want to modify it based on what rules apply to your blog:

#!/bin/bash

error=""
cd "$(dirname "$0")"/content || exit
for year in 20*
do
  for file in "$year"/????-*.md
  do
    base="$(basename "$file")"
    month=${base:0:2}
    day=${base:2:2}
    expected="Date: $year-$month-$day 02:00"
    actual=$(awk 'BEGIN { RS="\n\n" } { print $0; exit }' \
      "$file" | grep '^Date:')

    if [ -z "$actual" ]
    then
      echo "$file" is missing a \"Date:\" line.
      error=1
    elif [ "$expected" == "$actual" ]
    then
      true # all good
    else
      echo "$file" has unexpected publish date/time: \
        \""$actual"\", expected \""$expected"\"
      error=1
    fi
  done
done

if [ -n "$error" ]
then
  echo
  echo "Some errors found. Please review above."
  exit 1
fi
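The awk invocation in the script grabs only the first blank-line-separated block of the file (the metadata header), so a Date: line later in the body can't cause a false match; a quick demonstration (using gawk or mawk, which treat a multi-character RS as a separator):

```shell
# A minimal post file: metadata header, blank line, then the body.
printf 'Title: Example\nDate: 2025-10-26 02:00\n\nDate: in the body\n' > post.md
# RS="\n\n" makes the first record the header block; print it and stop.
awk 'BEGIN { RS="\n\n" } { print $0; exit }' post.md | grep '^Date:'
# prints only: Date: 2025-10-26 02:00
```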

The details#

Read more…

Troubleshooting Syncthing disconnection

The problem#

As previously mentioned, I use Syncthing to synchronize various files between different computers within my household, some of which have owners who prefer to use Windows. I recently noticed the files weren't getting synchronized anymore, and upon checking on the Syncthing dashboard at http://localhost:8384/, I saw errors similar to this Reddit post and these Syncthing forum posts. The connection status was "Disconnected (Inactive)" and the "Address" section showed a bunch of different ways it was trying to connect with errors like "no recent network activity", "i/o timeout", and "connection refused". Both computers showed similar errors. I was confused since I knew it worked when I first set it up, and I didn't think I had changed anything relevant.

The solution#

After working through some troubleshooting steps, the eventual fix was to adjust the firewall on the Windows computer. Specifically, I marked my home network as a "private", not "public", network. That option appears in the "properties" for the network connection along with a short description of what that setting does, including that it lets you "use apps that communicate over this network". Windows Firewall has different rules for "private" and "public" networks, so this effectively tells the firewall to stop blocking Syncthing and other applications that communicate over the local network.

The details#

Read more…

Troubleshooting website connection failure

The problem#

A website that I had been using every day suddenly stopped loading. Connecting to it just timed out, as if the server were completely down. I figured it was a temporary interruption in service, but several hours later it was still not working and there was no status message on the organization's main site about an outage. So I suspected the problem was somehow on my side.

The solution#

There are many different reasons that can cause the same behavior, so I can't offer one universal solution. But in this case, I reset my internet connection by clicking the "reconnect" button in my router's WAN settings. With my ISP setup, that grants a new IP address, which somehow fixed the issue.

The details#

Read more…

Markdown code blocks in footnotes

The problem#

In a recent post, I wanted to include a large code block containing a log. To avoid making the reader scroll past that long block, I put it in a footnote. But the triple-backtick (```) syntax for code blocks doesn't work in Python-Markdown footnotes:

[^1]: Broken footnote, do not use:
    ```py
    print("Hello World!")
    ```

outputs the code block before the footnote, not inside it.

The solution#

For code blocks in footnotes, indent the code lines (and use ::: on the first line to specify the highlighting mode if desired). For example, the Markdown for the footnote in that post starts as follows:

[^full-journalctl-log]:
    The full log:

        :::text
        Aug 09 02:43:20 host systemd[1]: Starting unifi.service - unifi...
        Aug 09 02:43:20 host unifi-network-service-helper[2203]: grep: /var/lib/unifi/system.properties: Permission denied
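Applying the same pattern to the broken example from the top of this post, it would become:

[^1]: Working footnote:

        :::py
        print("Hello World!")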

The details#

Read more…

Saving shell transcripts

The problem#

When writing a blog post like last time's, I often will be at least partway through the process before I realize it's interesting enough to write a post on. Then I need to somehow go back and reconstruct a narrative of the troubleshooting steps I performed.

The solution#

As long as I still have the terminals or screen sessions open, I can at least capture a snapshot of the scrollback history that's still available.

In Xfce Terminal (and likely similar in other terminals), I can save the history by going into the "Edit" menu and selecting "Select All" and then "Copy" (plain text) or "Copy as HTML" (to also capture styles) and pasting into a file or writing the clipboard to a file using xclip:

xclip -o -selection clipboard > some_file.html

In screen, the hardcopy command can be used to dump the full history to a file. Type Ctrl+a, : to get into command mode and run the command hardcopy -h to include the entire scrollback buffer. Unfortunately, this is plain text only.

If you use tmux instead of screen, the equivalent is the capture-pane command, which does support saving styles if you provide the -e option. Note that this saves the styles as terminal escape codes, so cat-ing the file to a terminal will look right, but it is not HTML like Xfce Terminal produces; you could convert it to HTML using aha.
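For example, from inside a tmux session, the full scrollback with styles can be dumped and converted in one go (the filenames here are arbitrary):

```shell
# Dump the entire scrollback (-S -) with escape codes (-e) to stdout (-p).
tmux capture-pane -e -S - -p > transcript.txt
# Optionally convert the escape codes to HTML (requires aha to be installed).
aha < transcript.txt > transcript.html
```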

The details#

Read more…

Unifi network controller failing to start

The problem#

I use Ubiquiti-branded network products for my switches and wireless access points and use the UniFi Network controller software to configure them. I noticed the web interface wasn't working, and checking the status of the service showed it starting with no further information (and looking at the log files didn't show it generating any errors either):

$ sudo systemctl status unifi.service
[...]
Aug 16 14:32:13 host systemd[1]: Starting unifi.service - unifi...

Specifically, this was on version 9.0.114-28033-1 of the unifi Debian package provided by Ubiquiti.

The solution#

After multiple steps of investigation, I finally got the UniFi service running by changing the ownership of all of its data files to the unifi user/group:

sudo chown -R unifi:unifi /usr/lib/unifi/{data,logs}

But this might not have been the best fix: the first thing it did when I opened the web interface was tell me there's no database and offered to restore from a backup. Luckily I don't change the settings much and it did regular automatic backups (in addition to a slightly older manual backup I had), so I didn't lose anything. Also, it's possible the data loss was due to some other troubleshooting step, not that one. As always, keep backups and before making a change consider making an extra copy of the directory before the change (or making a snapshot if you're using ZFS or another filesystem that supports them).
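In that spirit, here is a small helper for making a dated copy of a directory before changing it (the function name is mine; for the UniFi data directories you would run the cp under sudo):

```shell
# Make a dated backup copy of a directory, preserving ownership and times.
backup_dir() {
  cp -a "$1" "$1.bak-$(date +%F)"
}

# Demonstration on a scratch directory:
mkdir -p demo/data
echo 'settings' > demo/data/config
backup_dir demo/data
cat "demo/data.bak-$(date +%F)/config"   # prints: settings
```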

The details#

Read more…