A Seasoned Tester's Crystal Ball
This blog is about thinking of things past, present and future in testing. As much as I'd like to see clearly, my crystal ball is quite dim. Learning is essential and this is my tool for that. A sister blog in Finnish: http://testauskirja.blogspot.com
Wednesday, January 28, 2026
The Box and The Arrow.
A lot of what I used to write in my blog, I find myself writing as a LinkedIn post. That is not the greatest of strategies, given the lack of permanence of anything you post there, so I try to do better.
My big insight last week: the box and the arrow. Well, this is actually the D and R from the DSRP toolset for systems thinking, applied to value generation. At an earlier place of work, when I hired consultants, they came with very different ideas of hour reporting that then mapped to cost.
Friday, January 16, 2026
The Results Gap
Imagine you are given an application to test, with no particular instructions. Your task, implicitly, is to find some of what others have missed. If quality is great, you have nothing to find. If the testing done before is great, none of the things you find surprise anyone. Your work, given that application to test, is to figure out that results gap, and whether it exists in the first place.
You can think of the assignment as being given a paper with text written in invisible ink. The text is there, but it takes special skill to make it visible. If no one cares what is written on the paper, the intellectual challenge alone makes little sense. Finding some of what others have missed, of relevance to the audience asking you to find information, is key. Anything extra is noise.
Back in the days of some projects, the results gap that we testers got to work with was very significant, and we learned to believe developers are unable to deliver quality and test their own things. That was a self-fulfilling prophecy. The developers "saving time" by "using your time" did not actually save time; it was akin to a group of friends eating pizza and leaving the boxes around unless someone walked around pointing at and reminding them of the boxes. We know we can do better on basic hygiene, and anyone can point out pizza boxes. There may be other information that not everyone will notice, but one reminder turned into a rule works nicely for making those agreements in our social groups. With that, the results gap got to be the surprises.
The results gap is the space between two groups having roughly the same assignment but providing different results. Use of time leads to the gap, because 5-minute unit testing and 50-minute unit testing tend to allow for different activity. Availability of knowledge leads to the gap, because even with time you might not notice problems without a specific context of knowledge. Access to production-like environments and experiences leads to the gap, both by not recognizing what is relevant for the business domain and by not even being able to see it due to missing integrations or data.
Working with the results gap can be difficult. We don't want to use so much time on testing that was already someone else's responsibility. Yet we don't want to leak the problems to production, and we expect the last group assigned responsibility for testing to filter out as much of what the others missed as possible. We do this best by sizing the results gap and making it smaller, usually through coaching and team agreements.
For example, realizing that by testing and reporting bugs our group was feeding the existence of the results gap led to a systemic change. Reporting bugs by pairing to fix them helped fix the root cause of the bugs. It may have been extra effort on testing for our group, but it saved significant time in avoiding rework.
The results gap is a framing for multiple groups' agreed responsibilities towards quality and testing. If no new information surprises you come production time, your layered feedback mechanisms bring you good enough quality (scoping and fixing enough) with good enough testing (testing enough). Meanwhile, my assignments as a testing professional are framed in contemporary exploratory testing, where I combine testing, programming and collaboration to create a system of people and responsibilities where quality and testing leave less of a results gap for us to deal with.
Finally, I want to leave you with this idea: bad testing, without results, is still testing. It just does not give much of any of the benefits you could get with testing. Exploratory testing and learning actively transform bad testing into better. Coverage is focused on walking with the potential to see, but for results, you really need to look and see the details that the sightseeing checklist did not spell out.
Tuesday, January 6, 2026
Learning, and why agency matters
Some days Mastodon turns out to be a place of inspiration. Today was one of those.
It started with me sharing a note from day-to-day work that I was pondering on. We have a 3-hour Basic and Advanced GitHub Copilot training organized at work that I missed, and I turned to my immediate team asking for 1-3 insights of what they learned as they were at the session. I knew they were at the session because I had approved hours that included being in that session.
I asked as a curious colleague, but I can never help being their manager at the same time. The question was met with silence. So I asked a few of the people one on one, to learn that they had been in the session but zoned out for various reasons. Some of the reasons included having a hard time relating to the content as it was presented, the French-English accent of the presenters, getting inspired by details that came in too slowly and taking time to search for information online on the side, and just that the content / delivery was not particularly good.
I found it fascinating. People take 'training' and end up not being trained on the topic they were trained on, to the degree that they can't share one insight the training brought them.
For years, I have been speaking on the idea of agency, a sense of being in control, and how important that is for learning-intensive work like software testing. Taking hours for training and thinking about what you are learning is a great way of observing agency in practice. You have a budget you control, and a goal of learning. What do you do with that budget? How do you come out, having used that budget, as someone who now has learned? It is up to you.
In job interviews, when people don't know test automation, they always say "but I would want to learn". Yet when looking back at their past learning in the space of test automation, I often find that "I have been learning in the past six months" ends up meaning they have invested time in watching videos, without being able to change anything in their behaviors or attain knowledge. They've learned awareness, not skills or habits. My response to claims of learning in the past is to ask for something specific they have been learning, and then to ask to see if they now know how to do it in practice. The most recent example in this space was me asking four senior test automator candidates how to run Robot Framework test cases I had in an IDE - 50% did not know how. We should care a bit more about whether our approaches to learning are impactful.
So these people, now including me, had the opportunity of investing 3 hours in learning GitHub Copilot. Their learning approach was heavily biased towards the course made available. But with a strong sense of agency, they could do more.
They could:
- actively seek the 1-3 things to mention from their memories
- say they didn't do the thing, and that in the same time they did Y and learned 1-3 things to mention
- not report the hours as training if the video was only playing while they did something completely unrelated
- stop watching the online session and wait for the recording, to have control over speed and fast-forwarding to relevant pieces
- ...
In the conversations on Mastodon, I learned a few things myself. I was reminded that information intake is a variable I can control with a high sense of agency in my learning process. And I learned there is a concept of 'knowledge exposure grazing', where you are snacking on information, and it is a deliberate strategy for a particular style of learning.
Like with testing, being able to name our strategies and techniques gives us control over and explainability of what we are doing. And while I ask as a curious colleague / manager, what I really seek is more value for the time investment. If your learning teaches others, in a nutshell you are more valuable. If your learning does not even teach you, you are making poor choices.
Because it's not your company giving you the right trainings; it's you choosing to take the kinds of trainings, in the style, that you know work for you. Through experimentation you learn which variables you should tweak. And that makes you a better learner, and a better tester.
Saturday, January 3, 2026
The Words are a Trap
Wednesday, December 31, 2025
Routines of Reflection 2025
As I woke up to a vacation day on 31.12.2025, a thought remained from sleep: I would need to rethink the strategies of how I use my time and how I make my choices for the next year. I was trying to make sense of the year we are about to leave behind, and I knew that if there was a phrase I would use to describe it, it would most likely be consistent effort. On holidays and weekends, the consistent effort went into reading, and I have been through more books in a year than I read in the last ten combined (fiction, 51 titles on Kindle finished in 2025 and 73 in 2024, starting in the week I turned 50). At work, it was whatever was the theme of the week / month / quarter, and I adjusted direction as I learned so much throughout the year.
While the efforts feel high and recognizable, I am not convinced by the strategies behind those efforts, and particularly by the impact that I am experiencing or even aspiring to. I am, after all, in a lovely, unique career position where I have a lot of power over the choices we make on testing, in an organization where I have a lot of learning to do on how to work on power with people, and particularly power with other organizations. Consulting, and my role in the AI-enhanced application testing transformation, force every day to be one full of learning.
Describing the effort
As consultants, we track our hours used, leaving me with data of my year at work.
So I know that I used 7% of my annual work hours on receiving visible training. This included:
- Participating in conferences I did not speak at: Agile Tampere, Oliopäivät Tampere, Krogerus Data Symposium
- Classroom training on Sales (did not like this), Delivery framework (liked this), start of Growth training (loving this).
- Ensemble learning for the ISTQB Advanced Test Automation certification, and completion of the full set of four Advanced certificates.
- Ensemble learning for the CPACC accessibility certification and completion of the certification, with the start of the accessibility advocacy that comes with holding the certificate without an exam every five years.
- AI, particularly Agents in GitHub Copilot for non-automation use cases
- Python, teaching an 8-part series of Python for testing at work instead of complaining for the same amount of time that some people did not know the basics - they do now.
- Contemporary exploratory testing, seeing versatile problems in target applications and combining automation into it
- Level of testing skill
- The controls at scale organization, allocation and targets
- Sense of agency with understanding of impacts
I'd like to think that some of the testing advice or inspiration I have provided this year has impacts that I will only learn of later. Kind of like receiving a message this year from two people I worked with 10 years ago: one telling me that I still impact their career on a regular basis due to the timing of when our professional paths crossed, and another telling me their organization now has better diversity mechanisms because our time together was one where I invested effort into letting people know I am not "guys" and that I would risk personal negative consequences for working for social justice.
So with all the reflection, I leave a call for myself and the community around me to find ways of fixing challenge 1 - the skill of testing. While I sense a need for a personal contribution in that space, I also know that the only way we solve problems at scale is democratization of knowledge and working together. So that is up next, going into 2026.
Closing off
I still think my reflection wins over what social-media-based AI tools can do. Top quote is my challenge 1.
Monday, December 15, 2025
Participant skills to retrospectives
I'm an avid admirer of retrospectives and of the sentiment that if there was only one agile practice we implemented, let it be retrospectives. We need to think about how we are doing on a cadence, and we need to do it so that we ourselves get to enjoy the improvements. Thus retrospectives, not postmortems. Because let's face it: even if I learned today that we have a lessons-learned-from-past-projects database, for me to apply other people's lessons learned, it's likely that no amount of documentation on their context is sufficient to pass me the information. Retrospectives maintain a social context in which we are asked to learn.
Last week I argued that the best retros I have been in were due to great participants with an average (if any) facilitator. My experience, spoken out loud, resulted in a colleague writing a whole blog post on facilitator skills: https://www.linkedin.com/pulse/retrospectives-why-facilitators-skills-matter-more-than-spiik-mvp--fmduf/?trackingId=yc302cSWzR0ZhHTSNoyVmQ%3D%3D
Success with retrospectives appears to be less an issue of roles and responsibilities than of a learning culture. When in a team where each gig is short and you could be out any moment (consulting!), it takes courage to open up about something that could be improved. The facilitator does not make it safe. The culture makes it safe. And while the facilitator may hold space for that safety and point out a lack of it, anyone can.
When someone speaks too much, we think it's the facilitator's skills that matter in balancing the voices. Balancing could come from anyone in the team. Assuming the facilitator notices all things feels like too much to place on a single person.
Building cultures where the work does not rely on a dedicated role is what I'd like to see. Rotating the role on the way to such a state tends to be a better version than having someone consistently run retros.
Having facilitation skills correlates with having participation skills too. At the least, it changes the dynamic from a passive participant, already afraid to express their ideas, to an active contributor.
Friday, November 28, 2025
Observations of a habit transformation
A month ago, I gave a colleague an assignment. They were to create TypeScript Playwright automation using GitHub Copilot and Playwright Agents. While making progress on the tests was important, learning to use agents to support that work was just as important.
We had a scope for a test: one particular scenario previously created with a recording-style automation tool. Recording usually took an hour, but there was no fixing the script. Whenever it would fail, rerecording was the chosen form of maintenance. No one knew anymore if the thing that was recorded now matched what was recorded when the test was originally imagined. The format of the recording was an XML pudding where pulling out things to change took more effort than anyone had been willing to invest.
Halfway through the month, I checked on how the work was progressing, to learn that it had seemed easier to work without agents due to familiarity. With a bit of direction, that was no longer an option for continuing.
Three days before the deadline, I checked on how the work was progressing, to learn that the scope of the test had been forgotten and something new and shiny was being tested, mostly for playing with the Playwright Agents. With a bit of direction, the scope was done by the review meeting.
Yes, I know I should have been checking in more frequently. That option, however, was not a possibility.
Looking at what got done, I learned a few things though.
I learned that 134 LOC were added across 8 functions.
I learned that three significant new capabilities (env configuration, data separation and parametrization, and fixtures) were added, and that the scope of the intended design of the original test had been captured (see the sketch after this list).
I learned that making the test reliable, by adding verification that we are at the right place before proceeding, had taken a significant amount of work.
I learned that one type of element was never seen by the Playwright recording tool, and that required handcrafting the appropriate locators.
I learned that using agents comes with more context that I had not fully managed to pass on. If your agents out of the box are called planner, generator and healer, the idea that you might want to skip the planner, or even write your own agents just following the existing ones as examples, was not straightforward.
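To make those capabilities concrete, here is a minimal TypeScript Playwright sketch of what they can look like. This is not the project's actual code: the fixture name, BASE_URL variable, data-testid value, headings and the order data are illustrative assumptions.

// fixtures.ts - a sketch of env configuration, a wait before proceeding, a handcrafted locator, and a fixture
import { test as base, expect, Page } from '@playwright/test';

export const test = base.extend<{ loggedInPage: Page }>({
  loggedInPage: async ({ page }, use) => {
    // env configuration: the target environment comes from outside the test code
    await page.goto(process.env.BASE_URL ?? 'https://staging.example.com');
    // handcrafted locator for an element the recording tool never saw
    await page.locator('[data-testid="login-button"]').click();
    // verify we are at the right place before proceeding
    await expect(page.getByRole('heading', { name: 'Dashboard' })).toBeVisible();
    await use(page);
  },
});
export { expect };

// scenario.spec.ts - data separation and parametrization over the same steps
import { test, expect } from './fixtures';

const orders = [
  { product: 'basic', quantity: 1 },
  { product: 'premium', quantity: 3 },
];

for (const order of orders) {
  test(`order ${order.quantity} x ${order.product}`, async ({ loggedInPage }) => {
    await loggedInPage.getByLabel('Product').selectOption(order.product);
    await loggedInPage.getByLabel('Quantity').fill(String(order.quantity));
    await loggedInPage.getByRole('button', { name: 'Order' }).click();
    await expect(loggedInPage.getByText('Order received')).toBeVisible();
  });
}

The fixture centralizes the login and the wait for a known landmark, so individual tests stay at the level of the scenario they exercise, and the data loop gives parametrization without duplicating test code.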
Seeing this unfold in hindsight from the pull requests, I modeled the process of how it was built.
First, things were either recorded or prompted out of the AI. Recording was clearly the preferred, controllable way of starting.
Then things were made to work by adding what the recording did not capture.
Then a lot of work was done on structure and naming.
There were a few iterations of making it work and making it pretty.
So I compared notes with some of the other assignments like this that I have given to people.
There were five essentially different ideas of how work like this would get done.




