🔬 How To Run UX Research In B2B and Enterprise. Practical techniques for strict environments, often without access to users.

🚫 Things you typically can't do
1. Stakeholder interviews ← unavailable
2. Competitor analysis ← not public
3. Data analysis ← no data collected yet
4. Usability sessions ← no users yet
5. Recruit users for testing ← expensive
6. Interview potential users ← IP concerns
7. Concept testing, prototypes ← NDA
8. Usability testing ← IP concerns
9. Sentiment analysis ← no media presence
10. Surveys ← no users to send them to
11. Get support logs ← no security clearance
12. Study help desk tickets ← no clearance
13. Use research tools ← no procurement yet

✅ Things you typically can do
1. Focus on requirements + task analysis
2. Study existing workflows and processes
3. Study job postings to map roles and tasks
4. Scrape frequent pain points and challenges
5. Use Google Trends for related search queries
6. Scrape insights to build a service blueprint
7. Find and study people with similar tasks
8. Shadow people performing similar tasks
9. Interview colleagues closest to the business
10. Test with customer success and domain experts
11. Build an internal UX testing lab
12. Build trust and confidence first

In B2B, the people buying a product are not always the people who will use it. As B2B designers, we have to design at least two different types of experience: the customer's UX (of the supplier) and the employees' UX (of the product's end users).

In the customer's UX, we typically work within a highly specialized domain, with legacy-ridden systems and strict compliance and security regulations. You might not speak with the stakeholder but rather with company representatives, who regulate the flow of data they share to manage confidentiality, IP, and risk.

In the employees' UX, things don't look much brighter. We can rarely speak with users, and when we do, there is often only a handful of them. Due to security clearance limitations, we don't get access to help desk tickets or support logs, and there are rarely any similar public products we could study.

As H Locke rightfully noted, if we shine the light strongly enough from many sources, we might end up getting a glimpse of the truth. Scout everything to see what you can find. Find the people who are closest to your customers and to your users. Map the domain and workflows in service blueprints.

Most importantly: start small and build a strong relationship first. In B2B and Enterprise, most actors are incredibly protective and cautious, often carefully manoeuvring around compliance regulations and layers of internal politics. No stones will be moved unless there is strong mutual trust on both sides.

It can be frustrating, but also remarkably impactful. B2B relationships are often long-term relationships for years to come, allowing you to make a huge impact for people who can't choose what they use and desperately need your help to do their work better.

[continues in comments ↓] #ux #b2b
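One concrete way to act on the "use Google Trends" point in the list above: a minimal sketch using the unofficial pytrends library. The keyword, timeframe, and parameters are hypothetical illustrations, not part of the original post.

```python
# A minimal sketch: pulling related search queries from Google Trends via
# the unofficial pytrends library (pip install pytrends).
# The keyword and timeframe below are hypothetical examples.
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=0)
pytrends.build_payload(kw_list=["warehouse management system"], timeframe="today 12-m")

related = pytrends.related_queries()  # dict: keyword -> {"top": df, "rising": df}
for keyword, tables in related.items():
    top = tables.get("top")
    if top is not None:
        print(f"Top queries related to '{keyword}':")
        print(top.head(10))  # DataFrame with 'query' and 'value' columns
```

Even a rough list of related queries like this can seed a map of the domain vocabulary and tasks when you can't talk to users directly.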
Usability Testing Techniques
-
We recently wrapped up usability testing for a client project. In the fast-paced environment of agency culture, the real challenge isn't just gathering insights, it's turning them into actionable outcomes quickly and efficiently. Here's how we ensured that no data was lost, priorities were clear, and progress was transparent for all stakeholders:

1️⃣ Organized Documentation: We documented everything in a shared Excel sheet, categorizing all observations into usability issues, enhancement ideas, and general comments. Each issue was tagged with severity (critical, high, medium, low) and frequency to highlight trends and prioritize fixes.

2️⃣ Action-Oriented Workflow: For high-severity and high-frequency issues, immediate fixes were planned to minimize potential impact. Ownership was assigned to specific team members, with timelines to ensure quick resolutions in line with our fast-moving development cycle.

3️⃣ Client Transparency: A summarized report was shared with the client, showing the issues identified, the actions taken, and the progress made. This kept everyone aligned and built confidence in our iterative design process.

I've never before felt the level of confidence that comes from having such detailed and well-organized documentation. It not only gave us clarity and streamlined our internal processes but also empowered us to communicate progress effectively to the client, reinforcing trust and showcasing the value of our iterative approach.

It's a reminder that thorough documentation isn't just about organizing data; it's about enabling smarter, faster decision-making. In agency culture, speed matters, but so does precision. How does your team balance the two during usability testing?
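The same severity-plus-frequency triage can live in a few lines of code as easily as in a spreadsheet. A minimal sketch in pandas; the column names and sample findings are hypothetical, not from the project described above.

```python
# A minimal sketch of severity + frequency triage for usability findings,
# assuming a simple observations table; rows and column names are hypothetical.
import pandas as pd

observations = pd.DataFrame([
    {"issue": "Checkout button hidden on mobile", "severity": "critical", "frequency": 5},
    {"issue": "Unclear filter labels",            "severity": "medium",   "frequency": 3},
    {"issue": "Typo on confirmation page",        "severity": "low",      "frequency": 1},
])

severity_rank = {"critical": 4, "high": 3, "medium": 2, "low": 1}
observations["priority"] = (
    observations["severity"].map(severity_rank) * observations["frequency"]
)

# Highest-priority issues first: fix critical, frequent problems immediately
print(observations.sort_values("priority", ascending=False))
```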
-
💡 System Usability Scale (SUS): A Simple Yet Powerful Tool for Measuring Usability

The System Usability Scale (SUS) is a quick, efficient, and cost-effective method for evaluating product usability from the user's perspective. Developed by John Brooke in 1986, SUS has been extensively tested for decades and remains a trusted industry standard for assessing user experience (UX) across various systems.

1️⃣ Collecting user feedback
Collect responses from users who have interacted with your product using the SUS questionnaire, which consists of 10 alternating positive and negative statements, each rated on a 5-point Likert scale from "Strongly Disagree" (1) to "Strongly Agree" (5).

📌 Important: The SUS questionnaire can be customised, but whether it should be is debated. The IxDF - Interaction Design Foundation suggests customisation to better fit specific contexts, while NNGroup recommends using the standard version, as research supports its validity, reliability, and sensitivity.

2️⃣ Calculation
To calculate the SUS score for each respondent:
• For positive (odd-numbered) statements, subtract 1 from the user's response.
• For negative (even-numbered) statements, subtract the response from 5.
• Sum all scores and multiply by 2.5 to convert to a 0-100 scale.

3️⃣ Interpreting the Results
• Scores above 85 indicate excellent usability.
• Scores above 70 indicate good usability.
• Scores below 68 (the commonly cited average) may suggest usability issues that need to be addressed.

🔎 Pros & Cons of Using SUS

✳️ Advantages:
• Valid & Reliable: provides consistent results across studies, even with small samples, and accurately measures perceived usability.
• Quick & Easy: requires no complex setup and takes only 1-2 minutes to complete.
• Correlates with Other Metrics: works alongside NPS and other UX measures.
• Widely respected and used: a trusted usability metric since 1986, backed by research, industry benchmarks, and extensive real-world application across domains.

❌ Disadvantages:
• SUS was not designed to diagnose usability problems: it provides only a single overall score, which may not reveal which aspects of the interface or interaction are at fault.
• Subjective User Perception: it measures how users feel about a system's ease of use and overall experience, rather than objective performance.
• Interpretation Challenges: if users haven't interacted with the product long enough, their perception may be inaccurate or limited.
• Cultural and language biases can affect SUS results, as users from different backgrounds may interpret questions differently or have varying familiarity with the system.

💬 What are your thoughts? Check references in the comments! 👇

#UX #metrics #uxdesign #productdesign #SUS
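The scoring steps above translate directly into code. A minimal sketch, assuming the standard 10-item questionnaire with responses on the 1-5 scale; the sample responses are hypothetical.

```python
# A minimal sketch of SUS scoring as described above.
# `responses` is one respondent's answers to the 10 items, in order,
# each on the 1-5 Likert scale (the sample data below is hypothetical).

def sus_score(responses: list[int]) -> float:
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("Expected 10 responses, each between 1 and 5")
    total = 0
    for i, r in enumerate(responses):
        if i % 2 == 0:          # odd-numbered (positive) items: response - 1
            total += r - 1
        else:                   # even-numbered (negative) items: 5 - response
            total += 5 - r
    return total * 2.5          # scale the 0-40 raw total to 0-100

print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # -> 85.0
```

Averaging `sus_score` over all respondents gives the study-level SUS score that is then compared against the 68 benchmark.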
-
You run a usability test. The results seem straightforward: most users complete the task in about 10 seconds. But when you look closer, something feels off. Some users fly through in five seconds, while others take over 20. Same interface, same task, wildly different experiences.

Traditional UX analysis might smooth this out by reporting the average time or success rate. But that average hides a crucial insight: not all users are the same. Maybe experienced users follow intuitive shortcuts while beginners hesitate at every step. Maybe some users perform better in certain conditions than others. If you only look at the averages, you'll never see the full picture.

This is where mixed-effects models come in. Instead of treating all users as if they behave the same way, these models recognize that individual differences matter. They help uncover patterns that traditional methods, like t-tests and ANOVA, tend to overlook. Mixed-effects models help UX researchers move beyond broad generalizations and get to what really matters: understanding why users behave the way they do.

So next time you're analyzing UX data, ask yourself: are you just looking at averages, or are you really seeing your users?
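For a concrete sense of what this looks like in practice, here is a minimal sketch using statsmodels' mixed linear model with a random intercept per user; the data frame, column names, and numbers are hypothetical.

```python
# A minimal sketch of a mixed-effects model on task-time data, using
# statsmodels; the sample data and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# Each row is one task attempt; users contribute multiple attempts
df = pd.DataFrame({
    "user_id":    [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
    "experience": ["novice", "novice", "expert", "expert", "novice", "novice",
                   "expert", "expert", "novice", "novice", "expert", "expert"],
    "task_time":  [22.1, 18.4, 5.2, 6.0, 19.8, 21.5, 4.7, 5.9, 17.3, 20.2, 6.4, 5.1],
})

# Fixed effect: experience level. Random intercept per user captures the
# individual differences that a plain t-test or ANOVA would smear into noise.
model = smf.mixedlm("task_time ~ experience", df, groups=df["user_id"])
result = model.fit()
print(result.summary())
```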
-
Ever noticed how two UX teams can watch the same usability test and walk away with completely different conclusions? One team swears “users dropped off because of button placement,” while another insists it was “trust in payment security.” Both have quotes, both have observations, both sound convincing. The result? Endless debates in meetings, wasted cycles, and decisions that hinge more on who argues better than on what the evidence truly supports.

The root issue isn’t bad research. It’s that most of us treat qualitative evidence as if it speaks for itself. We don’t always make our assumptions explicit, nor do we show how each piece of data supports one explanation over another. That’s where things break down. We need a way to compare hypotheses transparently, to accumulate evidence across studies, and to move away from yes/no thinking toward degrees of confidence.

That’s exactly what Bayesian reasoning brings to the table. Instead of asking “is this true or false?” we ask: given what we already know, and what this new study shows, how much more likely is one explanation compared to another? This shift encourages us to make priors explicit, assess how strongly each observation supports one explanation over the alternatives, and update beliefs in a way that is transparent and cumulative. Today’s conclusions become the starting point for tomorrow’s research, rather than isolated findings that fade into the background.

Here’s the big picture for your day-to-day work: when you synthesize a usability test or interview data, try framing findings in terms of competing explanations rather than isolated quotes. Ask what you think is happening and why, note what past evidence suggests, and then evaluate how strongly the new session confirms or challenges those beliefs. Even a simple scale such as “weakly,” “moderately,” or “strongly” supporting one explanation over another moves you toward Bayesian-style reasoning. This practice not only clarifies your team’s confidence but also builds a cumulative research memory, helping you avoid repeating the same arguments and letting your insights grow stronger over time.
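Here is what that update can look like when made explicit. A minimal sketch in plain Python; the two hypotheses, the priors, and the likelihood judgments are hypothetical illustrations of the idea, not a prescribed method.

```python
# A minimal sketch of Bayesian-style updating between two competing
# explanations for a drop-off; all numbers are hypothetical judgment calls.

# Prior beliefs from past research: how likely is each explanation?
prior = {"button_placement": 0.5, "payment_trust": 0.5}

# Likelihoods: how probable is the new observation (e.g., users hesitating
# at the payment form) under each explanation? Roughly, "strongly supports"
# might map to ~0.8 and "weakly supports" to ~0.3.
likelihood = {"button_placement": 0.3, "payment_trust": 0.8}

# Bayes' rule: posterior is proportional to prior * likelihood, normalized
unnormalized = {h: prior[h] * likelihood[h] for h in prior}
total = sum(unnormalized.values())
posterior = {h: p / total for h, p in unnormalized.items()}

print(posterior)  # {'button_placement': ~0.27, 'payment_trust': ~0.73}
# Today's posterior becomes tomorrow's prior for the next study.
```

The arithmetic is trivial; the value is in forcing the priors and the strength of each observation out into the open where the team can debate them.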
-
“I ran an experiment showing positive lift but didn’t see the results in the bottom line.”

I think we’ve all had this experience: We set up a nice, clean A/B test to check the value of a feature or a creative. We get the results back: 5% lift, statistically significant. Nice! Champagne bottle pops, etc., etc. Since we got the win, we bake the 5% lift into our forecast for next quarter, when the feature will roll out to the entire customer base, and we sit back to watch the money roll in.

But then, shockingly, we do not actually see that lift. When we look at our overall metrics we may see a very slight lift around when the feature got rolled out, but then it goes back down, and it seems like it could just be noise anyway. Since we had baked our 5% lift into our forecast, and we definitely don’t have the 5% lift, we’re in trouble. What happened?

The big issue here is that we didn’t consider uncertainty. When interpreting the results of our A/B test, we said “It’s a 5% lift, statistically significant”, which implies something like “It’s definitely a 5% lift”. Unfortunately, this is not the right interpretation. The right interpretation is: “There was a statistically significant positive (i.e., >0) lift, with a mean estimate of 5%, but the experiment is consistent with a lift ranging from 0.001% to 9.5%”. Because of well-known biases associated with this type of null-hypothesis testing, it’s most likely that the actual result was some very small positive lift, but our test just didn’t have enough statistical power to narrow the uncertainty bounds very much.

So, what does this mean? When you’re doing any type of experimentation, you need to be looking at the uncertainty intervals from the test. You should never just report out the mean estimate and say that it's “statistically significant”. Instead, you should always report out the range of results that are compatible with the experiment. When actually interpreting those results in a business context, you generally want to be conservative and assume the actual results will come in on the low end of the estimate, or, if it’s mission-critical, design a test with more statistical power to confirm the result.

If you just look at the mean results from your test, you are highly likely to be led astray! You should always look first at the range of the uncertainty interval and only check the mean last.

To learn more about Recast, you can check us out here: https://lnkd.in/e7BKrBf4
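To make the "report the interval, not just the mean" point concrete, here is a minimal sketch of a normal-approximation confidence interval for relative lift; the conversion counts are hypothetical and chosen so the result is significant but wide, mirroring the story above.

```python
# A minimal sketch: 95% confidence interval for relative lift in an A/B
# test on conversion rate, via a normal approximation; counts are hypothetical.
import numpy as np
from scipy import stats

conv_a, n_a = 5000, 100_000   # control: 5.00% conversion
conv_b, n_b = 5250, 100_000   # variant: 5.25% conversion (+5% relative)

p_a, p_b = conv_a / n_a, conv_b / n_b
diff = p_b - p_a
se = np.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)

z = stats.norm.ppf(0.975)     # two-sided 95% interval
lo, hi = diff - z * se, diff + z * se

print(f"Mean relative lift:  {diff / p_a:+.1%}")           # ~ +5.0%
print(f"95% CI for lift:     {lo / p_a:+.1%} to {hi / p_a:+.1%}")  # ~ +1.1% to +8.9%
# Plan against the whole interval, especially the low end, not the mean.
```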
-
During a usability test, have you noticed that users sometimes put on their 'best performance’ when they're being watched? You're likely witnessing the Hawthorne effect in action!

It happens to us as well. When working from home, during meetings you're more attentive, nodding more, and sitting up straighter, not just because you're engaged, but because you're aware that your colleagues can see you. This subtle shift in behaviour due to the awareness of being observed is a daily manifestation of observation bias, the Hawthorne effect.

In the context of UX studies, participants often alter their behaviour because they know they're being observed. They might persist through long loading times or navigate more patiently, not because that's their natural behaviour, but to meet what they perceive to be the researcher's expectations. This can yield misleading data, painting a rosier picture of user satisfaction and interaction than is true.

Here are some strategies to mitigate this bias in UX research:

🤝 Build Rapport: Set a casual tone from the start, engaging in small talk to ease participants into the testing environment and subtly guiding them without being overly friendly.

🎯 Design Realistic Scenarios: Create tasks that reflect typical use cases so participants' actions are as natural as possible.

🗣 Ease Into Testing: Use casual conversation to make participants comfortable and clarify that the session is informal and observational.

💡 Set Clear Expectations: Tell participants that their natural behaviour is what's needed, and that there's no right or wrong way to navigate the tasks.

✅ Value Honesty Over Perfection: Reinforce that the study aims to find design flaws, not user flaws, and that honest feedback is crucial.

🛑 Remind Them It's Not a Test: If participants apologise for mistakes, remind them that they're helping identify areas for improvement, not being graded.

So the next time you're observing a test session and the participant seems to channel their inner tech wizard, remember: it might just be the Hawthorne effect rather than a sudden surge in digital prowess. Unmasking this 'performance' is key to genuine insights, because in the end, we're designing for humans, not stage actors.

#uxresearch #uxtips #uxcommunity #ux
-
When we run usability tests, we often focus on the qualitative stuff: what people say, where they struggle, why they behave a certain way. But we forget there's a quantitative side to usability testing too. Each task in your test can be measured for:

1. Effectiveness — can people complete the task?
→ Success rate: What % of users completed the task? (80% is solid. 100% might mean your task was too easy.)
→ Error rate: How often do users make mistakes, and how severe are they?

2. Efficiency — how quickly do they complete the task?
→ Time on task: Average time spent per task.
→ Relative efficiency: What share of the total task time is spent by users who complete the task successfully?

3. Satisfaction — how do they feel about it?
→ Post-task satisfaction: A quick rating (1–5) after each task.
→ Overall system usability: SUS scores or other validated scales after the full session.

These metrics help you go beyond opinions and actually track improvements over time. They're especially helpful for benchmarking, stakeholder alignment, and testing design changes. We want our products to feel good, but they also need to perform well.

And if you need some help, I've got a nice template for this! (see the comments)

Do you use these kinds of metrics in your usability testing?

UXR Study
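Here is a minimal sketch computing these per-task metrics from raw session data in pandas; the table structure and all numbers are hypothetical.

```python
# A minimal sketch computing the task metrics above from raw session
# results; the data structure and sample values are hypothetical.
import pandas as pd

results = pd.DataFrame({
    "participant":  [1, 2, 3, 4, 5],
    "success":      [True, True, False, True, True],
    "time_on_task": [42.0, 55.0, 90.0, 38.0, 47.0],   # seconds
    "errors":       [0, 1, 3, 0, 1],
    "satisfaction": [5, 4, 2, 5, 4],                   # post-task rating, 1-5
})

success_rate = results["success"].mean()
error_rate = results["errors"].mean()
avg_time = results["time_on_task"].mean()
# Relative efficiency: share of total task time spent by successful users
relative_efficiency = (
    results.loc[results["success"], "time_on_task"].sum()
    / results["time_on_task"].sum()
)
avg_satisfaction = results["satisfaction"].mean()

print(f"Success rate:        {success_rate:.0%}")         # 80%
print(f"Avg errors per user: {error_rate:.1f}")           # 1.0
print(f"Avg time on task:    {avg_time:.0f}s")            # 54s
print(f"Relative efficiency: {relative_efficiency:.0%}")  # 67%
print(f"Avg satisfaction:    {avg_satisfaction:.1f}/5")   # 4.0/5
```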
-
The key to effective usability testing? Approaching it with a Human-Obsessed mindset.

This is crucial. It determines whether your improvements are based on assumptions or real user insights. It guides how you engage with:
→ User needs
→ Common tasks
→ Pain points
→ and Preferences
throughout their journey on your site.

Usability testing isn't straightforward. It requires a deep understanding of user behavior and continuous refinement.

How do you start a Human-Obsessed usability testing approach? Follow these steps:

1. Set Specific Goals
— Focus on areas like navigation and checkout.
— Know what you aim to improve.

2. Match Test Participants to Users
— Ensure your participants reflect your actual user base.
— Diverse feedback is key.

3. Design Realistic Tasks
— Reflect common user goals like finding a product or making a purchase.
— Keep it real.

4. Choose the Right Method
— Decide between moderated (in-depth) and unmoderated (scalable) tests.
— Pick what suits your needs.

5. Use Effective Tools
— Leverage tools like UserTesting or Lookback.
— Integrate analytics for comprehensive insights.

6. Create a True Test Environment
— Mirror your live site.
— Ensure participants are focused and undistracted.

7. Pilot Testing
— Run a pilot test to refine your setup and tasks.
— Adjust before full deployment.

8. Collect Qualitative and Quantitative Data
— Gather user comments and behaviors.
— Measure task completion and errors.

9. Report Clearly and Take Action
— Use visuals like heatmaps to present findings.
— Prioritize issues and recommend improvements.

10. Keep Testing Iteratively
— Usability testing should be ongoing.
— Regularly test changes to continuously improve.

Human-Obsessed usability testing is powerful. It's how Enavi ensures exceptional user experiences. Always.

Use it well. Thank us later.
-
Did you know that 88% of online consumers are less likely to return to a website after a bad user experience? That's right: poor usability isn't just annoying; it's costing you customers. Here are five critical considerations for usability testing that can make or break your product's success.

→ Consider all parties. Your product isn't just used by one type of person. If you're only testing with your primary user group, you're setting yourself up for failure. So, identify all the players in your ecosystem and include them in your testing.

→ Journey mapping. Create comprehensive journey maps that include touchpoints for all user types. Understand how different user roles intersect and influence each other, as these intersections often hide the biggest usability issues.

→ Happy path vs. recovery path. Don't just test the ideal user journey. Design tests to deliberately break things and see how your product handles errors. A good recovery experience can turn a potential "rage quit" into a moment of delight that keeps users engaged and invested.

→ Early and frequent testing. Begin usability testing early in the design phase to catch issues sooner and iterate quickly. Start with low-fidelity prototypes and test often. It's easier (and cheaper) to fix usability issues on a wireframe than on a fully coded product.

→ Rapid iterative testing. Consider rapid iterative testing instead of traditional methods. Test on Monday, make changes on Tuesday, test again on Wednesday, and so on. This approach allows you to fail fast, learn faster, and keep your team aligned throughout the development process.

Which usability testing methods do you find most effective? Share your insights in the comments or DM me.