Multivariate Testing In UX
Explore top LinkedIn content from expert professionals.

💡 Combining Design Thinking, Lean UX, and Agile

A combination of Design Thinking, Lean UX, and Agile methodologies offers a powerful approach to product development—it helps balance user-centered design with efficient concept validation and iterative product development.

1️⃣ User-centered foundation (Design Thinking): Begin by understanding the needs, emotions, and problems of the end users.
✔ Start by conducting user research to identify and understand user needs.
✔ Gather insights through direct interaction with users (e.g., interviews and surveys). Spend time understanding users' behavior, focusing on "why" rather than "what" they do.
✔ After gathering research, prioritize the most critical user insights to guide your design focus. Create a 2x2 matrix to prioritize insights based on impact (high vs. low business impact) and feasibility (easy vs. hard to implement); see the sketch after this post.
✔ Begin brainstorming potential solutions based on these prioritized insights and formulate a hypothesis. Encourage cross-functional collaboration during brainstorming sessions to generate diverse ideas.

2️⃣ Hypothesis-driven testing (Lean UX): Lean UX helps quickly validate key assumptions. It fits perfectly between Design Thinking's ideation and Agile's development processes, ensuring that critical hypotheses are validated with users before actual development starts.
✔ Formulate a testable hypothesis around a potential solution that addresses the user needs uncovered in the Design Thinking phase.
✔ Conduct an experiment—develop a Minimum Viable Product (https://lnkd.in/dQg_siZG) to test the hypothesis. Build just enough functionality to test your hypothesis—focus on speed and simplicity.
✔ Based on the experiment's outcome, refine or revise the hypothesis and repeat the cycle.

3️⃣ Iterative product development (Agile): Once the Lean UX process produces validated concepts, Agile takes over for incremental development. Agile's iterative sprints help you continuously build, test, and refine the concept. Agile complements Lean UX by providing the structure for frequent releases, allowing teams to adapt and deliver value consistently.
✔ Break down work into small, manageable chunks that can be delivered iteratively.
✔ Embrace iterative development—continue refining your product through iterative build-measure-learn sprints. Keep the user feedback loop tight by involving users in sprint reviews or testing sessions.
✔ Gather user feedback after each sprint and adapt the product according to the findings. Measure user satisfaction and track usability metrics to ensure improvements align with user needs.

🖼️ Design Thinking, Lean UX and Agile better together, by Dave Landis

#UX #agile #designthinking #productdesign #leanux #lean
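To make the 2x2 prioritization step concrete, here is a minimal Python sketch that buckets research insights by impact and feasibility. The insight texts, ratings, and quadrant labels are illustrative assumptions, not part of the original framework.

```python
# Hypothetical sketch: sort research insights into the 2x2 impact/feasibility
# matrix described above. Scores and insight texts are illustrative only.

def quadrant(impact: str, feasibility: str) -> str:
    """Map an insight's ratings onto a quadrant of the 2x2 matrix."""
    if impact == "high" and feasibility == "easy":
        return "do first"        # high impact, easy to implement
    if impact == "high" and feasibility == "hard":
        return "plan carefully"  # high impact, hard to implement
    if impact == "low" and feasibility == "easy":
        return "quick win"       # low impact, easy to implement
    return "deprioritize"        # low impact, hard to implement

insights = [
    ("Users abandon signup at the email step", "high", "easy"),
    ("Power users want keyboard shortcuts", "low", "hard"),
]

for text, impact, feasibility in insights:
    print(f"{quadrant(impact, feasibility):>15}: {text}")
```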
-
🎢 How To Roll Out New Features Without Breaking UX

Practical guidelines to keep in mind before releasing a new feature ↓

🚫 We often assume that people don’t like change.
🤔 But people go through changes their entire lives.
✅ People accept novelty if they understand/value it.
✅ But: breaking changes disrupt habits and hurt efficiency.
✅ Roll out features slowly, with multiple layers of testing.
✅ First, study where a new feature fits in key user journeys.
✅ Research where different user types would find and apply it.
✅ Consider levels of proficiency: from new users to experts.
✅ Actively support existing flows, and keep them the default.
🚫 Assume a low adoption rate: don’t make a feature mandatory.
✅ First, test with internal employees and company-wide users.
✅ Then, run usability testing with real users and beta testers.
✅ Then, test with users who manually opt in and run a split-test.
✅ Allow users to try a new feature, roll back, dismiss, or be reminded later.
✅ Release slowly and gradually and track retention as you go.

As designers, we often focus on how a new feature fits in the existing UI. Yet problems typically occur not because components don’t work visually, but rather when features are understood and applied in unexpected ways. Rather than zooming in too closely, zoom out repeatedly to see a broader scope.

Be strategic when rolling out new versions. Especially in complex environments, we need to be cautious and slow, especially when operating on a core feature. That’s a strategy you could follow in such scenarios:

1. Seek and challenge assumptions.
2. Define how you’ll measure success.
3. Have a rollback strategy in place.
4. Test with designers and developers.
5. Test with internal company-wide users.
6. Test with real users in usability testing.
7. Start releasing slowly and gradually (see the bucketing sketch after this post).
8. Test with beta testers (if applicable).
9. Test with users who manually opt in.
10. Test with a small segment of customers first.
11. Split-test the change and track impact.
12. Wait and track adoption and retention rates.
13. Roll out the feature to more user segments.
14. Run UX research to track usage patterns.
15. Slowly replace deprecated flows with the new one.

With a new feature, the most dangerous thing that can happen is that loyal, experienced users suddenly lose their hard-won efficiency. It might be caused by oversimplification, a mismatch of expectations, or — more often than not — because the feature was designed with a small subset of users in mind.

As we work on a shiny new thing, we often get blinded by our assumptions and expectations. What really helps me is to always wear a critical hat in each design crit. Relentlessly question everything. Everything! One wrong assumption is a goldmine of disastrous decisions waiting to be excavated.

[continues in comments ↓]
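One common way to implement step 7's gradual release is deterministic percentage bucketing on user IDs, so the rollout can widen without users flipping between variants. This is a minimal sketch under that assumption; the function name, feature key, and percentages are hypothetical, not the author's tooling.

```python
# Hypothetical sketch of gradual rollout via deterministic bucketing:
# each user ID always hashes to the same bucket, so raising the rollout
# percentage only ever adds users; nobody flips back and forth.
import hashlib

def in_rollout(user_id: str, feature: str, percent: float) -> bool:
    """Return True if this user falls inside the current rollout slice."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % 10000  # stable bucket in [0, 9999]
    return bucket < percent * 100         # e.g. 5.0 -> buckets 0..499

# Start at 5%, then widen to 25% as retention metrics hold up.
print(in_rollout("user-42", "new-checkout", 5.0))
print(in_rollout("user-42", "new-checkout", 25.0))
```

The key property is stability: bumping the percentage only admits new users, which keeps adoption and retention tracking clean across waves.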
-
When a business grows rapidly, the cracks in your processes start to show. That’s exactly what happened to us.

As our team scaled, it became clear: not everyone understood the hypothesis-generation process in the same way. This caused confusion, inconsistent problem-solving, and slowed down decision-making. So, we developed a clear format to align everyone, newcomers and veterans alike, around structured, high-impact hypotheses.

It starts with identifying the bottleneck. In ecommerce, this might mean noticing that users drop off before completing a purchase.

The first instinct? "Add trust badges at checkout." But that’s too vague. Is the real issue trust? A confusing checkout? Delivery costs?

We learned to dig deeper:
Problem: Low checkout conversion because users lack trust
Action: Add trust badges (e.g., privacy policy, money-back guarantees)
Expected result: Increase conversion from 20% to 40%

𝗣𝗿𝗼𝗯𝗹𝗲𝗺 + 𝗔𝗰𝘁𝗶𝗼𝗻 + 𝗘𝘅𝗽𝗲𝗰𝘁𝗲𝗱 𝗥𝗲𝘀𝘂𝗹𝘁

This structure keeps our hypotheses focused and testable. We prioritize using the ICE framework (Impact, Confidence, Ease). It doesn’t matter whether we sum or multiply the values; the important part is consistent prioritization (a small scoring sketch follows this post).

Then, we hold regular meetings:
1) Prepare hypotheses with a defined problem and goal
2) Refine and discuss existing ideas
3) Only brainstorm new ones when we’ve addressed the current list

The result? A ready-to-implement hypothesis that’s documented from start to finish. This documentation becomes gold when reviewing what worked and what didn’t.

Fast growth demands clarity. Rebuilding internal processes isn’t just helpful, it’s necessary.

What’s your go-to method for hypothesis generation?
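A minimal sketch of the Problem + Action + Expected Result format scored with ICE, as described above. The first entry reuses the post's trust-badge example; the second hypothesis and all numeric scores are made up for illustration.

```python
# Hypothetical sketch of the Problem + Action + Expected Result format,
# prioritized with ICE (Impact, Confidence, Ease). The post notes that
# summing vs. multiplying matters less than scoring consistently.
from dataclasses import dataclass

@dataclass
class Hypothesis:
    problem: str
    action: str
    expected_result: str
    impact: int      # 1-10
    confidence: int  # 1-10
    ease: int        # 1-10

    @property
    def ice(self) -> int:
        return self.impact * self.confidence * self.ease

backlog = [
    Hypothesis("Low checkout conversion because users lack trust",
               "Add trust badges (privacy policy, money-back guarantee)",
               "Checkout conversion rises from 20% to 40%",
               impact=8, confidence=5, ease=9),
    Hypothesis("Users miss delivery costs until the final step",
               "Show a shipping estimate on the product page",
               "Fewer drop-offs at the payment step",
               impact=7, confidence=6, ease=4),
]

# Work the backlog from the highest ICE score down.
for h in sorted(backlog, key=lambda h: h.ice, reverse=True):
    print(h.ice, "-", h.problem)
```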
-
Ever noticed how two UX teams can watch the same usability test and walk away with completely different conclusions? One team swears “users dropped off because of button placement,” while another insists it was “trust in payment security.” Both have quotes, both have observations, both sound convincing. The result? Endless debates in meetings, wasted cycles, and decisions that hinge more on who argues better than on what the evidence truly supports.

The root issue isn’t bad research. It’s that most of us treat qualitative evidence as if it speaks for itself. We don’t always make our assumptions explicit, nor do we show how each piece of data supports one explanation over another. That’s where things break down. We need a way to compare hypotheses transparently, to accumulate evidence across studies, and to move away from yes/no thinking toward degrees of confidence.

That’s exactly what Bayesian reasoning brings to the table. Instead of asking “is this true or false?” we ask: given what we already know, and what this new study shows, how much more likely is one explanation compared to another? This shift encourages us to make priors explicit, assess how strongly each observation supports one explanation over the alternatives, and update beliefs in a way that is transparent and cumulative. Today’s conclusions become the starting point for tomorrow’s research, rather than isolated findings that fade into the background.

Here’s the big picture for your day-to-day work: when you synthesize a usability test or interview data, try framing findings in terms of competing explanations rather than isolated quotes. Ask what you think is happening and why, note what past evidence suggests, and then evaluate how strongly the new session confirms or challenges those beliefs. Even a simple scale such as “weakly,” “moderately,” or “strongly” supporting one explanation over another moves you toward Bayesian-style reasoning. This practice not only clarifies your team’s confidence but also builds a cumulative research memory, helping you avoid repeating the same arguments and letting your insights grow stronger over time.
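As one way to operationalize this, here is a small sketch of Bayesian-style updating over two competing explanations, using rough likelihood ratios in place of the weak/moderate/strong scale above. The session notes and numbers are invented for illustration.

```python
# Hypothetical sketch of Bayesian-style synthesis: two competing explanations
# for checkout drop-off, updated session by session with rough likelihood
# ratios ("how much more expected is this observation under A than under B?").
prior_odds = 1.0  # no initial preference: A ("button placement") vs B ("payment trust")

# Each session's evidence, graded on a weak/moderate/strong scale and
# expressed as a likelihood ratio in favor of explanation A.
session_evidence = [
    ("P1 hunted for the pay button for 40 seconds", 3.0),      # moderately favors A
    ("P2 asked whether the site stores card data", 1 / 5.0),   # strongly favors B
    ("P3 mis-clicked the disabled button twice", 3.0),         # moderately favors A
]

posterior_odds = prior_odds
for note, likelihood_ratio in session_evidence:
    posterior_odds *= likelihood_ratio
    print(f"{note}: odds A vs B now {posterior_odds:.2f}")

# Convert odds back to a probability for reporting.
p_a = posterior_odds / (1 + posterior_odds)
print(f"P(button placement explanation) ≈ {p_a:.0%}")
```

Because each update is explicit, the final odds double as a research memory: the next study starts from today's posterior instead of from scratch.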
-
Stakeholders often focus on “how many” when presented with qualitative research. That is the wrong question to ask.

Qualitative research is about understanding the H (human) in HCI. The goal is to understand why people behave the way they do.

When presenting research results: focus on showing clear patterns, supporting findings with evidence like quotes or observations, and connecting everything back to user behaviors and business goals, not sample sizes. Also, combine qualitative with quantitative to explain the what and the why.

For example:
- Quantitative shows what's happening: 72% abandon the goal-setting flow at account connection.
- Qualitative reveals why: Users worry about security, are confused about account selection, and fear they can't reverse connections.
- The powerful combination: "Our drop-off problem stems from specific trust concerns and mental model mismatches. By redesigning to address these specific issues, we can reduce the 72% abandonment rate."

Beyond Numbers: How to Properly Evaluate Qualitative UX Research (9 min), by Dr Maria Panagiotidi: https://lnkd.in/gbqRneY4
-
✨ What does iterative multivariate testing look like? Take a look at the chart below.

A couple of years ago we helped a client make a big decision: should they enter a new market? Serious investment would be required, and the company’s board wanted evidence that the company could generate demand in a market with a lot of established competitors. The company had ZERO knowledge of the new market (and the market had no knowledge of the company).

Together, we developed hypotheses about what might work to position the company for success. I want to note the plural in that last sentence: HYPOTHESES. That’s how multivariate testing works. You test MULTIPLE hypothetical strategies at once with MULTIPLE audiences. It’s very different from how most people approach strategy, which is to test (if they test at all) that one perfect strategy. Multivariate testing of strategy is incredibly powerful.

In the chart below, you can see the results of the first set of tests—those first lumps of traffic and revenue on the left. Clearly there’s something there, but nobody’s killing it, right? Wrong. Averages are deceiving. In the second wave of testing, we dropped the losing strategies and audiences and focused on the winners. Things started to pick up. By the third wave of testing (which was really a series of mini-waves), we weren’t just finding what worked—we were optimizing it.

We call this sort of testing HEAT-TESTING, because it finds the ‘hot spots’ between strategy and audience.

What does heat-testing tell you?
- Which audiences are most receptive
- How large those early audiences are
- How to position your product
- Which user flows are most productive in generating interest or revenue
- The cost to acquire a customer
- Whether you should move to the next step of a big investment

I’ve been a strategist my entire career and here’s what I know: no amount of competitive analysis, focus groups, and surveys will deliver the one perfect strategy. Testing multiple strategies, ideally in an environment that gives you real-life, behavioral feedback, gives you raw material to iterate your way to a validated strategy.

Always be testing.
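A toy sketch of the heat-testing idea described above: score every strategy x audience cell, then keep only the hot cells for the next wave. The strategies, audiences, rates, and cutoff are illustrative assumptions, not the client data behind the chart.

```python
# Hypothetical sketch of "heat-testing": run several strategies against several
# audiences at once, then drop the cold cells and reinvest in the hot ones.
results = {  # (strategy, audience) -> conversions per 1,000 visits
    ("price-led", "SMB owners"): 4,
    ("price-led", "enterprise IT"): 1,
    ("quality-led", "SMB owners"): 2,
    ("quality-led", "enterprise IT"): 9,
}

cutoff = 3  # keep only cells that beat this rate in the next wave
hot_spots = {cell: rate for cell, rate in results.items() if rate > cutoff}

for (strategy, audience), rate in sorted(hot_spots.items(), key=lambda kv: -kv[1]):
    print(f"wave 2 keeps: {strategy} x {audience} ({rate}/1000)")
```

Note how the average across all four cells looks mediocre even though one cell is excellent; that is the "averages are deceiving" point in practice.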
-
How great hypotheses help you learn fast and pick the right path.

At Moonpig, I’ve been thinking a lot about how we can make testing more purposeful — not just to move metrics and see what happened, but to deeply understand customer behaviour, and why customers do what they do. So I recently introduced a 💡 Hypothesis Framework 💡 to help our product teams write better, bolder hypotheses that help us understand the most important things about our customers.

The goal?
1️⃣ Learn faster (through various means, both quant and qual)
2️⃣ Take bigger (but smarter) risks where it matters most
3️⃣ Focus on testing to learn, not just to prove
4️⃣ And shift from vague test ideas to more opinionated bets that help us pick a direction

We want teams to ask themselves: “What would we learn about our customers if this fails?” and “What will I do differently based on the outcome of this test?” Because every test should move us forward — regardless of outcome… meaning we never ‘fail’, we always learn.

The hypothesis canvas I created guides teams to define (a minimal sketch follows this post)...
🧠 What they believe to be true about why our customers do what they do
🧪 How they will validate their assumption quickly (either through research, or testing)
📈 What they will measure (thinking about leading UX metrics such as engagement or clicks)
🤩 How they’ll know they are right (what % of users need to agree or exhibit the desired behaviour)
🗺️ What they’ll do next based on what they have learned

Finally, it’s important to remember that hypotheses aren’t created after you come up with the idea; they are created before. Hypotheses are tools to help you generate ideas and prove your assumptions true or false, ways to learn about your customers, not just to prove that your idea worked.

#ProductDesign #ProductManagement #Experimentation #HypothesisDrivenDesign #Moonpig #innovation #uxdesign
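A minimal sketch of the canvas fields listed above, expressed as a Python dataclass. The field names and the example hypothesis are my paraphrase for illustration, not Moonpig's actual template.

```python
# Hypothetical sketch of a hypothesis canvas as a structured record.
from dataclasses import dataclass

@dataclass
class HypothesisCanvas:
    belief: str              # what we believe is true about customer behaviour
    validation: str          # how we'll test it quickly (research or experiment)
    metric: str              # leading UX metric to watch (engagement, clicks)
    success_threshold: str   # how we'll know we're right
    next_step_if_true: str
    next_step_if_false: str  # a good hypothesis teaches us something either way

canvas = HypothesisCanvas(
    belief="Customers abandon card creation because choosing a design feels overwhelming",
    validation="Unmoderated test of a curated 'top picks' shelf with 50 users",
    metric="% of sessions that add a card to basket",
    success_threshold=">=10% relative lift vs. the full catalogue",
    next_step_if_true="Ship the curated shelf to 20% of traffic and split-test",
    next_step_if_false="Interview drop-offs to find the real blocker",
)
print(canvas.belief)
```

Forcing both next-step fields to be filled in is what makes the test a learning tool: there is a planned action regardless of outcome.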
-
🔍 𝗖𝗵𝗼𝗼𝘀𝗶𝗻𝗴 𝘁𝗵𝗲 𝗥𝗶𝗴𝗵𝘁 𝗥𝗲𝘀𝗲𝗮𝗿𝗰𝗵 𝗠𝗲𝘁𝗵𝗼𝗱𝗼𝗹𝗼𝗴𝘆: 𝗤𝘂𝗮𝗹𝗶𝘁𝗮𝘁𝗶𝘃𝗲, 𝗤𝘂𝗮𝗻𝘁𝗶𝘁𝗮𝘁𝗶𝘃𝗲, 𝗼𝗿 𝗠𝗶𝘅𝗲𝗱 𝗠𝗲𝘁𝗵𝗼𝗱𝘀? 🤔

One of the most critical decisions in research is selecting the right methodology, but how do you know which one fits your study best? The choice between qualitative, quantitative, or mixed methods can make or break your research impact.

𝗟𝗲𝘁’𝘀 𝗯𝗿𝗲𝗮𝗸 𝗶𝘁 𝗱𝗼𝘄𝗻:

✅ 𝗤𝘂𝗮𝗹𝗶𝘁𝗮𝘁𝗶𝘃𝗲 𝗥𝗲𝘀𝗲𝗮𝗿𝗰𝗵 – Best for exploring human experiences, behaviors, and perceptions. Use interviews, focus groups, and case studies to dig deep into the "why" behind phenomena.
🔹 𝗘𝘅𝗮𝗺𝗽𝗹𝗲: Understanding how remote work impacts employee well-being.

✅ 𝗤𝘂𝗮𝗻𝘁𝗶𝘁𝗮𝘁𝗶𝘃𝗲 𝗥𝗲𝘀𝗲𝗮𝗿𝗰𝗵 – Perfect for testing hypotheses, measuring variables, and making data-driven conclusions. Surveys, experiments, and statistical analysis help you find the "what" and "how much".
🔹 𝗘𝘅𝗮𝗺𝗽𝗹𝗲: Measuring the impact of AI-based learning tools on student performance.

✅ 𝗠𝗶𝘅𝗲𝗱 𝗠𝗲𝘁𝗵𝗼𝗱𝘀 – Why choose one when you can have both? This approach combines numbers and narratives to provide a well-rounded perspective.
🔹 𝗘𝘅𝗮𝗺𝗽𝗹𝗲: Analyzing customer satisfaction with surveys (quantitative) and focus groups (qualitative) for deeper insights.

📌 𝗛𝗼𝘄 𝘁𝗼 𝗖𝗵𝗼𝗼𝘀𝗲? 𝗔𝘀𝗸 𝘆𝗼𝘂𝗿𝘀𝗲𝗹𝗳:
✔ What is my research goal? (Understanding vs. Measuring)
✔ What type of data do I need? (Words vs. Numbers vs. Both)
✔ What resources & time do I have? (Do I have the expertise and tools?)

The right methodology strengthens your research credibility, so choose wisely.

#ResearchMethods #Qualitative #Quantitative #MixedMethods #PhDLife #AcademicResearch
-
Classic A/B testing relies on SUTVA (the Stable Unit Treatment Value Assumption), which assumes one user’s decision doesn’t influence another’s. But what if your product is a social network, marketplace, or delivery service?

Imagine you’ve improved the post-ranking algorithm on LinkedIn. Users in Group A (new algorithm) create more content now. But this content spreads to Group B (old algorithm), distorting the results due to network effects.

Here are two main ways to tackle this:

1. 𝐂𝐥𝐮𝐬𝐭𝐞𝐫𝐢𝐧𝐠-𝐛𝐚𝐬𝐞𝐝 𝐞𝐱𝐩𝐞𝐫𝐢𝐦𝐞𝐧𝐭𝐬: Randomize groups of users (clusters) instead of individual users. For social networks, the most popular approach is to define clusters based on interaction frequency — those who engage more often stay together in one cluster (see the sketch after this post).

2. 𝐒𝐰𝐢𝐭𝐜𝐡𝐛𝐚𝐜𝐤 𝐭𝐞𝐬𝐭𝐬: In this approach, everyone in the network receives the same treatment at any given time. Over time, we flip between test and control groups, compare metrics, and evaluate the impact. This is especially useful for location-based services (e.g., taxis or delivery).

Even if you’re not working with a product that has potential network effects, understanding these methods will help you in future interviews!
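A minimal sketch of approach 1, cluster-based randomization: whole clusters, rather than individual users, are assigned to treatment or control, so users who interact frequently share a variant. The user names and cluster IDs are illustrative; in practice, clusters would come from a graph-clustering step over interaction data.

```python
# Hypothetical sketch of cluster-based randomization to limit network-effect
# leakage: connected users are assigned to the same experiment arm.
import random

def assign_clusters(cluster_ids, seed=42):
    """Randomize at the cluster level instead of the user level."""
    rng = random.Random(seed)  # fixed seed keeps assignment reproducible
    return {c: rng.choice(["treatment", "control"]) for c in sorted(cluster_ids)}

# Toy clusters: alice and bob interact heavily, so they share cluster 1.
user_to_cluster = {"alice": 1, "bob": 1, "carol": 2, "dave": 3}
arm_by_cluster = assign_clusters(set(user_to_cluster.values()))

for user, cluster in user_to_cluster.items():
    print(user, "->", arm_by_cluster[cluster])  # alice and bob always match
```

Analysis then happens at the cluster level too, which costs statistical power but protects the treatment/control comparison from spillover.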
-
Segmentation is one of those concepts that sounds simple until you actually try to do it properly. Most teams start with broad categories like age, location, or gender, but the real insight comes when you start looking at how users act - how often they visit, how recently they engaged, how much value they bring, and which patterns naturally form across those dimensions. The goal of segmentation isn’t to label users, it’s to understand the structure of their behavior. That’s what data-driven segmentation methods allow us to do.

K-Means, for example, helps you find natural patterns hidden in behavioral data. You decide how many groups you want to explore, and the algorithm does the heavy lifting, assigning each user to the cluster that best represents their behavior. It’s simple, efficient, and powerful for large datasets where you want to explore engagement trends without predefining who belongs where.

When you need to see relationships instead of just results, hierarchical clustering becomes more useful. It builds a tree-like view showing which users are similar and where meaningful divisions exist. You don’t need to commit to a single number of segments. You can cut the tree at different points to explore how granular your understanding should be. It’s particularly helpful for moderate datasets where interpretability matters as much as precision.

Then there’s DBSCAN, a method designed for reality - where user behavior is messy, irregular, and full of noise. Unlike K-Means, DBSCAN doesn’t assume clusters are neat or circular. It groups users by density, identifying natural clusters and automatically separating outliers. This makes it especially valuable for complex behavioral or clickstream data where some users behave in ways that don’t fit any conventional pattern.

If you want something more business-focused and immediately actionable, RFM segmentation (Recency, Frequency, Monetary) remains a classic for a reason. By scoring how recently and how often users engage, and how much they contribute, you can pinpoint who’s loyal, who’s at risk, and who’s gone silent. It’s simple but effective for linking behavior to ROI and retention strategies.

Finally, once you have meaningful segments, classification models can keep them alive. You can train a model to automatically assign new users to the right segment as data flows in, turning segmentation from a static exercise into a living system that adapts as behavior changes.
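To ground the K-Means and RFM ideas above, here is a minimal sketch that clusters toy RFM features with scikit-learn. The feature values and the choice of three clusters are illustrative assumptions.

```python
# Hypothetical sketch: compute RFM-style features and cluster them with K-Means.
# Requires numpy and scikit-learn.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# One row per user: recency (days since last visit), frequency (visits/month),
# monetary (revenue contribution). Toy numbers stand in for real behavioral data.
rfm = np.array([
    [2, 30, 400.0],   # recent, frequent, high value
    [3, 25, 350.0],
    [45, 2, 20.0],    # lapsing, infrequent, low value
    [60, 1, 10.0],
    [10, 12, 120.0],  # middle of the road
    [12, 10, 100.0],
])

# Scale first: K-Means uses Euclidean distance, so unscaled monetary values
# would dominate recency and frequency.
scaled = StandardScaler().fit_transform(rfm)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scaled)
print(labels)  # users with similar behavior share a segment label
```

The same scaled features could feed hierarchical clustering or DBSCAN instead; the fitted labels can also serve as training targets for the classification model mentioned in the last paragraph.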