Unit 9: Iterating and Refining Your MVP

  • Understand the prelaunch testing process to prepare for your MVP launch
  • Understand how to process and act on feedback without losing focus on your product’s core goals
  • Identify critical usability barriers and prioritize the most important improvements
  • Explore and apply methods for further testing your MVP

The Journey from Prototype to MVP to Launch

Refining an MVP is both an exciting and challenging part of the entrepreneurial journey. You’ve built a working prototype and tested it with initial users. Now it’s time to scale up testing, fix critical issues, and launch your MVP to real users. This lesson leads you from prototype testing to public launch, preparing you to pitch with real traction.

The journey: What You Accomplished → Where You Are Now → Where You Want To Go

Launching your MVP is a key part of your final pitch, as it will provide you with real launch data, not just testing data. 

You are closing in on the end of this accelerator, where you will need to demo your solution and pitch your business. The testing from this lesson will feed into both of these key deliverables. It will also help inform your financial projections.

As you continue with more high-fidelity prototype testing and interactions with your early adopters, remember to gather the following to help bolster your story for your pitch and your MVP demo:

  • Demo videos of core features (practice demos)
  • Video testimonials from 2–3 enthusiastic testers
  • User quotes about the problem (with permission to use)
  • Testing results, e.g., “We tested with 15 users; 80% said…”
  • User feedback (market insight)
  • Screenshots of the prototype (solution demonstration)

Launching the MVP

What is the difference between testing with prototypes and launching your product?

Prototyping/Testing → Launching

  • Controlled environment → Open to the public
  • Testers are recruited → Users come to you
  • The team observes users → Users use the solution independently
  • Users know it’s a test → Users expect it to work
  • Goal is to learn and improve → Goal is usually to acquire and retain users (though there may be other scenarios)
Focused Iteration

As you have learned from earlier prototyping and testing, development and testing are an iterative process. Each test gives you more information to go back and fix or improve your product. After each test, identify the most critical barriers and decide what changes to make before running your next test. By focusing on critical issues, you prevent feature creep, which happens when a product keeps growing with extra features that make it more complicated without adding real value. Each change should strengthen the core experience, not expand it unnecessarily.

Your highest priority after any test is addressing P0 Barriers—critical issues that make the product unusable or block the core value proposition. If a P0 exists, all other suggestions are temporarily irrelevant; you must fix the P0 immediately.

You can stay on track by keeping every change within clear bounds: fix critical (P0) barriers first, make only changes that strengthen the core experience, and defer anything that expands scope without adding real value.

Testing & Iteration Methods

This testing phase uses a high-fidelity coded prototype, so you will use different testing methods than you did with low- and medium-fidelity prototypes. Here are the tests you should conduct.

Internal Team Quality Assurance (QA) Testing

Before you gather external testers for your prototype, the team should do some preliminary yet critical testing first. These tests ensure that when you do get to testing with your end users, they start off with a solid user experience: things should work! Below are three key tests to perform before testing with users.

1. End-to-End Flow Testing

Confirms that a new user can go from sign-up to core task completion to your success metric. This validates the entire funnel and ensures your fixes from the last iteration didn’t break anything else (a common risk in rapid iteration). Follow these steps (a minimal automation sketch follows this list):

  1. Write down complete user journeys. For example:
    • User signup → account creation → profile page
    • Returning user sign-in → add new post → view / edit / delete post
    • Returning user sign-in → view others’ posts → comment
  2. For each flow, test step-by-step. Execute each step in order and document any issues or unexpected behavior.
  3. Test for edge cases and errors. Try to intentionally break things (invalid input, slow network, repeated actions) and record how the system responds.
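If your prototype is a web app, you can automate these journey checks rather than clicking through by hand each time. Below is a minimal sketch using Playwright, one common browser-automation library; the URL, selectors, and button labels are hypothetical placeholders you would replace with your own.

```typescript
// e2e-flows.spec.ts - a minimal end-to-end flow sketch (Playwright).
// All URLs, selectors, and labels below are hypothetical placeholders.
import { test, expect } from '@playwright/test';

test('signup -> account creation -> profile page', async ({ page }) => {
  await page.goto('https://your-prototype.example.com/signup');

  // Step through the journey in order, as in your written-down flow
  await page.fill('#email', 'tester@example.com');
  await page.fill('#password', 'a-strong-test-password');
  await page.click('button:has-text("Sign up")');

  // The flow should end on the profile page; document anything unexpected
  await expect(page).toHaveURL(/\/profile/);
});

test('edge case: invalid input should fail gracefully', async ({ page }) => {
  await page.goto('https://your-prototype.example.com/signup');
  await page.fill('#email', 'not-an-email'); // intentionally broken input
  await page.click('button:has-text("Sign up")');
  // Expect a visible error, not a crash or a silent failure
  await expect(page.locator('.error-message')).toBeVisible();
});
```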
2. Reliability/Consistency Testing

Does it work every time? This test ensures that core features (e.g., generating output, saving data) don’t fail intermittently. Here are the steps to test (a small scripting sketch follows this list):

  1. Identify your most critical functions.
  2. Test each function 5-10 times consecutively.
  3. Note any issues or errors.  
  4. Go back and retest at different times of day and on different days to see whether that affects how the product performs.
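If the critical function is exposed as an API endpoint, a small script can run it repeatedly for you. Here is a rough sketch; the endpoint URL and payload are hypothetical placeholders, and it assumes Node 18+ for the built-in fetch.

```typescript
// reliability-check.ts - call one critical function repeatedly and count failures.
// The endpoint URL and request body are hypothetical placeholders.
async function runOnce(): Promise<boolean> {
  const res = await fetch('https://your-prototype.example.com/api/generate', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ prompt: 'test input' }),
  });
  return res.ok; // true when the server responded with a 2xx status
}

async function main() {
  let failures = 0;
  for (let i = 1; i <= 10; i++) { // 5-10 consecutive runs
    const ok = await runOnce();
    console.log(`Run ${i}: ${ok ? 'OK' : 'FAILED'}`);
    if (!ok) failures++;
  }
  console.log(`${failures}/10 runs failed`); // any intermittent failure is worth digging into
}

main();
```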
3. Cross-Device/Browser Testing

You need to ensure the product works for most of your target audience. A core task failing on Chrome, Safari, Edge, or a specific phone size can block many users, which makes cross-platform testing essential before a wider beta test. Follow these steps (a sample configuration sketch follows this list):

  1. Identify platforms to test:
    • Device types: Depending on your solution, could be desktop, tablet, and/or phone
    • Multiple operating systems: Windows, macOS, iOS, Android
    • Multiple browsers: Chrome, Firefox, Edge, Safari, etc.
  2. For each platform, test:
    • Visual appearance: Does everything display correctly?
    • Core task completion: signup → login → main tasks
    • Deal-breakers:
      • Crash?
      • Freeze?
      • Any critical feature that does not work?
  3. Evaluate readiness: As long as your solution works well on 2–3 major platforms with no critical blockers, and you can warn users about limitations elsewhere, you can move on to external testing.
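If you are already using a tool like Playwright for flow testing, the same tests can be run across several browsers and screen sizes from one configuration file. A sketch, with device names drawn from Playwright’s built-in device registry:

```typescript
// playwright.config.ts - run the same tests across browsers and screen sizes.
import { defineConfig, devices } from '@playwright/test';

export default defineConfig({
  projects: [
    { name: 'desktop-chrome', use: { ...devices['Desktop Chrome'] } },
    { name: 'desktop-safari', use: { ...devices['Desktop Safari'] } }, // WebKit engine
    { name: 'desktop-firefox', use: { ...devices['Desktop Firefox'] } },
    { name: 'android-phone', use: { ...devices['Pixel 5'] } },
    { name: 'ios-phone', use: { ...devices['iPhone 13'] } },
  ],
});
```

Automation mainly catches the deal-breakers; manual spot checks on real devices are still worth doing for visual appearance.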

Selecting Participants for User Testing


To do external testing, you of course need testers! Check back to your Lean Canvas to see who you identified as early adopters. You may have already reached out to them during low-fidelity testing.

You can also look back at your Customer Segments and Channels to identify potential users and where you can find them. Remember: user testing is quality over quantity. These early adopters are essential for moving your prototype toward a launch-ready state.

Before recruiting someone as a tester, make sure they truly match your early-adopter profile. If they don’t, don’t bother asking for their help. If they do meet the profile, put energy into recruiting them. 

Some strategies to find good testers and early adopters:

  • Personal outreach and 1-on-1 recruiting
  • Visit online communities where your users are
  • Leverage your network
  • Encourage users to invite other users

Usability Testing

Unlike low-fidelity testing, where users explored freely and gave general impressions, usability testing with high-fidelity prototypes is structured and task-focused. Participants are directly asked to complete specific tasks while you observe exactly how easily they succeed, where they struggle, and where they get confused. This approach provides concrete evidence of what works and what still needs refinement, rather than just collecting impressions or opinions. This will be your main testing method at this stage of development.

Usability testing highlights subtler friction points that may not have appeared in general feedback, helping you refine the product while keeping it minimal and purposeful. Usability testing is particularly valuable after your first round of feedback because it lets you confirm whether fixes actually solved critical barriers.

ACTIVITY 1

Usability Testing

Estimated Time: Multiple Days

Conduct a round of usability testing on your high-fidelity working prototype. Here are the steps to run the test.

Define your goal. Clearly state what you want to learn. Are you testing a new feature, a fixed bug, or the entire onboarding flow?

Recruit participants. From your early adopters, choose a small group (3–5 users). Since you’ve already run low-fidelity testing, you can refine your selection based on what you learned.

Define the tasks. Identify the core actions users must complete to experience the main value of your product. These should focus on the core functionality or areas that have been updated since your last iteration. Keep the tasks realistic and relevant to real user goals.

  1. Pick tasks that reflect critical flows – Focus on what users must accomplish for the product to succeed. For example, if you created a finance budgeting tool, you might task users with setting up a monthly budget and tracking an expense.
  2. Include tasks tied to significant changes – Focus on the updates or features that meaningfully impact the core experience. For example, if you made a language learning app, test to see if your new progress tracker is clear and adds to the user’s understanding. Do not test for something like a color change.
  3. Avoid overloading users with too many tasks – 3-5 key tasks are usually enough for a short session. This keeps the test manageable and ensures you get actionable insights without confusing or tiring participants.

Decide what to measure. What data will you track during testing? Include both quantitative and qualitative measures (a simple logging sketch follows this list):

  1. Quantitative: Success Rate (Did they finish the task?), Time on Task, and Number of Errors or clicks.
  2. Qualitative: User comments, facial expressions, and signs of confusion or frustration.
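A plain spreadsheet is enough for tracking these, but if your team prefers code, here is an illustrative sketch of a per-task log and a summary. All field names and sample numbers are made up for the example.

```typescript
// session-log.ts - record per-task results during sessions, summarize afterward.
// Field names and sample data are illustrative only.
type TaskResult = {
  participant: string;
  task: string;
  completed: boolean; // feeds the success rate
  seconds: number;    // time on task
  errors: number;     // wrong clicks, dead ends, backtracking
  notes: string;      // qualitative observations and quotes
};

function summarize(results: TaskResult[], task: string): void {
  const rows = results.filter(r => r.task === task);
  const done = rows.filter(r => r.completed).length;
  const avgTime = rows.reduce((sum, r) => sum + r.seconds, 0) / rows.length;
  console.log(`${task}: ${done}/${rows.length} completed, avg ${avgTime.toFixed(0)}s`);
}

const log: TaskResult[] = [
  { participant: 'P1', task: 'Set up a monthly budget', completed: true,  seconds: 95,  errors: 1, notes: 'Hesitated at Save' },
  { participant: 'P2', task: 'Set up a monthly budget', completed: false, seconds: 240, errors: 4, notes: 'Never found the form' },
];

summarize(log, 'Set up a monthly budget');
```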
Prepare the session:

  1. Develop Scenarios: Write clear, concise, and neutral instructions to give to the participant. Set the context for the task without telling the user how to perform it. For example, a key task shouldn’t be “Click the button.” It should be: “Use the new generative AI feature to produce a result that is suitable for publication/sharing.” This forces them to test the quality of the output, not just the button function.
  2. Set up Recording: Arrange cameras or screen-sharing software (with the user’s permission) to record the session for later analysis.
  3. Prepare the Test Space: Ensure a quiet environment where the user can concentrate. Have a separate observer/notetaker if possible.
  4. Gather Consent: Get signed informed consent for the session and for recording it.
Run the session:

  1. Pre-test Questionnaire (optional): Gather background information from the user, such as how they currently deal with the problem.
  2. The “Think-Aloud” Protocol: Instruct users to verbally express their thoughts as they navigate the product: what they are looking at, what they are trying to do, and why they are confused or satisfied.
  3. Observation and Facilitation: You will have two roles as facilitators of the testing.
    1. Observer: Stays neutral and only intervenes if the user is completely stuck and cannot proceed.
    2. Notetaker: Records observations, quotes, errors, and task timings without influencing the user.
  4. Post-Test Interview: At the end, ask open-ended follow-up questions about their overall experience, what they liked most, and what they found most frustrating.
Analyze the results:

  1. Review Recordings and Notes: Look for patterns across multiple users rather than relying on a single experience. A single user struggling is an anecdote; 3 out of 5 users struggling with the same step is a critical issue.
  2. Identify Critical Issues: Focus on usability problems that prevent users from completing key tasks or cause significant frustration.
  3. Synthesize Findings: Consolidate the data into a report that ranks issues by severity and includes the metrics you tracked (e.g., “Only 40% of users successfully completed Task 2”).

Iterate. Go back to your code and your design, and fix the critical issues your users identified. Implement the most severe, high-impact fixes first (the critical usability barriers). You’ll have to balance your limited time against the crucial insights you gain from iterating. Document your decisions and reasoning to guide future iterations and maintain focus on the product’s core goals.

Retest. Once you’ve fixed the critical issues, repeat the process with a more focused usability test to verify that your fixes actually solved the problems without introducing new ones.

Navigating Feedback

After one round of usability testing, you should now have feedback on your initial coded prototype. Perhaps the feedback was discouraging because users didn’t show interest in your product. Perhaps it was overwhelmingly positive but lacked actionable suggestions. You’re likely somewhere in the middle, but navigating through it all can be tricky.

Think back to our discussion of the Entrepreneurial Mindset, where resilience and curiosity were among the key qualities.

Discouraging Feedback?

Use it as an opportunity to identify gaps and critical barriers in your product. Focus on what must be fixed to make your product functional and reliable. These are opportunities to learn and improve.

Overly Positive Feedback?

Treat positive feedback as validation that you are on the right track, but continue refining. Broaden your audience pool and stay curious about how to refine your product further.

Take in the feedback and consider it carefully and constructively. However, it’s also important to distinguish between understanding feedback and acting on it. Your product at this stage should still be minimal but purposeful, ensuring each change adds real value for your users. 

Making Sense of Conflicting or Overwhelming Feedback

You may see conflicting input, a high volume of suggestions, or both at the same time. The flood of feedback may come from many sources, such as early users, peers, and mentors. It can feel overwhelming, but here’s one way to start making sense of it. 

Start by categorizing feedback according to the following questions:

  • Severity – Does it stop users from completing the core experience?
  • Frequency – Are multiple people reporting the same issue?
  • Original Goal – Does it align with your product’s purpose or long-term goal?

Once feedback is categorized, you can use that information as evidence and apply the MoSCoW method to decide where to take action (a rough triage sketch follows this list):

  • MUST – High severity or high frequency, clearly linked to the original goal
  • SHOULD – Moderate impact, or a clear improvement
  • COULD – Low severity, nice to have
  • WON’T – Off-goal, or too risky to implement pre-launch
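To make the decision rule concrete, here is a rough sketch of the same triage in code. The fields mirror the three questions above, and the thresholds are illustrative rather than fixed rules.

```typescript
// triage.ts - map categorized feedback onto MoSCoW buckets.
// Fields and thresholds are illustrative; adjust them to your own evidence.
type Feedback = {
  note: string;
  blocksCore: boolean;     // severity: does it stop the core experience?
  reportCount: number;     // frequency: how many people hit it?
  alignsWithGoal: boolean; // original goal: on-purpose for the product?
};

function moscow(f: Feedback): 'MUST' | 'SHOULD' | 'COULD' | "WON'T" {
  if (!f.alignsWithGoal) return "WON'T";                 // off-goal: defer
  if (f.blocksCore || f.reportCount >= 3) return 'MUST'; // high severity or frequency
  if (f.reportCount === 2) return 'SHOULD';              // moderate signal
  return 'COULD';                                        // nice to have
}

const items: Feedback[] = [
  { note: 'Large uploads fail with no explanation', blocksCore: true, reportCount: 4, alignsWithGoal: true },
  { note: 'Add a dark mode', blocksCore: false, reportCount: 1, alignsWithGoal: false },
];

for (const f of items) console.log(`${moscow(f)}: ${f.note}`);
```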

Navigating Feedback for a Teacher Platform

You built a web platform for teachers to share lesson ideas. The feedback was positive about the concept, but several teachers stopped using it after the first visit. It turned out that uploading resources took too long because larger files failed to upload without explanation.

This is a critical usability barrier, or P0 barrier: a technical or design issue that prevents users from completing the most essential task. Addressing it doesn’t mean adding new features, just ensuring the basic experience works reliably. It belongs in the MUST column and should be addressed quickly.


A/B Testing

A/B testing is a way to compare two versions of your product to see which one performs better. It helps you move from qualitative insights gathered through usability testing to more quantitative data.

The idea is simple: provide two versions of an element and show each version to separate user groups. By tracking clear metrics—such as task completion, click-through rate, or time on task—you can determine which version better supports the core experience. A/B testing reduces assumptions, confirms whether a change actually improves usability, and offers concrete guidance on what to prioritize next.

However, it requires a large sample size to produce reliable data. At this stage, you may not have enough users. Usability testing remains your most valuable method, providing rich feedback from a small number of participants. A/B testing becomes powerful once your MVP launches and you begin acquiring users. Because it can be automated, it’s an efficient way to gather data rapidly and at scale.
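A minimal sketch of the mechanics, assuming a web product where you can tag each user with a variant: assign variants deterministically (so a returning user always sees the same version) and compare one clear metric per variant. The user IDs and counts below are made up for illustration.

```typescript
// ab-test.ts - deterministic variant assignment plus a simple comparison.
// User IDs, metrics, and counts below are illustrative.
function assignVariant(userId: string): 'A' | 'B' {
  let hash = 0;
  for (const ch of userId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // simple 32-bit rolling hash
  }
  return hash % 2 === 0 ? 'A' : 'B'; // same user always gets the same variant
}

function conversionRate(completed: number, shown: number): number {
  return shown === 0 ? 0 : completed / shown;
}

console.log(assignVariant('user-42')); // stable across sessions

const rateA = conversionRate(18, 60); // 18 of 60 users completed the task with version A
const rateB = conversionRate(27, 62); // 27 of 62 with version B
console.log(`A: ${(rateA * 100).toFixed(1)}%  B: ${(rateB * 100).toFixed(1)}%`);
// With small samples, differences like this can easily be noise,
// which is why A/B testing works best after launch, at scale.
```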

ACTIVITY 2

A/B Testing (optional)

Estimated Time: 2 hours+

If you are at a stage where you can get enough testers, you can run some A/B tests. Explore how small, deliberate changes to your product can impact user experience. You’ll see which approach better supports your users’ goals and gather concrete insights to guide your next iteration.

Answer the questions on the following worksheet to guide your process. 

Landing Page

A landing page is a highly focused webpage that presents your product concept and encourages users to take a specific action, such as signing up for updates or requesting a demo. A landing page at this stage is more of a way to test demand than to test usability of your product.

Landing pages can significantly expand your potential audience: they reach people beyond your initial testers and can attract potential users who may not have encountered your product otherwise.

At this stage, the landing page should reflect the refinements you’ve already made rather than testing entirely new ideas. Tracking user interest and behavior on the page provides concrete feedback that guides your next round of iteration.

Even if your product is not a website, you can still create a landing page to clearly communicate your idea. For example, a physical product might include photos, describe features, and offer options to pre-order or join a waitlist.

A landing page does not need to be polished—minimal is perfectly fine. Your goal is insight and validation, not production-ready visuals, so you shouldn’t spend more than a few hours on it.

Here is the suggested structure:

  1. Headline – A one-sentence value proposition
  2. Problem – 2–3 sentences about the pain
  3. Solution – 3 bullet points on what your product does
  4. Social Proof – e.g., “Join 50 students already interested” (only if true)
  5. Call to Action – One clear next step (see below)
With this in mind, choose one strong call-to-action for visitors to take. Here are some examples and what data they can provide you:

  • Sign up for updates or early access – Lets you see who is genuinely interested and builds a list of potential early adopters.
  • Join a waitlist – Shows commitment from users without requiring the product to be fully ready.
  • Request more information or a demo – Gives you a sense of curiosity and engagement while opening a channel for qualitative feedback.
  • Pre-order (if applicable) – Measures willingness to invest or commit (which is distinct from the commitment you get from joining a waitlist).
Tools To Make Your Landing Page

  • Carrd (free) – simplest option
  • Google Forms + simple HTML
  • Notion public page
  • Mailchimp landing page

Driving Traffic to Your Page

  • Post in relevant WhatsApp communities
  • Share on social media (X, TikTok, LinkedIn)
  • Post in Facebook groups and university networks
  • Direct message potential users

Aim for 100+ visitors to get meaningful data.

Key Metrics

Tracking the right data helps you make informed decisions for your next iteration rather than relying on impressions or gut feelings. For a landing page, the most telling data points are total visitors, clicks on your call to action, and the conversion rate (sign-ups ÷ visitors). For example, 12 sign-ups from 100 visitors is a 12% conversion rate, a concrete number to try to beat in your next iteration.

As you continue refining your MVP, remember that the methods we’ve covered here are just a few of the tools you can use to gather insights and validate your product. Each team and product is unique, so it’s important to select the approaches that make the most sense for your goals, timeline, and audience.

Stay intentional with your iterations and ensure your product continues to improve while maintaining a lean, purposeful core. Continuous testing, observation, and iteration are essential habits for any entrepreneur, and the skills you develop here will guide your product through multiple cycles of refinement toward a more mature and successful version.

These tests will hopefully set you up to launch your MVP by the end of the accelerator. Make that a goal: a real launch will be viewed more favorably by the judges. Even if you don’t reach a full launch by the end of the accelerator, exhibiting your iterative process with actual users will be vital for showing your progression and your commitment to making the venture succeed.

Reflection

Take a few moments to reflect on your recent testing and iteration experience. Consider the following:

  1. Key Issues – Which issues were the most important to address in order to improve the core experience?
  2. Effective Methods – Which methods gave the most actionable insights for your product? How might you adjust your approach for the next iteration?
  3. Next Steps – Based on what you learned, what are the top 1–3 actions to take before your next iteration?

Additional Resources

Wix.com has a list of strong landing pages that you can draw inspiration from. Take a look here.

The following playlist by the Nielsen Norman Group does a good job providing an overview on how to conduct Usability Tests, especially in a remote setting.