Evaluating your MVP with mixed methods research

By Jack Holmes
September 16, 2024

Welcome to The MVP UX Research Playbook. In Part 1, I covered how to define an MVP with mixed methods user research, highlighting techniques for bringing users into the beginning of your product lifecycle. In Part 2, I investigated some of the most effective research methods to use during the MVP design phase, focusing on ways to evaluate designs faster while reducing investment and risk. In this final part, I’ll explore the methods for evaluating your live MVP.

Getting your MVP live is likely to have been your focus for months or even years. Every bit of time and resource has gone into getting your product to the point that it’s ready for people to use. Feelings are often mixed. On the one hand, you did it — hooray! On the other hand, you reminisce over features that didn’t make the cut and bugs you’ve had to accept. MVPs can often be a shadow of their lofty ambitions. As much as you might want to take some time off to recover, the product lifecycle has only just begun. 

Waiting for the first users to try your MVP can be nerve-wracking. If you’ve been using mixed-methods research throughout your definition and design phases, you should already have a fair degree of confidence in the product. You will also have a list of areas where you want to capture feedback to inform future decisions. The lessons you're about to learn can only come from your product being live. However, it is imperative that you’ve thought about this moment before launch day.

If the first time you think about how you’ll evaluate your MVP is the day it goes live, you’re too late.

In this article, I’ll share the top three areas to consider before your MVP goes live:

  • Defining how you’ll collect data
  • Evaluating your MVP from multiple angles
  • Considering how to incorporate the learnings

That way, come go-live day, things will run as smoothly as possible.

Define how you’ll collect data early

Eric Ries’ 2011 bestseller, The Lean Startup, describes the MVP as a version of a new product that allows a team to collect the maximum amount of validated learning about customers with the least effort. 

Planning how to collect data and extract insight is critical if you want to learn anything from your MVP. It’s common to deprioritise collecting data as you approach MVP go-live and pressure increases on resources. But remember, the sole reason you’re building the MVP is to generate learnings, and you can’t reliably do that without data.

A live MVP that doesn’t provide any learning opportunities is worthless.

A mixed-methods approach to data collection will provide the most comprehensive data set for generating insights and learnings. Consider how you can use different types of data to generate the learnings you’re looking for. This will be different for each team, product, and organisation, but I’ve listed some ideas to think about below, along with a rough instrumentation sketch after the list:

  • User engagement data to track active users, session durations, and journeys.
  • Conversion data to understand if users are doing the activities you want them to.
  • Retention data to understand if users are coming back or not.
  • Behavioural data to understand what product aspects are used the most.
  • Technical data to understand technical performance, load times, and error rates.
  • User feedback to collect data on satisfaction, pain points, and suggestions.
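
To make this concrete, here’s a minimal sketch of what lightweight event instrumentation covering several of these data types could look like. The event names, properties, and collection endpoint are hypothetical assumptions, not a prescribed setup; most teams would route this through whichever analytics SDK they already use.

```typescript
// Minimal, illustrative event instrumentation for an MVP.
// Event names, properties, and the collection endpoint are hypothetical;
// in practice you would usually route these through your analytics SDK of choice.

type AnalyticsEvent = {
  name: string;                  // e.g. "session_start", "account_opened"
  userId: string;                // anonymised or pseudonymous user identifier
  timestamp: string;             // ISO 8601, useful for session and retention analysis
  properties?: Record<string, string | number | boolean>;
};

async function track(event: AnalyticsEvent): Promise<void> {
  try {
    // Fire-and-forget POST to a hypothetical collection endpoint.
    await fetch("/api/analytics/events", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(event),
    });
  } catch {
    // Never let analytics failures break the product experience.
  }
}

// Engagement: a session has started.
track({ name: "session_start", userId: "u_123", timestamp: new Date().toISOString() });

// Conversion: the user completed the activity the MVP exists to test.
track({
  name: "account_opened",
  userId: "u_123",
  timestamp: new Date().toISOString(),
  properties: { channel: "mobile_app", durationSeconds: 142 },
});

// User feedback: an in-product survey response.
track({
  name: "survey_response",
  userId: "u_123",
  timestamp: new Date().toISOString(),
  properties: { question: "How easy was setup?", score: 4 },
});
```

The exact events matter far less than agreeing them before launch; the point is that engagement, conversion, and feedback data can all flow through one small, hard-to-descope pipe.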

Great Question's evaluation templates are a great way to start collecting some of this data.

Pro Tip: Descoping features from an MVP is inevitable as the pressure to launch increases. If you plan to use a range of data collection techniques, then when the descoping conversations start you can drop some items to support the delivery without losing all the data you need for insight. If you only plan to collect data through one method and that method gets descoped, you won’t learn anything.

Real-world case study

I worked for a finance company that wanted to understand whether clients would use a mobile app. Building the app proved to be far more work than anyone expected. To keep costs and timelines under control, features were cut and cut again, all to get the app live.

The problem came at the end of the go-live day when a stakeholder asked how many clients had logged into the app. All analytics had been descoped. The only way to answer this important question was by having an engineer manually look through server logs. The app was live, but the data collection needed to answer the ultimate research question of whether people would use an app had been removed. When the descoping decision was made, analytics was an easy item to drop from the backlog.

In the hectic world of product delivery, that felt like a good call at the time to meet the deadline. But when you don’t have basic usage analytics at the end of go-live day, the impact of that decision comes sharply into focus.

Evaluate from multiple angles

Unless you’re an early-stage startup, your MVP will likely exist within an ecosystem of other features, products, and services your organisation delivers. If you’re working in a corporate environment, this ecosystem can be vast and ever-changing, but it’s important to understand it.

Building a bubble around an MVP and ignoring the broader context is often tempting but rarely ends well.

To truly understand how well your MVP performs, it’s important to look beyond the basic product performance metrics listed in the section above. Additionally, your MVP should be considered through the lens of the organisation's ultimate vision, mission, and culture. Below are some ideas to measure the success of your MVP through a wider lens:

  • Alignment with strategic objectives evaluates how the MVP contributes to the organisation's mission.
  • Organisational risk assesses how the MVP impacts existing risks or creates new ones.
  • Integration with the broader ecosystem considers how the MVP fits within existing products, services, and processes.
  • Scalability and sustainability examines how likely the MVP is to scale up and perform long-term.
  • The pace of iteration forecasts how quickly the MVP can be updated following feedback.

Evaluating your MVP against these measures is often more challenging and more open to interpretation than the product metrics discussed earlier. However, these are often the insights senior-level stakeholders will be most interested in. Explaining how your MVP contributes to strategic objectives, reduces risk, and increases sustainability will always pique executive interest more than any user engagement data.

If it isn’t immediately obvious how to measure something (and with a lot of this stuff, it isn’t), stakeholder surveys and interviews are a great place to start.

Pro Tip: As a user researcher, some of these metrics may take you outside your natural comfort zone (I know they did when I started considering them). Don’t try to tackle these things alone. Reach out to your business stakeholders for support. If you’re unsure where to start, your product owner is a great person to kick off the discussion with.

Real-world case study

I was working on a banking app feature that enabled customers to open new accounts. The hypothesis was that opening accounts through an app would reduce the number of people visiting branches and calling contact centres. The account opening feature launched and was a big success. The annual sales target was smashed in the first month and satisfaction with the app increased.

However, all was not as rosy as it first seemed. The app did not impact the volume of accounts opened in branches or over the phone. Instead, it drastically reduced the volume of people using the website to open new accounts. The metrics from the app itself showed great performance, but the reality was that the feature simply shifted customers from one self-serve channel to another. This resulted in a new channel that didn’t deliver any commercial benefit because the people using it were already using the website anyway. The account opening feature had a significantly different outcome through that wider lens.

Consider how you’ll incorporate learnings

Your MVP is live, people are using it, and data is flowing in from multiple angles. This can be quite a daunting time, especially if the feedback coming through isn’t positive or isn’t what you were expecting. But you need to prepare for what you will do with the learnings and determine your next steps in different scenarios.

Preparing for how you’ll respond to different learning outcomes avoids knee-jerk reactions that can result in sudden feature bloat.

The worst-case scenario for your MVP is that a small amount of negative feedback comes in and triggers a sudden panic to fix the issue, without anyone considering how reliable the feedback is or what impact the fix will have on the wider roadmap and product vision. This is especially true when the feedback comes from a senior internal stakeholder who hasn’t been involved in the product’s development.

To ensure a robust approach to incorporating feedback and making product decisions, consider your approaches to the following activities before launch:

  • Analysis of data to agree with stakeholders on how feedback will be analysed and presented back to the team and wider stakeholders.
  • Prioritisation of feedback to determine how different types of data will be prioritised and compared. This is especially important if data sources start to show contradictory insight (one way to weigh conflicting sources is sketched below).
  • Backlog revision to plan how different types of feedback will impact what’s on the roadmap and prioritisation.

Having a plan for how feedback will be treated and how decisions will be made in response to the feedback puts you in a strong position to effectively manage all the incoming data and make the right decisions around the next steps for your product.
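
To make the point about contradictory insight concrete, below is one hedged sketch of how signals from different sources could be compared: each source gets an agreed weight and a sample-size-aware confidence, so a single loud anecdote doesn’t automatically outrank quieter quantitative data. The sources, weights, and thresholds here are illustrative assumptions to agree with your stakeholders, not a prescribed methodology.

```typescript
// Illustrative only: one way to compare signals about the same issue from
// different data sources. The sources, weights, and scoring scale are
// assumptions to agree with stakeholders before launch.

type FeedbackSignal = {
  source: "analytics" | "in_app_survey" | "support_tickets" | "stakeholder";
  issue: string;          // e.g. "setup journey too long"
  score: number;          // severity as reported by the source, 0 to 1
  sampleSize: number;     // how many users or data points back this signal
};

// Agreed per-source weights, reflecting how much the team trusts each source.
const sourceWeight: Record<FeedbackSignal["source"], number> = {
  analytics: 1.0,
  in_app_survey: 0.8,
  support_tickets: 0.7,
  stakeholder: 0.4,
};

// Combine signals about one issue into a single priority score,
// dampening sources that rest on tiny sample sizes.
function priorityScore(signals: FeedbackSignal[]): number {
  let weighted = 0;
  let totalWeight = 0;
  for (const s of signals) {
    const confidence = Math.min(1, s.sampleSize / 100); // assumption: ~100 data points = full confidence
    const w = sourceWeight[s.source] * confidence;
    weighted += s.score * w;
    totalWeight += w;
  }
  return totalWeight === 0 ? 0 : weighted / totalWeight;
}

// Example: analytics and survey data disagree with a single stakeholder anecdote.
const setupIssue: FeedbackSignal[] = [
  { source: "analytics", issue: "setup journey too long", score: 0.3, sampleSize: 2000 },
  { source: "in_app_survey", issue: "setup journey too long", score: 0.2, sampleSize: 350 },
  { source: "stakeholder", issue: "setup journey too long", score: 0.9, sampleSize: 1 },
];

console.log(priorityScore(setupIssue).toFixed(2)); // ≈ 0.26: probably not the top priority
```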

Real-world case study

I was working on a consumer finance app (yes, another finance app; it’s like I have a specialism). We were concerned the onboarding process was too complicated and lengthy. Discovery interviews with users identified that an easy setup was important, and the usability research conducted during the design phase showed the setup process was anything but easy. Participants described it as long, clunky, and frustrating.

The product team had many ideas for improving it, but we weren’t sure whether improving the setup process should be prioritised above some of the other changes we also wanted to make. We tracked analytics to evaluate how long people were taking to set up the app, captured abandonment rates, and included a short survey question at the end of the process asking people how they found it.
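
For context, here’s a rough sketch of the kind of calculation behind those numbers, assuming a simple event log with hypothetical setup_started and setup_completed events per user. Our real figures came from analytics tooling rather than code like this, but the shape of the measurement is the same.

```typescript
// Rough sketch of a setup funnel, assuming hypothetical "setup_started" and
// "setup_completed" events per user. Illustrative, not our actual implementation.

type SetupEvent = {
  userId: string;
  name: "setup_started" | "setup_completed";
  timestamp: number; // milliseconds since epoch
};

function setupFunnel(events: SetupEvent[]) {
  const started = new Map<string, number>();
  const completed = new Map<string, number>();

  for (const e of events) {
    if (e.name === "setup_started") started.set(e.userId, e.timestamp);
    else completed.set(e.userId, e.timestamp);
  }

  // Abandonment: users who started setup but never completed it.
  const starters = Array.from(started.keys());
  const finishers = starters.filter((id) => completed.has(id));
  const abandonmentRate = starters.length === 0 ? 0 : 1 - finishers.length / starters.length;

  // Median time to complete setup, in seconds, for those who finished.
  const durations = finishers
    .map((id) => (completed.get(id)! - started.get(id)!) / 1000)
    .sort((a, b) => a - b);
  const medianSetupSeconds = durations.length === 0 ? 0 : durations[Math.floor(durations.length / 2)];

  return { abandonmentRate, medianSetupSeconds };
}
```

Using the median rather than the mean keeps a handful of stalled or abandoned-and-resumed sessions from distorting the “time to set up” picture.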

After launch, we could see that the time to set up was lengthy, but the abandonment rate was very low and the feedback was mostly positive. With that mixed-methods data, we were confident in de-prioritising improvements to the setup journey and focusing resources on other areas that weren’t performing as well.

The bottom line

It can be easy to forget that building an MVP is a process designed to generate learnings and iterate a product.

As important as getting the product live is, equal consideration should be given to what you will do with the learnings you gather.

Proper planning can help you avoid sudden knee-jerk reactions that cause more damage than good to the product. 

Jack Holmes is an independent UX researcher and designer from Bristol, UK. For 10 years he's supported the biggest corporations and tiniest startups to understand people and build better products. He's chaired several UXPA International conferences and enjoys sharing insights and stories at events around the world.
