Welcome to The MVP UX Research Playbook. If you missed part 1, Defining an MVP with mixed methods user research, check that out first. It explores the user research techniques to include in your definition phase so you can bring user input into the very beginning of the product lifecycle. In part 2, I’ll explore some of the most effective user research methods to include during your MVP design phase.
—
A common theme I hear in retrospectives is, “We should have talked to real users earlier in the process.”
A relentless focus on getting the product to market can lead to de-prioritising design research in the name of speed. But this is a false economy; the reason for getting the product to market is to learn. Design research offers insight earlier, with less investment and a lower level of risk.
Don’t wait to learn that a design doesn't work after it’s live. Learn it doesn’t work before you build it.
Conducting user research during the design phase enables early insight, giving your MVP the highest likelihood of succeeding in the real world. In this article, I’ll explore some of the most popular and effective research techniques to use during the design phase, from understanding users’ mental models to testing prototypes.
Understanding how users think is critical to delivering products that feel intuitively easy to use. A deep understanding of how users think about a subject enables journeys and experiences to be designed in a way that aligns with how they expect them to work. You can have the slickest user interface in the world, but if the information architecture (IA) doesn’t match user mental models, people will forever struggle to use the product.
Erika Hall writes in her book, Just Enough Research:
“In design, intuitive is a synonym for matches the user’s mental model.”
Early in the design phase, consider using two complementary research techniques, card sorting and tree testing, to do just this.
Card sorting is a research method that aims to understand how people think and categorise information. The method involves participants arranging cards that represent pieces of information, topics, or pages into groups that they think relate to each other. These groups can either be pre-defined (closed sort) or created by the participant (open sort).
Imagine you are designing the first-ever supermarket. You have hundreds of items and need to work out how to organise them so people coming into the shop can find what they need. Card sorting will illustrate how people would naturally group items and, therefore, where they would expect to find them.
For instance, some people would group items by the type of food. They would create a group for all the fruit, another group for the drinks, and another for the herbs. Other people might think differently and create groupings based on colours. They’d place all the red items together and all the green items together. You’d end up with a supermarket where kiwis, green tea, and basil are all in the same place. Other people may decide to organise the items alphabetically. So you’d walk into the shop, and the first items you saw would be asparagus, almonds, and apple pies.
By observing how people group related items, we can build an understanding of how they think about topics.
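The raw output of an open card sort is each participant’s set of groups. A common way to analyse it is to count how often each pair of cards ends up in the same group, which reveals the strength of agreement across participants. Here’s a minimal sketch of that analysis, using hypothetical sort data based on the supermarket example (the card names and groupings are illustrative, not real study data):

```python
from itertools import combinations
from collections import Counter

# Hypothetical open-sort results: each participant's own groupings of cards.
sorts = [
    {"Fruit": ["banana", "kiwi"], "Drinks": ["green tea", "cola"]},
    {"Green things": ["kiwi", "green tea", "basil"], "Other": ["banana", "cola"]},
    {"A-B": ["banana", "basil"], "C-K": ["cola", "green tea", "kiwi"]},
]

# Count how often each pair of cards lands in the same group.
pair_counts = Counter()
for sort in sorts:
    for group in sort.values():
        for a, b in combinations(sorted(group), 2):
            pair_counts[(a, b)] += 1

# Agreement score: the fraction of participants who grouped the pair together.
agreement = {pair: n / len(sorts) for pair, n in pair_counts.items()}
for pair, score in sorted(agreement.items(), key=lambda kv: -kv[1]):
    print(f"{pair[0]} + {pair[1]}: {score:.0%}")
```

Pairs with high agreement (here, kiwi and green tea are grouped together by two of the three participants) are candidates to sit together in the information architecture; low-agreement pairs signal areas where mental models diverge and more research may be needed.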
This understanding enables us to design the MVP in a way that makes finding information easy. It’s much more efficient to get this design right at the beginning than trying to retrofit changes once your MVP is live.
Tree testing complements card sorting and is typically run after a card sorting activity. Once you know how people think, you can build out one or more text-based information architectures to evaluate how easy or difficult it is for users to find what they’re looking for.
The method involves participants using a text-based information structure to indicate where they would expect to find different items, topics, or pieces of information. The text-based structure ensures that no visual components or styles influence the research.
Following our supermarket example, we could create three different text-based information architectures and evaluate which performs the best. For example:
Food type information architecture:
- Fruit: bananas, kiwis
- Drinks: green tea
- Herbs: basil

Colour-based information architecture:
- Green items: kiwis, green tea, basil
- Red items: tomatoes, strawberries

Alphabetical information architecture:
- A: almonds, apple pies, asparagus
- B: bananas, basil
Participants are given a task such as “Where would you expect to find bananas?” and attempt it against each structure. Metrics to measure include the proportion who successfully found the item, how long it took them, and how direct the path they took was.
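Those three metrics are straightforward to compute from tree-test session records. A minimal sketch, using hypothetical results for a single task against one candidate structure (the figures and the optimal path length are illustrative assumptions):

```python
from statistics import median

# Hypothetical tree-test results for the task "Find bananas" against one IA.
# Each record: whether the participant found the item, seconds taken,
# and how many nodes they visited (the shortest correct path is 3 nodes).
results = [
    {"success": True,  "seconds": 12.4, "nodes_visited": 3},
    {"success": True,  "seconds": 30.1, "nodes_visited": 7},
    {"success": False, "seconds": 55.0, "nodes_visited": 9},
    {"success": True,  "seconds": 18.9, "nodes_visited": 4},
]

OPTIMAL_PATH_LENGTH = 3

success_rate = sum(r["success"] for r in results) / len(results)
median_time = median(r["seconds"] for r in results if r["success"])
# Directness: how close successful paths came to the shortest possible path.
directness = median(
    OPTIMAL_PATH_LENGTH / r["nodes_visited"] for r in results if r["success"]
)

print(f"Success rate: {success_rate:.0%}")         # 75%
print(f"Median time (successes): {median_time}s")  # 18.9s
print(f"Median directness: {directness:.2f}")      # 0.75
```

Running the same tasks against each candidate structure and comparing these numbers gives you an evidence-based way to pick the information architecture that best matches users’ mental models.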
By understanding how people think and designing products around their mental models, the MVP you build will naturally feel simple and easy to use.
Getting these foundational design components right initially can create major efficiencies later in the product lifecycle.
I joined a team that had delivered an MVP app with a basic set of features for customers. The customer feedback wasn’t great, and every metric said the MVP feature set wasn’t useful. However, one of the root causes of this feedback was that people couldn’t find the features they were looking for.
The product team was keen to quickly add more features to improve the app’s usefulness. Unfortunately, this just made the problem worse. The more features that were added, the more difficult people found the navigation, and the worse the reviews got.
The features had been hastily built without considering customers’ mental models or how more features might be added in the future. The focus was on building fast over findability or future expandability. The information architecture and navigation had to be completely redesigned from the ground up to fix these problems. The MVP that was built couldn’t evaluate whether the feature set was right or whether people found the features useful. It was a very expensive way to learn that the information architecture was wrong.
The desire to deliver “something” fast ultimately slowed down the speed of delivering value to customers.
By skipping over early information architecture design and research, the team put an app into the market only to learn a bunch of lessons that they could have learned through user research much faster and cheaper. Ultimately, the product went backwards to try and go forward again.
Prototype testing involves sharing concept prototypes or wireframes with users on the device they would actually use. This type of research is effective at validating early design concepts or features before effort has gone into building them for real. It’s generally most impactful to include multiple features and concepts in a single prototype: having everything in one place helps you gather insight into relative importance and usefulness, which is valuable when making prioritisation decisions.
In the early stages of designing the MVP, prototype testing isn’t intended to test the fine details of a product; instead, it showcases a concept or idea for feedback. A simple greyscale clickable prototype is often all you need, and it can be generated quickly in design tooling. There’s usually no need to write any code, though if building in code happens to be faster, that’s a perfectly valid choice. Gather insight on how participants would use the prototype, what impact it would have on them, and how it could provide more benefit.
As designs progress, so will the feedback you’re looking for, and the fidelity of your prototypes will need to increase to match. Usability insights around effectiveness, efficiency, and satisfaction will start to emerge as you become more interested in “how” the product is designed than “what” the product is designed to do. Getting feedback on the core design elements is critical before getting too deep into the build. A robust, well-tested component library or design system will prove invaluable in later stages of the build and when making enhancements post-MVP go-live.
Iterative research is key. You don’t want to put a complete UI in front of users only to find out the product doesn’t solve the problem you thought it would.
Prototype research feeding into an MVP must be conducted at the right time to be most effective. Starting with a good research plan helps, but most critically, it’s important to be flexible with an approach that delivers the most value to product teams when they need it for prioritisation and decision-making.
Being clear on research goals and timelines avoids the challenge of bringing insight to a product team only to have it be too late to take action.
I was working at a finance company that was hastily developing an app. The lack of digital services was resulting in a rising cost base as the company grew, and the board was keen to reduce the cost to serve customers by enabling digital self-service journeys as they’d seen competitors do.
The design strategy was to be a fast follower. There was no desire to innovate or change the world; simply replicate tried-and-tested industry standards to keep the cost from spiralling as the company grew. Given the time pressure and a design strategy of replicating known industry patterns, there was pushback on doing any research; everything was, after all, already well tested in the market.
These weren’t the first stakeholders to tell me “no” and they won’t be the last. Fortunately, I found some research allies in the product team that gave me a little bit of budget and time — so little, no one senior would care that the research was happening. I reused an existing wireframe prototype and wrote a handful of tasks for participants to complete in an unmoderated study. I was sceptical at how much we’d learn, but some research is better than none.
What we learned initiated a simple design change that saved the organisation significant costs.
Navigation items were hidden behind a burger menu (a tried-and-tested design pattern), except for one button that launched a live chat feature. For any task that wasn’t immediately obvious to complete from the app homepage, users went straight to “chat with us” and said they would ask the live chat agent, even though the information was accessible in the app. Watching the unmoderated recordings showed that if a task wasn’t obvious after a bit of scrolling around, users simply assumed it couldn’t be done in the app. These were customers of a finance company where most tasks required a phone call, so it’s not surprising that most assumed they needed to speak to a person.
Simply making the navigation area more obvious encouraged users to explore the app more before using the live chat feature, which resulted in significantly fewer people choosing to use the live chat to complete the task. Subsequently, this reduced the demand for live chat agents and the cost of serving those customers.
Had the research not been conducted, instead of reducing the cost to serve customers, the organisation would have simply moved the cost from telephone to live chat. A simple design change originating from a shoe-string budget research study aligned the app with the strategic objectives of the organisation.
Building an MVP and getting it to market quickly is a fantastic way to learn about your product and the market it serves. However, significant lessons can be learned with mixed methods user research before the product is live. Don’t waste resources getting an MVP to market only to learn something that you could have learned much faster and cheaper with early user research.
Jack Holmes is an independent UX researcher and designer from Bristol, UK. For 10 years he's supported the biggest corporations and tiniest startups to understand people and build better products. He's chaired several UXPA International conferences and enjoys sharing insights and stories at events around the world.