Q&A with Dan Olsen
Dan is author of The Lean Product Playbook: How to Innovate with Minimum Viable Products and Rapid Customer Feedback. He joined us to talk about how to find your MVP, the Oprah vs. Spock effect and the surprising reason why most entrepreneurs fail.
There are a number of books that focus on Lean processes. Why focus just on Minimum Viable Product?
The problem with Minimum Viable Product is that there’s no formal definition. As a result, you'll see people getting into flame wars over what an MVP is and isn't: Can a landing page be an MVP? Does an MVP need to be a finished product? There is a wide range of interpretations around MVP, which is why it's worth a closer look.
I prefer to focus on MVP tests, which is an umbrella that can accommodate all of those arguments. Yes, a landing page - or even wireframes, for that matter - can be an MVP test because you learn from it. A landing page isn’t an actual product, but it can be enough to get actionable feedback on how to proceed with your development: a sense of conversion rates and how effective your marketing is.
Experienced practitioners realize that you’re not going to get it right the first time - they realize that, instead, you should just get something going and iterate. Get that torpedo going in the right direction, and steer from there. That’s the underlying idea behind Minimum Viable Product - embracing that iterative approach.
The Lean Product Playbook is an attempt to explain Minimum Viable Product as a science, rather than a theory, in order to increase your odds of success. We all know that building products is hard - you won't hit the bullseye the first time you shoot. So rather than trying to establish a definition, let’s come up with ways to identify the risky hypotheses, and cheap, fast ways to test them.
You've had an interesting career, working first at Intuit and then at many startups building products for web and mobile. You also have degrees in business and engineering. How did those experiences shape The Lean Product Playbook?
I’ve always loved working on technology products - specifically, on figuring out how what we’re building creates value for customers. That’s basically what product management is, but being good at it isn’t something you learn in business school. Those are experiential skills.
One of my formative early experiences was at Intuit, where I got exposure to everything that makes up customer value: development, marketing, UX design, all of it. Since then, I’ve worked at many startups on new products in web and mobile, gradually developing my own set of “best practices” by continually building skills in each of those areas, and comparing notes with people who are really good at it.
Steve Blank always recommends that people “get the heck out of the building” - in other words, go talk to people. I've found this to be true in my experiences working with startups. I think a lot of people have reached the same conclusion by now: Anyone can come up with ideas, but far fewer people actually get out there and test whether it's what customers want or not. Books like The Lean Startup showed us why; The Lean Product Playbook is my attempt to explain how.
You spend a lot time on defining your target customer. Do you recommend a data-driven approach? Or could coming into it with a qualitative approach be more valuable?
I think it’s more the latter, to be honest with you.
In my talks, I sum this up with the Oprah vs. Spock dichotomy: Oprah does the nice, long qualitative interviews with people, and Spock is the purely quantitative person. I think a lot of people fool themselves into thinking that there’s a lot more certainty in quantitative data than there actually is. In terms of market research - as in, this research firm says this is going to be a $10 billion industry or whatever - in many cases, it’s just too high-level to be actionable or truly useful.
That approach is certainly important - you do need to understand the market. But when it comes time to actually design and build a product that appeals to people, market data is virtually useless. For developers, one of the appeals of A/B testing is the idea that you can “prove” things - with data, you can prove that one approach is better, and that therefore your product will be successful. But the reality is, when you’re building version one, you don’t have the hundreds of thousands of users you need for statistical significance.
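Olsen's point about statistical significance can be made concrete with a quick back-of-the-envelope calculation. This is a minimal sketch (the 1.96 and 0.8416 z-values correspond to the common 95% confidence / 80% power setup, which is our assumption, not something Olsen specifies) of how many users per arm a two-proportion A/B test needs:

```python
import math

def ab_test_sample_size(p_base, p_variant, z_alpha=1.96, z_power=0.8416):
    """Approximate users needed per arm to detect a conversion change
    from p_base to p_variant at 95% confidence and 80% power,
    using the standard two-proportion z-test approximation."""
    variance = p_base * (1 - p_base) + p_variant * (1 - p_variant)
    effect = (p_variant - p_base) ** 2
    return math.ceil((z_alpha + z_power) ** 2 * variance / effect)

# Detecting a lift from 5.0% to 5.5% conversion takes roughly
# 31,000 users per arm - far more traffic than a brand-new product has.
print(ab_test_sample_size(0.05, 0.055))
```

The smaller the lift you're trying to detect, the faster the required sample grows, which is exactly why early-stage teams lean on qualitative "Oprah" interviews instead.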
That's why when you’re building that first version it really is more of an “Oprah” exercise - you might be talking to groups of 8-10 people, getting their feedback and iterating from there. You need to understand the problem space first, and from there it’s a process of peeling back the onion - talking to real people and gradually building that feedback into a product that resonates.
What do you see as the most common missteps that founders make in developing their Minimum Viable Product?
I think the biggest issue is that few teams actually take the time to develop unique value propositions. A value proposition is the strategic view that defines how your product is better than or different from the competition - it's the fundamental groundwork for building and marketing any successful product. Yet a surprising number of teams skip this step.
One reason is that teams don’t realize they need to - no one on the team has failed for this specific reason before, so no one knows to look out for it. Another reason is not understanding your competitive landscape. There’s a certain fallacy common among entrepreneurs: the assumption that you are the target customer. If there are a lot of people just like you, this assumption could work out. But if you can’t dissociate your preferences from those of the target customer, then you might have a problem.
In other cases, a reason for falling short in your MVP might be technical challenges: you have this really cool thing you’re trying to do, and when you’re doing ambitious R&D like that, there is a certain degree of uncertainty. And kudos to teams who try to do something very challenging - Cuil is one of the examples I mention in my book, a team whose goals put them up against Google from the outset. There is always risk in that.
One of the most important concepts in building an MVP is prioritizing the right features. What are some strategies to define your feature set?
The guidance from your value propositions is critical in this step of the process. Say you’ve defined one of your value props as “we’re going to do X better than anyone else out there” - then that feature had better be in your feature set. You’re really testing your differentiators, so the MVP needs to contain enough of those factors; and of course, it also needs to contain enough of the “must-haves” that customers understand what it is.
My advice is to try to get away with as little as possible. And if you end up cutting too deep, you will definitely hear it from people in that first test you do.
In defining your feature set, there are two main mistakes you can make. One, you can fail to include a feature that people view as critical. Conversely, there may be a feature that you thought was important - but then in all your tests, no one talks about it, no one says they would use it, and it’s clear that they don’t value it. These errors are the reason why I like to break things down into feature chunks. The chunks mitigate risk: this way, if you do miscalculate on a feature, it’s a small miscalculation. Many entrepreneurs tend to view features as more monolithic than they really are.
Finally, an ROI approach to prioritization - even if it’s a subjective opinion about the relative value of features versus the resources they take to build - is also very helpful.
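The ROI approach Olsen describes can be sketched in a few lines. This is a minimal illustration, not his method verbatim - the feature names and 1-10 value/effort scores below are hypothetical, standing in for a team's subjective estimates:

```python
# ROI-style prioritization: score each candidate feature's estimated
# customer value against the effort to build it, then rank by the ratio.
# All names and numbers are hypothetical placeholders.
features = [
    {"name": "one-click export",  "value": 8, "effort": 2},
    {"name": "custom dashboards", "value": 9, "effort": 8},
    {"name": "dark mode",         "value": 3, "effort": 3},
]

for f in features:
    f["roi"] = f["value"] / f["effort"]  # rough value per unit of effort

# Highest ROI first: even subjective estimates surface the quick wins.
for f in sorted(features, key=lambda f: f["roi"], reverse=True):
    print(f'{f["name"]}: ROI {f["roi"]:.1f}')
```

Even with made-up numbers, the ranking makes the trade-off visible: a modest feature that is cheap to build can outrank a flashier one that consumes a whole release.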
One of the most intriguing takeaways from the book is the idea of features tailored really well to specific “user personas” that make up your target audience. Can you think of any products that have done this really well?
Now, this might not be the best example in the world - but just yesterday one of my clients, a lead engineer, started using Pivotal Tracker. (It’s gotten fairly popular, and I’ve used it quite a bit myself.) It has a couple of keyboard shortcuts that, unless you’re a developer, you wouldn’t otherwise know about. It’s something you wouldn’t necessarily need unless you were doing the specific work that developers do. But they love this feature - and in this case, I think Pivotal did a great job of building those features tailored to “power users” into their MVP.
These “delighter” features make users feel like you know them, you get what they’re about. Slack is good with this too.
This is the same framework that a designer named Aarron Walter wrote about in a book called Designing for Emotion: functional, reliable, usable and delightful are the core values at play. A lot of people will do only what’s on the left side of this matrix, assuming that since it’s an MVP they don’t need to build a full product. They may just completely punt on basic stuff like making it bug-free and easy to use. On the right, you’re acknowledging that those things aren’t the whole picture: the best products also achieve a level of delight that a great UX designer can bring.
What are some of your personal favorite product development tools?
For wireframing, Balsamiq is a great tool. It comes in handy at that stage when you’ve determined your feature set and it’s time to build a prototype - and the beauty of it is that it's easy enough for anyone to use (yes, even MBAs like myself)! Designers might have their power tools, like Photoshop or Illustrator or Sketch, but if anyone out there wants to make a simple visualization of an idea, Balsamiq is a great way to do it.
InVision is another one. It’s similar, but InVision is more for once you have nice, high-fidelity mockups that your designer has created with their power tools. Your designer delivers those JPEGs, you upload them into InVision, and any non-technical person can go in there and make comments and change requests. It runs in a browser, so it looks like the real deal - which means you can establish a multi-step process for iterations that could include user feedback. It’s very rich, and at this point you haven’t even done any coding yet.
Finally, what are the biggest misconceptions around MVP that founders need to avoid?
Too often, the misconception around MVP is that it’s an excuse to launch with something that’s subpar or incomplete. Some entrepreneurs use MVP as a rationalization for taking shortcuts, releasing a product that has only a fraction of the functionality, without worrying about reliability, ease of use, or delight. If you test a product like that - lo and behold - customers won’t want it. This is where MVP tests can come in handy: if a test like a landing page can give you quantitative feedback (say, click-throughs) and qualitative feedback (reactions from a few people on the page) - that’s valuable.