MVP confusion

Note: I started drafting this blog post a long time ago and thought I’d finish it off. A lot has been written about MVP since then. Here’s my 2 pence worth.

When I joined GDS, working on GOV.UK, the use of the term “MVP” was commonplace — “Can we leave that out of the MVP?”, “Do we need to allow people to manage the ordering of items on the organisation homepage?”

Something bugged me about these conversations and I couldn’t quite put my finger on it. The principle of avoiding scope creep and gold-plating was there, but the emphasis of the conversations was always on “what features could we leave out”. Somehow it didn’t quite have the same focus as XP’s principle to “do the simplest thing that could possibly work”. In the latter, there’s an emphasis on doing things to a high level of quality, but without unnecessary moving parts. There was a focus on MVP, but I was anxious that people weren’t really taking into account the longer-term consequences of decisions on the maintainability or operability of the software we were designing.

Over the years I did some digging into the term and its origins. Mostly those seem to reference Eric Ries’s work on Lean Startup. Interestingly, Eric highlights the most glaring problem with the term in a 2009 blog post, “Minimum Viable Product: a guide”:

MVP, despite the name, is not about creating minimal products.

and yet many people, myself included, get easily drawn into focusing on this “minimal” word.

So, what is MVP about?

Eric explains that MVP is about enabling a team to “collect the maximum amount of validated learning about customers with the least effort.”

It’s about maximizing learning about customer needs.

How can we learn cheaply? For me the optimal learning requires:

  1. knowing which hypotheses have or have not been validated
  2. of the unvalidated hypotheses, knowing which have the most bearing on potential business opportunities. This analysis has to take into account both the size of the opportunity and the cost for your organisation to meet it (i.e. do you have a strategic business advantage in that market segment?)
  3. of those, knowing the cost of conducting a test or experiment to validate them
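To make the prioritisation concrete, here’s a toy sketch of those three steps as a scoring exercise. This is purely illustrative — the hypothesis names, figures and the scoring formula are all invented for this example, not anything from Ries’s work:

```python
# Toy sketch: rank unvalidated hypotheses by expected learning value per unit of cost.
# All names and figures are invented for illustration.

hypotheses = [
    # (name, validated?, opportunity size (£), strategic fit 0-1, test cost (£))
    ("Users want email alerts", False, 50_000, 0.9, 2_000),
    ("Users want a mobile app", False, 200_000, 0.3, 40_000),
    ("Users read the FAQ page", True, 10_000, 0.5, 500),
]

def learning_priority(h):
    name, validated, opportunity, fit, cost = h
    # Step 1: only unvalidated hypotheses are worth spending effort on.
    if validated:
        return 0.0
    # Step 2: weight the size of the opportunity by how well placed
    # the organisation is to exploit it.
    # Step 3: divide by the cost of running the experiment.
    return (opportunity * fit) / cost

ranked = sorted(hypotheses, key=learning_priority, reverse=True)
for h in ranked:
    print(f"{h[0]}: priority {learning_priority(h):.1f}")
```

The exact formula matters far less than the discipline of writing the hypotheses down and being honest about which ones are still unvalidated.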

Answering each of those questions above requires discipline. Getting the required level of rigour around a scientific learning approach requires determination.

Origins of the term “minimum viable product”

The thing I find most fascinating about the history of the term, and which I haven’t heard many folks talk about, is that it originated in the marketing world, in the context of “synchronous product and customer development”. For many software engineers, the term “customer development” is puzzling. The idea that a market for an innovative product may not exist and needs to be cultivated, nurtured, grown and encouraged through an investment of skill and effort… well, it’s outside the normal run of problems that engineers face.

When viewed from the context of a world where customer development is a major activity, the idea of a “minimum viable product”, developed synchronously and iteratively with the constituency of prospective customers, makes much more sense.

Given how much confusion the term MVP causes, I wish we could use something else, or perhaps not use it at all.

Instead let’s ask ourselves every day, how can we learn effectively but cheaply about the needs of our users and customers?