Sprinting towards your MVP
A LOT has been written about agile sprint-planning and building "minimum viable products" over the years. I think those concepts have become so conflated that they're no longer clear and actionable frameworks for startups and engineering teams.
To help cut through some of that noise, I'm listing a few tips that I've gathered from the front lines of building products at Google and Flatiron School.
In case you're not familiar with the basics, you can read some key concepts here:
One of the original articles that describes MVPs
Tactical guide on scrum and sprint planning from Atlassian
A more strategic take on what goes into sprints and how to set yourself up for success
Defining north star metrics to clearly measure if you're moving the right needles
1. Decide what you're trying to accomplish, then work backwards to define the MVP
Different types of products/companies require vastly different definitions of minimum viable products. For example - I spent a few years building a cloud-managed network caching device at Google. To decide whether to go ahead with building that product, we ran an experiment to validate the user behavior and demand. Since our product's novelty lay in its caching logic and fast content-serving experience, the team spent 6 months building just those key functionalities on store-bought hardware, and simulated or installed publicly available software for everything else.
By hardware standards, 6 months is an extremely short time for building an MVP. In the SaaS world, a small team could have iterated through several versions of a product by then!
So - when you're trying to build an MVP, make sure you've defined:
the goal of creating this MVP (usability? price sensitivity? etc)
the key concepts or functionalities that you're trying to test
how much of those functionalities need to be built by your engineering team
2. Structure sprints by "Critical User Journeys"
Once you've defined your MVP, you now want to break that engineering work into sprints. Each sprint should be structured to enable incremental testing and validation. You don't want to wait until the last minute to put all the pieces together. I can guarantee you something won't work properly if you do it that way.
Each sprint should deliver a clear set of functionalities or user experiences. Those functionalities should be incremental so that they're easier to build and test. In the scrum world, this tactic is called elephant carpaccio (hey that name is memorable). From a product perspective, you want to incrementally build for "critical user journeys".
The right way to define a CUJ: the features connect to enable an important usage path or user experience, and the internal team is then able to test and give early feedback on that experience. Food ordering example - a user searches on Yelp, clicks "Order Takeout" on the restaurant tile in the search result, lands in the menu, selects the food, adds a credit card on the order screen, then clicks "OK". In the sprint, the team added the "order takeout" button and "add credit card" module so that the entire ordering user flow is now connected.
The wrong way to define a CUJ: the features may be clustered into one feature space, but they don't map to how users would actually navigate your product. Food ordering example again - this time, your team added a fully-featured credit card screen. But this means a new user would need to abandon their order flow in order to set up their credit card, then go back and start the order from the beginning.
3. Define what "success" means, then don't change it
I've seen this often, and have fallen victim to it myself as well. In fact - this behavior has been faulted for causing the latest "reproducibility" crisis in academic research. Google even runs internal courses on statistical analysis that teach employees how to avoid these types of experimental biases.
Before you take that MVP to customers, write down the parameters and possible next steps:
How many users (or data samples) do you need?
What type of result is considered successful?
What are your set of next steps, in anticipation of different results?
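The first question - how many users you need - is one you can actually estimate up front rather than eyeball. As a rough sketch (my own illustration, not a method from any of the articles above): if you're comparing a baseline conversion rate against the lift you hope the MVP produces, a standard two-proportion power calculation gives you a minimum sample size per group. The function name and defaults below are hypothetical; the 5% significance level and 80% power are just conventional starting points.

```python
import math
from statistics import NormalDist  # Python 3.8+ standard library

def sample_size_per_group(p_base, p_target, alpha=0.05, power=0.80):
    """Approximate users needed per group to detect a change in
    conversion rate from p_base to p_target (two-sided test)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for significance
    z_beta = NormalDist().inv_cdf(power)           # critical value for power
    variance = p_base * (1 - p_base) + p_target * (1 - p_target)
    effect = abs(p_target - p_base)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / effect ** 2)

# e.g. baseline 10% conversion, hoping the MVP lifts it to 15%
print(sample_size_per_group(0.10, 0.15))
```

Writing this number down before the experiment starts is exactly what keeps you honest later - if the result is ambiguous at that sample size, the answer is your pre-committed next step, not "run it longer".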
We often don't hit clear home runs with our products on the first try, so then what? Well, here's what not to do. Some people fall into indecision - dragging their feet, extending the experiment in the hopes that the numbers will get better. Others start slicing and dicing the data, hoping to find a nugget of gold somewhere that they didn't expect. These tactics may eventually lead to great-looking numbers, but that doesn't mean your experiment was a success. Nate Silver even built a great demo explaining why this doesn't work.
Another way to say it is:
Just because a million monkeys tapping on a million typewriters one day produced a manuscript for Hamlet, doesn't mean all monkeys can write Shakespeare.
One thing you can do with these experimental "nuggets of gold" is to home in on actual user needs, and start re-aiming your product to address those needs. The email service Superhuman wrote a great article about finding product-market fit with this method.
So, my three pieces of practical advice on sprinting towards your MVP are:
First, define what you're trying to accomplish with that MVP
Build in increments of "user journeys" so you can validate internally earlier
Look for opportunities in the experiment results but don't fudge the math!