How to Pilot Your Minimum Viable Product
Things will break and customers will get mad, but that’s all part of the process
Here’s the catch-22 about experimentation: It kills your revenue, your customer reputation, your employee morale, and your business rhythm. But if you don’t experiment, all those things will wither and die anyway.
A minimum viable product (MVP), as the name implies, is the experimental version of your product, one that is just functional enough to provide value to the customer. When built right, the core of your MVP — the functionality where the customer finds value — is the only robust part. The rest is sneakernet, vaporware, and duct tape — a bunch of manual processes designed to substitute for automated processes before you spend a bunch of money on automation.
Now, if you launch your MVP without adequate preparation, those manual processes can dislodge and sabotage your launch, like a loose bolt in a rocket, ultimately dooming your MVP to failure.
If you want to reduce the risk of a pre-orbit MVP explosion, you first need to run a proper pilot.
An MVP pilot dramatically reduces the risk of experimentation
A number of experiments go into an MVP:
- You’re testing assumptions that the core value you’re providing to the customer justifies the costs to scale.
- You’re testing the hypothesis that said value can be delivered efficiently by the system you’ve designed.
- You’re making educated guesses that the automation you need can be built and maintained at a cost that returns decent margins.
Your MVP pilot should conduct these experiments. The results should show you where you’re bleeding revenue, destroying your customer rep, knocking over your employees, and disrupting your rhythm.
Here’s how you do that.
Before you pilot, warn your team
I don’t have to tell you that any launch can be stressful and full of failure. You need to expect the unexpected — your customers, suppliers, partners, even your employees will all do things you never would have imagined they’d do.
But I need to tell you there’s no guarantee that the fix you come up with will stick the first time, or the second, or the seventh.
Furthermore, your MVP will get shitty feedback. You will probably fail a few customers, and at least one of them won’t have any empathy or understanding for what you’re trying to accomplish.
It’s gonna get heated.
Warn your team that things will break and customers will get mad. Let them know that the pilot is going to suck, but it’ll be temporary.
Declare a central playbook
A pilot moves at lightning speed. Before you go live, everyone on your team, including outside partners and suppliers, should know how to do their job. After you go live, everyone should know what to do when their job changes.
Create a centralized repository with all the information your team will need to be successful. Keep it simple, make an index so it’s easy for everyone to find the information they need, and keep it updated.
Centralize your communications as well. Nothing leads to severe mistakes faster than splintered conversations that keep key people out of the loop until it's too late.
I also recommend a plan of attack — another centralized place for communicating learnings and changes. While the playbook describes how people do their job, the plan of attack announces shifts and changes to that playbook. Everyone should read the plan of attack at the beginning of every day and keep up with the centralized communications as the day goes on.
Ask for input and give ownership
Now that your team is prepared and has the tools they need, let’s get the best out of that team.
Ask everyone for their input before and during the pilot:
- What do they see that’s working and isn’t working?
- What are they doing that’s easy, and what’s difficult?
- Which parts of the process make sense, and which don’t?
- What changes would they make?
Then, give them ownership — the leeway to make changes, workarounds, and fixes on their own.
Because if there’s one thing that’ll knock you over, it’s the million questions you’ll get and the million decisions you’ll need to make. Something like 90% of these will be filtered through and to your team. Imagine if they have the authority to answer some of those questions and make some of those decisions on their own.
Let them make mistakes. But whatever they do, make sure they communicate what they’ve done.
Measure everything, and make decisive changes
I’m a data superfreak. I like to know when things happen, what happened, and how much of it happened. Then I like to detect patterns and test assumptions and draw conclusions. After that, I make decisions.
Collect data at every point of the pilot, even if it’s just listing stuff in a spreadsheet. Keep a constant eye on what the data is telling you, how it changes, and why it changes. Do things to proactively make the data change, and see if those changes stick.
Your success will come from doubling down on trends and patterns, not on pre-pilot expectations. You'll need to figure out when to stop doing the things you're doing wrong and when to start doing the things that'll set them right.
When you make changes, make them quickly and decisively. Declare thresholds for action ahead of time. Measure everything twice, and cut and add with authority.
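If you're tracking pilot metrics in a spreadsheet export or a small script, declaring thresholds ahead of time can be as simple as a table of trigger values checked against each day's numbers. Here's a minimal sketch of that idea — the metric names, numbers, and actions are made up for illustration:

```python
# Pre-declared action thresholds for the pilot (illustrative numbers).
# Each entry: metric name -> (limit, direction, action to take when breached).
THRESHOLDS = {
    "refund_rate":     (0.10, "above", "pause signups and fix fulfillment"),
    "support_tickets": (25,   "above", "add a second person to support"),
    "daily_signups":   (5,    "below", "flex price down toward the minimum"),
}

def actions_triggered(todays_metrics):
    """Compare today's numbers against the pre-declared thresholds."""
    triggered = []
    for metric, (limit, direction, action) in THRESHOLDS.items():
        value = todays_metrics.get(metric)
        if value is None:
            continue  # metric not collected today
        breached = value > limit if direction == "above" else value < limit
        if breached:
            triggered.append((metric, value, action))
    return triggered

# Example: a rough day in the pilot.
today = {"refund_rate": 0.12, "support_tickets": 18, "daily_signups": 3}
for metric, value, action in actions_triggered(today):
    print(f"{metric} = {value}: {action}")
```

The point isn't the code — it's that the limits and the actions are written down before the pilot starts, so the decision is made in advance and you just execute it.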
Be flexible with your pricing
Maybe you’ve already settled on your pricing. Maybe you’ve gone through all the research and competitive analysis, you’ve surveyed your customers and your potential customers, and you have a number, or at least a range.
That’s great, but there’s no way to know if your pricing is correct until customers actually start paying you. A pilot is a great way to get answers without committing to prelaunch assumptions that might be a little or a lot wrong.
What I like to do during the pilot is figure out my minimum and maximum pricing — the least I can sell for and still turn a profit, and the most I think people will pay for the value I'm providing — and play around in that range.
I use prelaunch pricing during the pilot, discounting what early adopters pay to offset the headaches they'll inevitably put up with during this phase. Pilot customers won't be paying full price, and they'll know it.
But I don’t necessarily need to tell them what the full price is, and I don’t need to advertise the amazing(!) huge(!) discount(!) they’re going to get. I want them to understand that the price they’re paying is real and close to full price, because the further away from full price they think they’re getting, the less they’ll act like real customers.
Then, if I want to acquire a bunch of customers at the beginning, I flex down close to my minimum price. If I want to test value, I flex up closer to my maximum price.
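That flex-within-a-range move can be sketched in a few lines. Assume you've already worked out a minimum (cost plus margin) and a maximum (perceived value); the dollar amounts and the `flex` knob below are illustrative, not a formula from the article:

```python
def pilot_price(min_price, max_price, goal, flex=0.2):
    """
    Pick a pilot price inside the [min_price, max_price] range.
    goal="acquire":    flex down near the minimum to pull in early customers.
    goal="test_value": flex up near the maximum to test willingness to pay.
    flex is how far in from the edge of the range to sit (0 = at the edge).
    """
    if min_price > max_price:
        raise ValueError("minimum price exceeds maximum price")
    spread = max_price - min_price
    if goal == "acquire":
        return round(min_price + flex * spread, 2)
    elif goal == "test_value":
        return round(max_price - flex * spread, 2)
    raise ValueError(f"unknown goal: {goal}")

# Example: minimum $40 (cost + margin), maximum $100 (perceived value).
print(pilot_price(40, 100, "acquire"))     # -> 52.0, near the minimum
print(pilot_price(40, 100, "test_value"))  # -> 88.0, near the maximum
```

Sitting slightly inside the range rather than at its edges keeps the price feeling real — close enough to full price that pilot customers still behave like real customers.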
Always do right by the customer
Reiterate to your team every single chance you get that when in doubt, they should always do what’s best for the customer.
When something inevitably goes wrong, fix the problem for the customer first, and then fix the system later. Your goal is that the customer should never pay for your experiments, learnings, and mistakes. When they do wind up paying, make them whole and make sure it doesn’t happen again. Then, keep your eye on the problem, because, like I said, your first fix may not work.
Remove loose links in the system immediately. If someone is falling over, get them out of the way, temporarily or permanently. If the problem is software or some other system, replace it with something else. (Remember, everything should be mostly manual in a pilot.) If the problem is a part of the core value, immediately reconsider that value and make your decisions accordingly.
You’ll either hit the next phase and launch a solid MVP, or you’ll launch nothing. If the latter happens, at least you’ll have learned the right lessons, which will give you ammo for your next experiment. You might even pull learnings out of a pilot failure that help the business in ways you never thought of.
A pilot will let you do all that before you’ve saturated the market with an MVP that’s doomed to fail.