Here is something that happens with impressive regularity in the startup world.
A founder has an idea. A good one. They build an MVP — spending real money, real time, real emotional energy — and then they launch it to the world, which in practice means posting it on LinkedIn and sending it to their mum. They wait. Nothing particularly useful happens. They conclude either that the idea doesn't work, or that the idea definitely works and the world just isn't ready. Neither conclusion is especially well-evidenced.
The problem isn't the MVP. The problem is that nobody told them testing an MVP is a process, not an event. It requires deliberate structure, specific goals, real users who aren't related to you, and a clear understanding of what question you're actually trying to answer.
This article is that structure. It runs from what you're testing for, to how to find the people to test with, to the tools that make the results readable, all in the right order, with the right level of honesty about which parts are actually hard.
What Is the Purpose of Testing Your MVP?
Let's start here, because it's where most founders go slightly wrong.
The purpose of testing your MVP is not to find out if people like it.
People will tell you they like it. People are polite. Your early users will say encouraging things because encouraging things are what you say to someone who has clearly worked hard on something and is clearly nervous about it. This feedback feels good. It is not, in any reliable sense, data.
The purpose of testing your MVP is to answer one very specific question: will people actually pay for this?
Not "would you use this?" Not "do you think this is a good idea?" Not "would you recommend this to a friend?" Those are all proxies. The only question that validates a business is whether money changes hands — and testing your MVP means designing every experiment around getting closer to that moment.
Everything else — engagement metrics, qualitative feedback, feature requests, NPS scores — is useful context. But paying customers are proof. Everything else is encouragement.
Keep that hierarchy in mind as you build your testing plan.

Does an MVP Test the Product or the Business Model?
This question trips up more founders than it should, so let's answer it clearly: both, in sequence.
The product hypothesis and the business model hypothesis are different things, and conflating them produces muddled results.
The product hypothesis is: does this solve a real problem in a way that users find genuinely useful? This is about whether the thing works — whether users can navigate it, whether it does what it's supposed to do, whether it makes someone's life meaningfully better in the way you predicted.
The business model hypothesis is: will people pay for this, at a price that makes the business viable? This is about whether the thing is commercially valuable — whether the problem is painful enough that payment feels justified, whether the price point works, whether the acquisition cost allows for a sustainable margin.
You test the product hypothesis first. If the product doesn't work — if users are confused, frustrated, or indifferent — there's no point stress-testing the pricing. Fix the product. Then test the business model.
In practice, the cleanest sequence looks like this: early unstructured interviews → small-scale product testing with real users → payment validation (asking for money) → retention signal (do they keep using it?) → referral signal (do they tell someone else?). Each step answers a more commercially loaded question than the last.
The mistake is trying to answer all of them at once, with the same small cohort, reading everything as confirmation that you're right.
Before You Test Anything: Define What Success Looks Like
This is the unsexy bit that makes everything else work.
Before your MVP is in anyone's hands, write down — with actual numbers — what success looks like at each stage of testing. Not "lots of positive feedback." Not "good engagement." Numbers.
- "10 out of 20 beta users complete the core workflow within 10 minutes."
- "5 out of 20 beta users sign up for a paid plan without being asked to."
- "Average session length above 8 minutes in the first week."
The reason this matters: without pre-defined success criteria, you'll instinctively interpret ambiguous results as positive. This is not a character flaw — it's how human beings work. We're extraordinarily good at seeing what we want to see in data that doesn't clearly say anything. Pre-defined metrics remove that option.
Write them before you launch. Then read them after.
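If it helps to make the discipline concrete, here's a minimal sketch of those criteria as data you evaluate after the test. The shape, names, and numbers are illustrative (a spreadsheet does the same job); the point is that the target is set before launch and the measured value is only filled in afterwards.

```typescript
// Success criteria committed to before launch, evaluated after.
// Names and targets are illustrative; substitute your own.
interface SuccessCriterion {
  name: string;
  target: number;    // written down before launch
  measured?: number; // filled in after the test, never before
}

const criteria: SuccessCriterion[] = [
  { name: "Beta users (of 20) completing the core workflow within 10 minutes", target: 10 },
  { name: "Beta users (of 20) signing up for a paid plan unprompted", target: 5 },
  { name: "Average first-week session length in minutes", target: 8 },
];

function evaluate(all: SuccessCriterion[]): void {
  for (const c of all) {
    const verdict =
      c.measured === undefined ? "NOT MEASURED"
      : c.measured >= c.target ? "PASS"
      : "FAIL";
    console.log(`${verdict}: ${c.name} (target ${c.target}, measured ${c.measured ?? "n/a"})`);
  }
}

evaluate(criteria);
```

Notice there's no field for "how it felt". That's deliberate.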
How to Acquire MVP Test Customers
Friends, colleagues, and family are not test customers. They're fans. They're biased in your favour by virtue of knowing and caring about you. Their feedback is compromised in the same way that a restaurant critic's review is compromised when the chef is their sibling. Useful for morale. Not useful for conclusions.
Real MVP test customers are people who have the problem you're solving, who don't know you personally, and who have no emotional stake in whether your product succeeds.
Here's where to actually find them:
Communities where your problem lives.
Reddit, Facebook groups, Slack communities, Discord servers, LinkedIn groups — wherever your target user congregates to talk about the thing your product solves. Don't pitch. Contribute first. Understand the conversations that are already happening. Then, when you have something worth showing, ask directly: "I've built something that addresses this — would five people be willing to spend 30 minutes telling me whether it solves it?"
This framing works because you're asking for feedback, not asking for praise. People who care about a problem are almost always willing to give 30 minutes to someone genuinely trying to solve it.
LinkedIn outreach — specific, not broadcast.
A targeted message to 50 people who precisely match your ICP (ideal customer profile) beats a post to your general network by a significant margin. Be direct about what you've built, why you think it's relevant to them specifically, and what you're asking for: their time, not (yet) their money. Conversion rates on honest, specific outreach are meaningfully higher than most founders expect.
Your competitors' unhappy customers.
Reviews on G2, Trustpilot, Capterra, and similar platforms are a goldmine. Someone leaving a negative review of an existing solution in your space is, by definition, someone with your problem who isn't satisfied with the current answer. These are your most motivated potential test users. Find them. Talk to them.
Paid acquisition — small, targeted, intentional.
A focused £200-£500 ad campaign targeting a precisely defined audience isn't a marketing exercise at MVP stage — it's a testing tool. If you can't get clicks from people who match your ICP when you're directly describing the problem you solve, that's useful information. If you can, you've found a scalable acquisition channel to revisit later.
One rule above all others:
Aim for a minimum of 20 unaffiliated test users before drawing any conclusions. Below that number, your results are anecdote, not signal.
How to Test Whether MVP Customers Will Buy
The most common mistake in MVP testing is asking people if they would pay for something, rather than asking them to pay for it.
"Would you pay £X/month for this?" gets you a theoretical answer from someone who is probably trying to be helpful and definitely not thinking carefully about whether they'd actually spend money. The answer is almost always yes — people consistently overestimate their willingness to pay when there's no actual transaction involved.
The only reliable test of willingness to pay is a real transaction.
This doesn't have to mean a fully built product with a payment system. It means creating a moment where money changes hands — or explicitly doesn't — in a real context. Here's what that can look like:
The pre-order.
A landing page that describes your product and asks for a card number. Not to charge immediately, but to signal genuine intent. Dropbox famously validated demand with a short demo video long before the finished product shipped. People who enter payment details are categorically different from people who say "yes, I'd probably pay for that."
The fake door.
Put a pricing page on your MVP. Make it look real. When a user clicks "Buy" or "Upgrade," show them a message saying the feature isn't available yet and that you'll notify them when it is. Track who clicked, and count them. People who reached for their wallet, even on something that wasn't ready, are telling you something important.
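A fake door needs almost no code. Here's a hypothetical sketch in TypeScript; the track function stands in for whatever analytics call you actually use, and the button id and plan name are invented.

```typescript
// Fake-door handler: the button looks real, the purchase isn't.
// `track` is a placeholder for your analytics call (e.g. posthog.capture).
declare function track(event: string, props: Record<string, unknown>): void;

document.querySelector<HTMLButtonElement>("#upgrade-button")?.addEventListener("click", () => {
  // Record the intent signal before showing the "not yet" message.
  track("fake_door_upgrade_clicked", {
    plan: "pro", // illustrative plan name
    clickedAt: new Date().toISOString(),
  });
  alert("This plan isn't available yet. Leave your email and we'll tell you the moment it is.");
});
```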
Charge real money from day one.
Controversial, but often the right call. Many founders delay charging because they're afraid of rejection and frame that fear as "we need to refine it first." Charging early creates urgency, filters for serious users, and gives you a fundamentally different quality of feedback than free users provide. Someone paying £49/month has an entirely different relationship with your product than someone using it for free.
The manual first.
Deliver the service manually before you've automated it. Charge for it. If people pay for the manual version — where a human is doing by hand what the software will eventually do — you've validated the business model. Then you build the automation.
What you're looking for in all of these isn't a 100% conversion rate. You're looking for a signal — for some meaningful proportion of your target users to reach for their wallet without prompting. Ten paying customers, acquired without coercion, tells you more than a hundred free users who said nice things.
What Are the Tools Helpful for Reliable MVP Testing?
The good news: the toolkit for MVP testing is genuinely excellent and mostly inexpensive. The trap is using all of it and drowning in data that you don't have a framework to interpret.
Use the tools selectively, in service of specific questions. Here's a practical stack:
For understanding behaviour: Hotjar or Microsoft Clarity
Both show you session recordings — exactly what real users do when they're in your product. Where they click, where they hesitate, where they give up. Free at entry level. Genuinely one of the most valuable tools available to an early-stage founder because watching someone use your product for the first time is humbling in a way that no amount of quantitative data can replicate.
For measuring what matters: Mixpanel or PostHog
Where a basic tool like Google Analytics tells you how many people visited, Mixpanel and PostHog tell you what they did. Did they complete the core user journey? Where did they drop off? What did they do immediately before churning? PostHog is open-source and generous on the free tier. Both are significantly more useful than raw traffic numbers for understanding whether your MVP is working.
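As a sketch of what instrumenting the core journey looks like with PostHog's JavaScript library (the capture API is real; the event names and properties here are invented):

```typescript
import posthog from "posthog-js";

// Initialise once at app start; key and host come from your PostHog project settings.
posthog.init("phc_YOUR_PROJECT_KEY", { api_host: "https://us.i.posthog.com" });

// Instrument the steps of the core journey, not everything.
posthog.capture("signup_completed");
posthog.capture("core_workflow_started");
posthog.capture("core_workflow_completed", { durationSeconds: 412 });
```

A funnel built from those three events tells you exactly where people drop off, which is the question that matters.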
For talking to users: Calendly + a simple interview script
Structured user interviews remain the highest signal-to-noise feedback source available. Book 30-minute calls. Ask open questions. What were you hoping this would do? What surprised you? What did you expect to happen when you clicked X? Don't defend the product. Don't explain what they should have done. Just listen and take notes. Ten well-conducted user interviews will surface more actionable insight than a survey sent to 500 people.
For collecting structured feedback: Typeform or Tally
Post-signup and post-trial surveys, kept short (three to five questions maximum), with at least one open text field. The quantitative responses give you patterns. The open text fields give you language — the actual words your users use to describe their problem and your solution, which is worth its weight in gold for everything from positioning to sales copy.
For testing payment willingness: Stripe
Set up real payment flows from day one, even if you're offering a free trial. Stripe's analytics give you conversion data, trial-to-paid rates, and churn — which are the metrics that actually describe a business rather than a product.
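For illustration, a server-side sketch of a Stripe Checkout session for a subscription with a trial; the price ID and URLs are placeholders you'd replace with your own.

```typescript
import Stripe from "stripe";

const stripe = new Stripe(process.env.STRIPE_SECRET_KEY!);

// A free trial, but a real card on file from day one.
async function createCheckoutSession(customerEmail: string): Promise<string> {
  const session = await stripe.checkout.sessions.create({
    mode: "subscription",
    customer_email: customerEmail,
    line_items: [{ price: "price_XXXX", quantity: 1 }], // placeholder price ID
    subscription_data: { trial_period_days: 14 },
    success_url: "https://example.com/welcome",
    cancel_url: "https://example.com/pricing",
  });
  return session.url!; // redirect the user here
}
```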
For A/B testing: PostHog experiments (or just manual variation)
Google Optimize, the old default recommendation here, was retired in 2023; if you're already on PostHog, its built-in experimentation does the same job. Once you have enough traffic to make statistical significance achievable (usually above 500 monthly users), A/B testing different flows, headlines, or pricing structures gives you controlled experiments. Before that threshold, it's easier to just change something, observe for two weeks, and draw reasonable inferences.
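If you do run a manual split, the one detail worth getting right is that the same user always sees the same variant. A hypothetical sketch (the hash, helper names, and event are all illustrative):

```typescript
// Deterministic variant assignment: the same user always gets the same variant.
// A simple string hash is fine at MVP scale; this is not cryptographic.
function assignVariant(userId: string, variants: string[]): string {
  let hash = 0;
  for (const ch of userId) {
    hash = (hash * 31 + ch.charCodeAt(0)) | 0;
  }
  return variants[Math.abs(hash) % variants.length];
}

declare function track(event: string, props: Record<string, unknown>): void;

const variant = assignVariant("user_123", ["headline_a", "headline_b"]);
track("experiment_assigned", { experiment: "pricing_headline", variant });
```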
The most important tool of all, and the one nobody sells: a notebook where you write down your hypotheses before you look at the data, so you can't retroactively shape the conclusions.
The Testing Loop: How It Actually Works in Practice
MVP testing isn't a single event with a pass/fail outcome. It's a loop — and the faster you run it, the more useful information you accumulate.
The loop looks like this:
- Hypothesise → "We believe that [specific user type] will [specific behaviour] because [specific reason]."
- Measure → Define the metric that would confirm or refute the hypothesis. Launch to a small cohort. Collect clean data.
- Learn → What happened? Did the hypothesis hold? If not, where specifically did it break?
- Act → Either the hypothesis is confirmed and you double down, or it's refuted and you need to change something: the feature, the user type, the messaging, the price, the problem you're solving, or the fundamental model.
Then you run it again.
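One way to keep yourself honest is to store each pass through the loop as a record whose shape enforces the order: hypothesis and threshold first, result and decision after. A sketch with invented field names and numbers:

```typescript
// One pass through the loop. The threshold is set before launch;
// the result and decision are only filled in afterwards.
interface Experiment {
  hypothesis: string; // "We believe [user type] will [behaviour] because [reason]"
  metric: string;     // what would confirm or refute it
  threshold: number;  // decided before the cohort runs
  result?: number;
  decision?: "double_down" | "change_product" | "change_audience" | "change_price" | "change_model";
}

const sprint12: Experiment = {
  hypothesis: "We believe freelance designers will pay because chasing invoices is their most-hated task",
  metric: "trial-to-paid conversion",
  threshold: 0.1,
  result: 0.04,
  decision: "change_price", // refuted at this price; run the loop again
};
```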
The founders who get the most from MVP testing aren't the ones who collect the most data. They're the ones who ask the sharpest questions, test them cleanly, and update their views quickly when the evidence says they should.
Most importantly: a failed hypothesis is not a failed product. It's information. The startup graveyard is full of companies that refused to update their views when the data was clearly telling them to. The companies that survive are the ones that treat being wrong as a useful result rather than a verdict.
What Good Results Look Like (And What to Do When You Don't Get Them)
A successful MVP test doesn't mean everyone loved it. It means you got a clear answer to a specific question.
Positive signals worth acting on:
- Users completing the core workflow without needing help.
- Unprompted return visits.
- Users asking when feature X is coming (a sign they're invested in the product's future).
- Payment conversion above a meaningful threshold.
- Referrals: someone telling someone else about your product without being asked.
Signals that something needs to change:
- High drop-off at a specific step (a product problem).
- Low conversion from trial to paid despite positive qualitative feedback (a pricing or positioning problem).
- High initial sign-up but low return visits (the problem isn't painful enough, or the solution doesn't solve it well enough).
- The one that's hard to hear: users who are polite but never come back.
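To make high drop-off at a specific step concrete: if you've instrumented the core journey, the diagnosis is a per-step conversion table. A sketch with invented event names and numbers:

```typescript
// Per-step conversion from raw event counts (illustrative numbers).
// The step with the worst rate is where to point your session recordings.
const funnel: Array<[step: string, users: number]> = [
  ["visited_landing_page", 1000],
  ["signup_completed", 180],
  ["core_workflow_started", 120],
  ["core_workflow_completed", 30], // only 25% of starters finish: look here first
];

for (let i = 1; i < funnel.length; i++) {
  const [step, users] = funnel[i];
  const [, previous] = funnel[i - 1];
  console.log(`${step}: ${((users / previous) * 100).toFixed(0)}% of previous step`);
}
```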
If you're getting weak signals, resist the temptation to explain them away. The most useful question is always: what would have to be true for this result to make sense? Follow that question honestly, and it will usually point you at the real problem.
A Note From Us
At Octogle, we build MVPs — and we care about whether they work after we hand them over.
That means we don't just build what you brief us. We challenge the scope, ask what hypothesis you're testing, and make sure what gets built is designed to produce signal rather than noise. We've built enough products to know that the most expensive outcome isn't a costly build. It's a build that doesn't generate the information you needed to move forward.
If you're at the stage of planning an MVP and thinking about how to test it — or if you've got an MVP and the testing isn't giving you useful answers — come and talk to us. We're useful in both conversations.




