Minimum viable test

Design experiments that answer specific questions with minimum time and resources to maximise learning velocity without over-investing in unproven ideas.


Introduction

A minimum viable test is a lean approach to validating an idea or hypothesis before investing significant resources. Rather than building the full solution, running it at full scale, or waiting for perfect conditions, you test your core assumption in the simplest possible way. A minimum viable test answers the question "will this work?" with minimal investment, allowing you to learn before deciding to scale.

Minimum viable tests are distinct from pilots or MVPs. A pilot is a full-scale test of a complete solution. An MVP is a functional product with core features. A minimum viable test is the absolute minimal version required to validate a single assumption. It might be as simple as a survey question, a landing page, or a limited manual process.

Examples of minimum viable tests

  • Survey questions: ask 20 customers if they'd buy a feature before building it
  • Landing pages: create a page describing a new solution and measure interest before developing it
  • Manual processes: execute a solution manually with 10 customers before automating
  • Email tests: send an email to a small list to test messaging effectiveness before rolling it out to your full list
  • Pricing tests: offer a new pricing tier to a small customer segment to validate willingness to pay

The power of minimum viable tests comes from their speed and low cost. A test that costs 1,000 pounds and takes one week teaches you far more than waiting three months to build the full solution, only to discover the assumption was wrong.

Why it matters

Minimum viable tests reduce the cost of learning in B2B growth. Most teams discover what doesn't work through expensive failures: building features customers don't want, investing in marketing channels that don't convert, pursuing customer segments that can't sustain the business. Minimum viable tests surface these truths early when correcting course is cheap.

Tests also build organisational learning discipline. Rather than debating whether an idea will work, you run a test. Rather than relying on opinions, you rely on data. This shifts decision-making from opinion-based to evidence-based, which consistently leads to better decisions.

For a team with limited resources, minimum viable testing is essential. You can't afford to build and launch every idea to full scale. Testing allows you to prioritise which ideas are actually worth building. You can test 10 ideas cheaply, identify the 2-3 most promising ones, and invest in those.

How to apply it

Start with your core assumption. What do you believe will happen if you execute this idea? Write that assumption down in a single sentence, for example: "If we create a webinar about X, 15% of registered attendees will become marketing-qualified leads." That's your hypothesis.
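One optional way to keep hypotheses honest is to write each one down as structured data before the test runs. The Python sketch below is only an illustration of that discipline, not a prescribed tool; the field names and the sample size are assumptions added here to mirror the webinar example.

    from dataclasses import dataclass

    @dataclass
    class Hypothesis:
        """One testable assumption, written down before the test runs."""
        statement: str     # the single-sentence assumption
        metric: str        # what you will measure
        target: float      # the success threshold, decided up front
        sample_size: int   # how many people the test will reach

    # The webinar example from the text, expressed as data (values are illustrative).
    webinar = Hypothesis(
        statement="If we create a webinar about X, 15% of registered attendees become MQLs.",
        metric="registrant-to-MQL conversion rate",
        target=0.15,
        sample_size=200,
    )

Recording the target and sample size alongside the statement makes it harder to move the goalposts once results arrive.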

Design the minimum test that would validate or disprove that assumption. Don't build more than you need. If you're testing whether customers want a feature, ask them in a survey rather than building it. If you're testing a new messaging angle, test it with email to a small segment rather than launching a full campaign. If you're testing a new pricing model, offer it to 5 customers manually before building billing infrastructure.

Run the test with a clear decision rule. Before testing, decide what result constitutes success. If 15% of webinar registrants should become MQLs and you only achieve 8%, is that enough to move forward, or should you change the approach? Set this threshold before running the test so the results can't bias your interpretation.
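To make the decision rule concrete, here is a minimal Python sketch that compares an observed conversion rate against a pre-registered threshold, with a rough confidence interval so small samples read as inconclusive rather than decisive. The conversion and sample figures are illustrative, not taken from the case studies below.

    import math

    def evaluate_test(conversions: int, sample_size: int, target_rate: float) -> str:
        """Apply a pre-registered decision rule to a finished test."""
        observed = conversions / sample_size
        # Rough 95% confidence interval via a normal approximation;
        # with very small samples, treat the verdict as directional only.
        margin = 1.96 * math.sqrt(observed * (1 - observed) / sample_size)
        low, high = observed - margin, observed + margin

        if low >= target_rate:
            return f"Success: {observed:.1%} clears the {target_rate:.0%} threshold."
        if high < target_rate:
            return f"Below threshold: {observed:.1%} against {target_rate:.0%}; revise the approach."
        return f"Inconclusive: {observed:.1%} straddles {target_rate:.0%}; extend or rerun the test."

    # Illustrative numbers only: 16 MQLs from 200 webinar registrants, 15% threshold.
    print(evaluate_test(conversions=16, sample_size=200, target_rate=0.15))

A rule written down like this settles the "is that close enough?" debate before the numbers are in, rather than after.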

Feature validation through survey

A software company wanted to add a new collaboration feature they believed customers needed. Rather than building it over two months, they surveyed 30 current customers asking about the feature concept and how much they'd pay for it. Only 7 customers showed strong interest. The company revised the concept based on feedback, re-surveyed, and found stronger interest. This iteration through testing took two weeks and cost nearly nothing compared to building a full feature that might have had low adoption.

Market validation through landing page

A consulting firm was considering launching a new service line. Before investing in hiring and infrastructure, they created a landing page describing the service with a call-to-action to request more information. They drove traffic through organic search and paid ads. Within two weeks, they had 50 inquiries. This validated that market demand existed before they committed to building the service. The landing page test cost less than 5,000 pounds and provided clear evidence of demand.

Pricing hypothesis validation

A SaaS company wanted to test a new enterprise tier at a higher price point. Rather than overhauling their entire pricing, they manually offered the new tier to 5 existing customers, explaining the expanded capabilities. Three customers accepted the new pricing. This manual test validated the concept with minimal risk. Once confidence increased through additional manual tests, they built the tier into their product.


