Design experiments that answer specific questions with minimum time and resources to maximise learning velocity without over-investing in unproven ideas.
A minimum viable test is a lean approach to validating an idea or hypothesis before investing significant resources. Rather than building the full solution, executing the complete plan, or waiting for perfect conditions, you test your core assumption in the simplest possible way. A minimum viable test answers the question "will this work?" with minimal investment, allowing you to learn before deciding to scale.
Minimum viable tests are distinct from pilots or MVPs. A pilot is a full-scale test of a complete solution. An MVP is a functional product with core features. A minimum viable test is the absolute minimal version required to validate a single assumption. It might be as simple as a survey question, a landing page, or a limited manual process.
The power of minimum viable tests comes from their speed and low cost. A test that costs 1,000 pounds and takes one week teaches you far more than spending three months building the full solution, only to discover the assumption was wrong.
Minimum viable tests reduce the cost of learning in B2B growth. Most teams discover what doesn't work through expensive failures: building features customers don't want, investing in marketing channels that don't convert, pursuing customer segments that can't sustain the business. Minimum viable tests surface these truths early when correcting course is cheap.
Tests also build organisational learning discipline. Rather than debating whether an idea will work, you run a test. Rather than relying on opinions, you rely on data. This shifts decision-making from opinion-based to evidence-based, which consistently leads to better decisions.
For a team with limited resources, minimum viable testing is essential. You can't afford to build and launch every idea to full scale. Testing allows you to prioritise which ideas are actually worth building. You can test 10 ideas cheaply, identify the 2-3 most promising ones, and invest in those.
Start with your core assumption. What do you believe will happen if you execute this idea? Write that assumption down in a single sentence. "If we create a webinar about X, 15% of registered attendees will become marketing-qualified leads." That's your hypothesis.
Design the minimum test that would validate or disprove that assumption. Don't build more than you need. If you're testing whether customers want a feature, ask them in a survey rather than building it. If you're testing a new messaging angle, test it with email to a small segment rather than launching a full campaign. If you're testing a new pricing model, offer it to 5 customers manually before building billing infrastructure.
Run the test with a clear decision rule. Before testing, decide what result constitutes success. If 15% of webinar registrants should become MQLs and you only achieve 8%, is that enough to move forward or should you change the approach? Decide this threshold before running the test so results don't bias interpretation.
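The pre-committed decision rule above can be expressed as a small check you write before the test runs. A minimal sketch in Python, using the 15% MQL threshold from the earlier hypothesis as an illustrative number and a normal approximation for the observed proportion (the function name and the three-way outcome are assumptions for illustration, not a standard method):

```python
import math

def evaluate_test(successes, trials, threshold, confidence_z=1.64):
    """Apply a pre-committed decision rule to a minimum viable test.

    successes / trials: observed outcome (e.g. MQLs / webinar registrants).
    threshold: the success rate you committed to before running the test.
    confidence_z: one-sided z value (1.64 is roughly 95%) used to check
    whether the observed rate is credibly above the threshold, not just
    above it by chance.
    """
    rate = successes / trials
    # Standard error of a proportion under the normal approximation.
    se = math.sqrt(rate * (1 - rate) / trials)
    lower_bound = rate - confidence_z * se
    if lower_bound >= threshold:
        return "scale"        # clear signal: invest further
    if rate >= threshold:
        return "extend test"  # promising but noisy: gather more data
    return "revise"           # below the bar: change the approach

# Hypothesis: 15% of webinar registrants become MQLs.
# Observed: 16 MQLs from 200 registrants (8%), the shortfall case above.
print(evaluate_test(successes=16, trials=200, threshold=0.15))  # revise
```

Writing the rule down as code (or even just a sentence) before launch is the point: once results are in, it is tempting to move the goalposts, and a pre-committed threshold removes that option.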
A software company wanted to add a new collaboration feature they believed customers needed. Rather than building it over two months, they surveyed 30 current customers asking about the feature concept and how much they'd pay for it. Only 7 customers showed strong interest. The company revised the concept based on feedback, re-surveyed, and found stronger interest. This iteration through testing took two weeks and cost nearly nothing compared to building a full feature that might have had low adoption.
A consulting firm was considering launching a new service line. Before investing in hiring and infrastructure, they created a landing page describing the service with a call-to-action to request more information. They drove traffic through organic search and paid ads. Within two weeks, they had 50 inquiries. This validated that market demand existed before they committed to building the service. The landing page test cost less than 5,000 pounds and provided clear evidence of demand.
A SaaS company wanted to test a new enterprise tier at a higher price point. Rather than overhauling their entire pricing, they manually offered the new tier to 5 existing customers, explaining the expanded capabilities. Three customers accepted the new pricing. This manual test validated the concept with minimal risk. Once confidence increased through additional manual tests, they built the tier into their product.
Recommended reading:
Jason Fried: Short essays that challenge default habits. Focus on product, talk to customers and cut pretend work.
Eric Ries: A disciplined approach to experiments. Define hypotheses, design MVPs and learn before you scale.