Design experiments that answer specific questions with minimum time and resources to maximise learning velocity without over-investing in unproven ideas.
A minimum viable test is a lean approach to validating an idea or hypothesis before investing significant resources. Rather than building fully, executing completely, or waiting for perfect conditions, you test your core assumption in the simplest possible way. A minimum viable test answers the question "will this work?" with minimal investment, allowing you to learn before deciding to scale.
Minimum viable tests are distinct from pilots or MVPs. A pilot is a full-scale test of a complete solution. An MVP is a functional product with core features. A minimum viable test is the absolute minimal version required to validate a single assumption. It might be as simple as a survey question, a landing page, or a limited manual process.
The power of minimum viable tests comes from speed and cost. A test that costs 1,000 pounds and takes one week teaches you more, sooner, than spending three months building the full solution only to discover the assumption was wrong.
Minimum viable tests reduce the cost of learning in B2B growth. Most teams discover what doesn't work through expensive failures: building features customers don't want, investing in marketing channels that don't convert, pursuing customer segments that can't sustain the business. Minimum viable tests surface these truths early when correcting course is cheap.
Tests also build organisational learning discipline. Rather than debating whether an idea will work, you run a test. Rather than relying on opinions, you rely on data. This shifts decision-making from opinion-based to evidence-based, which consistently leads to better decisions.
For a team with limited resources, minimum viable testing is essential. You can't afford to build and launch every idea to full scale. Testing allows you to prioritise which ideas are actually worth building. You can test 10 ideas cheaply, identify the 2-3 most promising ones, and invest in those.
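Ranking 10 ideas down to the 2-3 worth testing needs some scoring mechanism. One common convention (not the only one) is ICE scoring: Impact times Confidence divided by Effort. The sketch below uses hypothetical ideas and made-up 1-10 scores purely to illustrate the ranking step:

```python
# A minimal ICE-style ranking sketch. Ideas and scores are hypothetical;
# ICE (Impact x Confidence / Effort) is one common convention, not a
# standard this article prescribes.
ideas = [
    {"name": "webinar funnel",      "impact": 7, "confidence": 5, "effort": 3},
    {"name": "pricing tier survey", "impact": 8, "confidence": 6, "effort": 2},
    {"name": "new landing page",    "impact": 5, "confidence": 7, "effort": 4},
]

# Score each idea: high impact and confidence push it up, effort pulls it down.
for idea in ideas:
    idea["score"] = idea["impact"] * idea["confidence"] / idea["effort"]

# Highest score first: this is the order in which to run tests.
ranked = sorted(ideas, key=lambda i: i["score"], reverse=True)
for idea in ranked:
    print(f'{idea["name"]}: {idea["score"]:.1f}')
```

The exact weights matter less than the discipline: every idea gets scored the same way before any of them gets built.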
Start with your core assumption. What do you believe will happen if you execute this idea? Write that assumption down in a single sentence. "If we create a webinar about X, 15% of registered attendees will become marketing-qualified leads." That's your hypothesis.
Design the minimum test that would validate or disprove that assumption. Don't build more than you need. If you're testing whether customers want a feature, ask them in a survey rather than building it. If you're testing a new messaging angle, test it with email to a small segment rather than launching a full campaign. If you're testing a new pricing model, offer it to 5 customers manually before building billing infrastructure.
Run the test with a clear decision rule. Before testing, decide what result constitutes success. If 15% of webinar registrants should become MQLs and you only achieve 8%, is that enough to move forward or should you change the approach? Decide this threshold before running the test so results don't bias interpretation.
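The decision rule can be made mechanical so small samples don't get over-interpreted. The sketch below uses the 15% webinar threshold from earlier with hypothetical registrant numbers, and a simple normal-approximation confidence interval (an assumption, not something the article specifies) to separate clear outcomes from inconclusive ones:

```python
import math

def evaluate_test(conversions: int, trials: int, threshold: float, z: float = 1.96) -> str:
    """Compare an observed conversion rate against a pre-committed success
    threshold, using a normal-approximation confidence interval so a small
    sample isn't mistaken for a clear verdict."""
    rate = conversions / trials
    margin = z * math.sqrt(rate * (1 - rate) / trials)
    low, high = rate - margin, rate + margin
    if low >= threshold:
        return "scale"                # even the pessimistic estimate clears the bar
    if high < threshold:
        return "revise"               # even the optimistic estimate falls short
    return "run a larger test"        # inconclusive at this sample size

# Hypothetical result: 16 MQLs from 200 registrants (8%) against a 15% rule.
print(evaluate_test(conversions=16, trials=200, threshold=0.15))  # → revise
```

Committing to the threshold and the rule before the test runs is what keeps an 8% result from being rationalised into a success afterwards.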
A software company wanted to add a new collaboration feature they believed customers needed. Rather than building it over two months, they surveyed 30 current customers asking about the feature concept and how much they'd pay for it. Only 7 customers showed strong interest. The company revised the concept based on feedback, re-surveyed, and found stronger interest. This iteration through testing took two weeks and cost nearly nothing compared to building a full feature that might have had low adoption.
A consulting firm was considering launching a new service line. Before investing in hiring and infrastructure, they created a landing page describing the service with a call-to-action to request more information. They drove traffic through organic search and paid ads. Within two weeks, they had 50 inquiries. This validated that market demand existed before they committed to building the service. The landing page test cost less than 5,000 pounds and provided clear evidence of demand.
A SaaS company wanted to test a new enterprise tier at a higher price point. Rather than overhauling their entire pricing, they manually offered the new tier to 5 existing customers, explaining the expanded capabilities. Three customers accepted the new pricing. This manual test validated the concept with minimal risk. Once confidence increased through additional manual tests, they built the tier into their product.
Jason Fried
Short essays that challenge default habits. Focus on product, talk to customers and cut pretend work.
Eric Ries
A disciplined approach to experiments. Define hypotheses, design MVPs and learn before you scale.