Structure experiments around clear predictions to focus efforts on learning rather than random changes and make results easier to interpret afterward.
Hypothesis testing is a structured approach to experimenting with changes to your marketing, sales, or product, using data rather than intuition to validate whether changes actually drive desired outcomes. Rather than implementing changes broadly and assuming they work, hypothesis testing runs controlled experiments: change one element, measure the impact, and decide whether to keep or discard the change based on data.
Hypothesis testing brings scientific rigour to growth decisions. Teams often implement changes based on hunches: 'This headline will perform better,' 'This messaging will resonate more.' Without testing, hunches frequently prove wrong. Hypothesis testing replaces hunches with data, improving decision accuracy and preventing costly mistakes.
B2B hypothesis testing often requires larger sample sizes because conversion volumes are lower. A B2B email test might need 500 test recipients to generate statistically significant results; the same test in B2C might need 50. Plan test scope accordingly.
Hypothesis testing prevents expensive mistakes. Implementing untested changes across all customers risks poor outcomes. Testing first allows you to confirm changes drive desired results before full implementation. This prevents launching ineffective messaging, pricing, or features to the entire customer base.
Hypothesis testing compounds learning over time. Each test provides insights into what resonates with your customers. Accumulated test results reveal patterns: which headlines work, which offers convert, which features drive engagement. These patterns guide future decisions with increased confidence.
Hypothesis testing improves team efficiency. Rather than debating whether a change will work, teams run tests and let the data decide. This reduces bike-shedding, accelerates decision-making, and builds confidence in decisions: they're based on data, not politics or the loudest opinion.
Start with clear hypotheses. Rather than a vague 'test if this email performs better', write: 'We believe that subject lines addressing specific ROI metrics will increase open rates by 5% because our target audience evaluates on financial impact.' Specific hypotheses make success criteria clear.
Change one variable per test. Testing multiple changes simultaneously makes it impossible to identify which change drove results. If you test both subject line and send time, and performance improves, which element caused it? Single-variable tests provide clear attribution.
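Keeping a test to one variable also means assignments must be stable: a user who lands in the treatment group should stay there for the whole test, or you quietly introduce a second variable. A minimal sketch of deterministic bucketing (the function name and test label are illustrative, not from any particular tool):

```python
import hashlib

def assign_variant(user_id: str, test_name: str,
                   variants=("control", "treatment")) -> str:
    """Deterministically bucket a user into one variant of a single test.

    Hashing user_id together with the test name keeps assignments stable
    across sessions, while different tests bucket independently.
    """
    digest = hashlib.sha256(f"{test_name}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# Every call with the same inputs returns the same variant,
# so a user never flips between subject lines mid-test.
variant = assign_variant("user-42", "subject-line-roi")
```

Because the hash includes the test name, running a second test on the same audience doesn't correlate its groups with the first test's groups.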
Define sample size and duration before starting tests. Decide how many test recipients you need and how long you'll run tests before analysing results. This prevents temptation to stop tests early when results look positive (which often leads to false positives).
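The per-group sample size can be estimated up front from your baseline rate and the smallest lift you care about detecting. A sketch using the standard two-proportion formula, assuming a two-sided alpha of 0.05 and 80% power (the z-constants are hardcoded for those defaults):

```python
import math

def required_sample_size(p_base: float, p_target: float) -> int:
    """Per-group sample size for a two-proportion test.

    Standard formula:
    n = (z_a * sqrt(2*pbar*(1-pbar)) + z_b * sqrt(p1*q1 + p2*q2))^2 / (p1 - p2)^2
    """
    z_alpha, z_beta = 1.96, 0.84  # alpha = 0.05 (two-sided), power = 0.80
    p_bar = (p_base + p_target) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p_base * (1 - p_base)
                                      + p_target * (1 - p_target))) ** 2
    return math.ceil(numerator / (p_base - p_target) ** 2)

# e.g. detecting a lift from an 18% to a 24% open rate:
n = required_sample_size(0.18, 0.24)
```

Committing to a number like this before launch is what removes the temptation to peek and stop early.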
Analyse statistical significance, not just the percentage difference. A 5% improvement might be meaningful or noise depending on sample size. Tools like A/B test calculators show whether improvements are statistically significant. Only implement changes where improvements are significant at the 95% confidence level.
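If you'd rather not rely on an online calculator, the check behind most of them is a two-proportion z-test, which needs only the standard library. A sketch (the conversion counts in the example are hypothetical):

```python
import math

def two_proportion_pvalue(success_a: int, n_a: int,
                          success_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)  # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# e.g. 40 conversions out of 1,000 vs 58 out of 1,000:
p = two_proportion_pvalue(40, 1000, 58, 1000)
significant = p < 0.05  # only implement the change if this holds
```

Note that in this example a 45% relative lift still fails the 95% threshold: the percentage difference alone tells you little without the sample size behind it.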
A SaaS company hypothesised that new users failed to complete onboarding because they didn't understand feature value. They tested two versions: (1) standard onboarding (feature walkthrough), (2) value-focused onboarding (showing three use cases, then walking through the corresponding features). Both groups were tracked over 30 days. Users in the value-focused group reached full product engagement at a 45% rate, versus 28% for the standard group. This 17-point improvement was statistically significant. The company rolled out value-focused onboarding to all new users, improving overall user activation from 28% to 42%.
An enterprise software company tested email subject lines. Hypothesis: addressing cost reduction (the ROI angle) would outperform process improvement (the efficiency angle) because CFOs, the key decision-makers, prioritise cost. Test group 1 received emails with cost-reduction messaging; test group 2 (the control) received standard messaging. Sample size: 5,000 per group. Run duration: 7 days. Cost-reduction messaging improved the open rate from 18% to 24% and the click rate from 4% to 6.2%. Both improvements were statistically significant. The company shifted all sales follow-up emails to emphasise cost reduction, improving conversion rates downstream.
A B2B agency tested landing page layouts. Hypothesis: Placing customer logos prominently above the fold would increase form submissions by 10% because social proof reduces evaluation anxiety. They split traffic 50/50: version A (logos below fold), version B (logos above fold, prominent). 2000 visitors per version, one-week duration. Form submission rate: version A (3.2%), version B (5.1%). This 1.9-point improvement (59% increase) was statistically significant. The agency updated all landing pages to feature customer logos prominently, systematically improving conversion rates across campaigns.