Opinions are cheap. Data is expensive but worth it. Build an experimentation system that produces valid results and compounds your learnings.

Most B2B companies either test nothing or test everything. The first group makes changes based on gut feeling and wonders why results are inconsistent. The second group runs dozens of experiments without a clear thesis and learns nothing useful.
Effective experimentation sits in the middle. You form a hypothesis based on data and customer insight, you test it properly, and you document what you learn so the knowledge compounds over time.
I've been running A/B tests since 2012, starting with Optimizely and Google Optimize, now primarily using VWO. The tools have changed, but the fundamentals haven't. A good experiment starts with noticing something odd in the data, moves to customer research to understand why, and ends with testing a specific change to see whether your theory holds.
This playbook covers how to build a systematic experimentation practice: collecting and prioritising ideas, writing hypotheses that are actually testable, setting up experiments that produce valid results, and documenting learnings so they don't disappear into a spreadsheet nobody reads.
The goal isn't to run more tests. It's to run better tests that teach you something regardless of whether they win or lose.
Random testing wastes time and teaches you nothing. Learn how to collect experiment ideas systematically and prioritise them based on potential impact so you always know what to run next.
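To make "prioritise on potential impact" concrete, here is a minimal sketch of an ICE-style backlog score (impact, confidence, ease). The field names, equal weighting, and example ideas are illustrative assumptions, not a prescribed model.

```python
from dataclasses import dataclass, field

@dataclass
class ExperimentIdea:
    """One entry in the experiment backlog (illustrative fields)."""
    name: str
    impact: int      # 1-10: expected effect on the target metric if it wins
    confidence: int  # 1-10: how much evidence (data, research) supports it
    ease: int        # 1-10: how cheap it is to build and run
    ice_score: float = field(init=False)

    def __post_init__(self):
        # Simple average; some teams weight impact more heavily.
        self.ice_score = (self.impact + self.confidence + self.ease) / 3

backlog = [
    ExperimentIdea("Shorten demo request form", impact=7, confidence=6, ease=9),
    ExperimentIdea("Rewrite pricing page headline", impact=8, confidence=4, ease=8),
    ExperimentIdea("Add case-study proof to paid landing pages", impact=6, confidence=7, ease=5),
]

# Highest score first: this is what you run next.
for idea in sorted(backlog, key=lambda i: i.ice_score, reverse=True):
    print(f"{idea.ice_score:.1f}  {idea.name}")
```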
Most experiments fail before they start because the hypothesis is vague or untestable. Learn how to write hypotheses that are specific enough to prove or disprove and tied to metrics that matter.
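One way to force that specificity is to capture every hypothesis in a fixed structure before the test is built. This is a sketch under assumed field names; adapt the template to whatever your team actually tracks.

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    """Forces a hypothesis to be specific enough to prove or disprove."""
    observation: str      # what the data or customer research showed
    change: str           # the single thing you will change
    metric: str           # the primary metric the change should move
    expected_lift: float  # minimum relative lift worth acting on, e.g. 0.10 = +10%
    rationale: str        # why you believe the change moves the metric

    def statement(self) -> str:
        return (
            f"Because {self.observation}, we believe that {self.change} "
            f"will increase {self.metric} by at least {self.expected_lift:.0%}, "
            f"because {self.rationale}."
        )

h = Hypothesis(
    observation="60% of trial sign-ups abandon the form on the company-size step",
    change="removing the company-size field",
    metric="trial sign-up completion rate",
    expected_lift=0.10,
    rationale="session recordings show users hesitating on that field",
)
print(h.statement())
```

If a hypothesis can't be written in this shape, it usually isn't testable yet.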
A winning test means nothing if the setup was flawed. Learn how to configure experiments properly in VWO, ad platforms, and email tools so your results are actually valid.
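Whatever tool you run the test in, one check worth automating is confirming that traffic actually split the way you configured it. The sketch below runs a chi-square sample ratio mismatch (SRM) check on assumed visitor counts; a tiny p-value here means the setup, not the variant, is driving the numbers.

```python
# Sample ratio mismatch (SRM) check for an intended 50/50 split.
# If the p-value is very small, the traffic split itself is broken
# and the experiment results cannot be trusted.
from statistics import NormalDist

visitors_control = 10_480   # illustrative counts
visitors_variant = 9_920
total = visitors_control + visitors_variant
expected = total / 2        # intended 50/50 split

# Chi-square statistic with 1 degree of freedom.
chi_sq = ((visitors_control - expected) ** 2 / expected
          + (visitors_variant - expected) ** 2 / expected)

# With 1 df, chi-square is the square of a standard normal,
# so the p-value follows from the normal CDF.
z = chi_sq ** 0.5
p_value = 2 * (1 - NormalDist().cdf(z))

print(f"chi-square = {chi_sq:.2f}, p = {p_value:.5f}")
if p_value < 0.001:
    print("Sample ratio mismatch: fix the setup before reading results.")
```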
Statistical significance is just the beginning. Learn how to interpret results correctly, avoid false positives, and turn winning experiments into permanent improvements across your growth engines.
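As a concrete illustration of "just the beginning", here is a minimal two-proportion z-test sketch with made-up numbers. The p-value it produces only controls false positives for a single pre-planned comparison; peeking early or slicing by many segments inflates the error rate well beyond the stated 5%.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)           # pooled rate under the null
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_a, p_b, z, p_value

# Illustrative numbers: 4.0% vs 4.6% conversion on 12,000 visitors per arm.
p_a, p_b, z, p = two_proportion_z_test(conv_a=480, n_a=12_000, conv_b=552, n_b=12_000)
print(f"control {p_a:.2%}, variant {p_b:.2%}, z = {z:.2f}, p = {p:.3f}")
```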
Eric Ries, The Lean Startup

A disciplined approach to experiments. Define hypotheses, design MVPs and learn before you scale.
Sean Ellis, Hacking Growth

A practical framework for experiments and insights. Build loops, run tests and adopt a cadence that ships learning every week.
Sample size: Calculate how many users you need in experiments to detect meaningful differences and avoid declaring winners prematurely based on insufficient data (a worked sketch follows these definitions).
P-value: Interpret experiment results to understand the probability that observed differences occurred by chance rather than because your changes actually work.
Minimum viable experiment: Design experiments that answer specific questions with minimum time and resources to maximise learning velocity without over-investing in unproven ideas.
Hypothesis: Structure experiments around clear predictions to focus efforts on learning rather than random changes and make results easier to interpret afterward.
Control group: Maintain an unchanged version in experiments to isolate the impact of your changes and prove causation rather than correlation with external factors.
Statistical significance: Determine whether experiment results reflect real differences or random chance to avoid making expensive decisions based on noise instead of signal.
A/B testing: Compare two versions of a page, email, or feature to determine which performs better using statistical methods that isolate the impact of specific changes.
Growth hacking: Deploy fast, low-cost experiments to discover scalable acquisition and retention tactics, learning through iteration rather than big bets.
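Here is the sample-size sketch referenced above, using the standard two-proportion formula. The baseline rate, minimum detectable lift, significance level, and power are inputs you choose; the numbers shown are assumptions for illustration.

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_variant(baseline_rate, min_detectable_lift,
                            alpha=0.05, power=0.80):
    """Visitors needed in EACH variant to detect a relative lift
    at the given significance level and power (two-sided test)."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + min_detectable_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # about 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)            # about 0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2
    return ceil(n)

# Example: a 3% baseline conversion rate and a +20% relative lift
# (3.0% -> 3.6%) needs roughly 14,000 visitors per variant.
print(sample_size_per_variant(baseline_rate=0.03, min_detectable_lift=0.20))
```

Run the calculation before launch, not after; deciding the required sample up front is what stops you from calling a winner the first time the dashboard looks good.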
The cockpit that sits above your four growth engines. Individual teams can excel at their own metrics, but without orchestration they're musicians playing different songs. This is where everything comes together and where improvements in one engine amplify gains in another.


Pipeline doesn't fill itself. These tools help you identify who to target, reach them at scale, and create content that earns attention in crowded markets.

The wrong tools create friction. The right ones multiply your output without adding complexity. These are the tools I recommend for growth teams that move fast.

Deals slip through cracks when your sales stack doesn't work together. These tools keep your pipeline visible, your follow-ups timely, and your process tight.

Acquiring customers is expensive. These tools help you keep them longer and grow their accounts so your acquisition costs actually pay off over time.

Traffic means nothing if it doesn't convert. These tools help you capture leads, nurture them automatically, and understand what's actually working in your funnel.
HubSpot is powerful when configured properly and a mess when it's not. Set up your instance correctly from the start so your data stays clean and your team trusts the system.