How to decide what to test

Build an experiment backlog. Score by impact and effort. Align tests with your biggest bottleneck to avoid random testing.

Introduction

Most teams run experiments based on whoever shouts loudest. The result? Random tests that don't address real bottlenecks. A proper backlog captures every idea, scores it objectively, and ensures you're always testing what matters most. This chapter shows you how to build a backlog, prioritise experiments systematically, and align testing with your growth strategy so every experiment moves you closer to your goal.
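
A minimal sketch of what objective scoring can look like, in Python, assuming an ICE-style rubric (Impact, Confidence, Ease, each rated 1-10). The experiment names and ratings below are illustrative, not recommendations:

```python
from dataclasses import dataclass

@dataclass
class Experiment:
    name: str
    impact: int      # expected effect on the target metric, 1-10
    confidence: int  # strength of the supporting evidence, 1-10
    ease: int        # inverse of the effort to build and run, 1-10

    @property
    def ice_score(self) -> float:
        # Average the three ratings; a higher score means test sooner.
        return (self.impact + self.confidence + self.ease) / 3

# Hypothetical backlog entries for illustration only.
backlog = [
    Experiment("Shorten signup form", impact=8, confidence=6, ease=9),
    Experiment("Redesign pricing page", impact=9, confidence=4, ease=3),
    Experiment("Add testimonial strip", impact=4, confidence=5, ease=8),
]

# Rank the backlog so the highest-scoring experiment runs first.
for exp in sorted(backlog, key=lambda e: e.ice_score, reverse=True):
    print(f"{exp.ice_score:.1f}  {exp.name}")
```

Whether you average or multiply the ratings matters less than applying the same rule to every idea: the ranking, not the loudest voice, decides what runs next.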

Capture all experiment ideas in a backlog

Score experiments on impact versus effort

Align tests with your biggest bottleneck

Build a quarterly experiment roadmap

Conclusion

Relevant tools

VWO

VWO provides A/B testing, personalisation, and behaviour analytics to optimise website conversion rates through data-driven experimentation.

Hotjar

Hotjar captures user behaviour through heatmaps, session recordings, and feedback polls to understand how visitors use your website.

Microsoft Clarity

Microsoft Clarity provides free session recordings, heatmaps, and user behaviour analytics without traffic limits or time restrictions.

Notion

Notion provides a flexible workspace for docs, wikis, and lightweight databases, ideal when you need custom systems without heavy project-management overhead.

Next chapter

How to design growth experiments

Set clear hypotheses. Define success metrics. Calculate sample sizes. Structure experiments that produce valid, actionable results.

Playbook

Experimentation

Random experiments waste time and budget. A structured framework ensures every test teaches you something, even when it fails. Decide what to test, design experiments properly, analyse results accurately, and share learnings so the whole team gets smarter.

Growth wiki

Growth concepts explained in simple language

A/B testing

Compare two versions of a page, email, or feature to determine which performs better using statistical methods that isolate the impact of specific changes.
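
A minimal sketch of one such method, assuming a two-proportion z-test on conversion counts (the function name, variable names, and sample figures are illustrative):

```python
from math import erf, sqrt

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> tuple[float, float]:
    """Return the z statistic and two-sided p-value for conv_a/n_a vs conv_b/n_b."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)  # conversion rate assuming no real difference
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical result: 120/2400 conversions on control vs 156/2400 on the variant.
z, p = two_proportion_z_test(120, 2400, 156, 2400)
print(f"z = {z:.2f}, p = {p:.4f}")
```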

Hypothesis testing

Structure experiments around clear predictions to focus efforts on learning rather than random changes and make results easier to interpret afterward.

Minimum viable test

Design experiments that answer specific questions with minimum time and resources to maximise learning velocity without over-investing in unproven ideas.

Prioritisation

Systematically rank projects and opportunities using objective frameworks, ensuring scarce resources flow to highest-impact work.

Conversion rate

Calculate the percentage of visitors who complete desired actions to identify friction points and measure the effectiveness of marketing and product changes.
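
As a short sketch, assuming you count visitors and completed actions over the same period (the figures are illustrative):

```python
visitors, signups = 18_432, 512

# Conversion rate = completed actions / visitors, expressed as a percentage.
conversion_rate = signups / visitors * 100
print(f"{conversion_rate:.2f}%")  # 2.78%
```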