A pile of test ideas won’t help you grow. Prioritise by impact and feasibility to test smart.

My first conversion audit surfaced twenty-three issues, yet the team only fixed three. The rest died in Slack threads because no one owned a clear backlog. I realised that without a single list, insights stay trapped in notebooks and meetings.
Since then I have built a lightweight experimentation backlog for every client. The backlog turns raw research into ranked hypotheses, schedules tests and records outcomes in one place. It removes debate about what to test next and keeps the growth engine moving, sprint after sprint.
This chapter shows how I set up that system. We will create a growth backlog, write strong hypotheses, rank ideas by impact and lock each experiment into a clear brief. Follow along and your team will never argue over priorities again.
Start by creating the growth backlog. I use a simple Kanban board in Notion. Columns run left to right: Ideas, Ready, In progress, Completed and Archived. Each insight from audits or customer interviews becomes a card in the Ideas column.
Every card gets a consistent title that names the page, issue and metric. For example, “Pricing page – unclear savings – form-submit rate.” Consistent naming lets you search and filter at speed.
Add five mandatory fields inside the card: source link, problem statement, supporting data, rough lift estimate and owner. For lift I use a quick percentage guess based on similar past wins. The owner is the person responsible for pushing the test forward, not the developer.
Move cards to Ready only when data backs the problem and design or copy resources are available. This gate prevents clutter and keeps momentum.
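If you prefer to see the card structure spelled out, here is a minimal sketch in Python. The field names and the Ready gate check are my own illustrative assumptions, not a Notion integration; in practice these live as properties on each Notion card rather than in code.

```python
from dataclasses import dataclass
from enum import Enum


class Status(Enum):
    IDEAS = "Ideas"
    READY = "Ready"
    IN_PROGRESS = "In progress"
    COMPLETED = "Completed"
    ARCHIVED = "Archived"


@dataclass
class BacklogCard:
    title: str                  # "Page – issue – metric", e.g. "Pricing page – unclear savings – form-submit rate"
    source_link: str            # audit doc, interview notes or analytics view
    problem_statement: str
    supporting_data: str
    estimated_lift_pct: float   # rough guess based on similar past wins
    owner: str                  # the person pushing the test forward, not the developer
    has_resources: bool = False # design or copy capacity is confirmed
    status: Status = Status.IDEAS

    def can_move_to_ready(self) -> bool:
        """Gate for Ideas -> Ready: data backs the problem and resources exist."""
        return bool(self.supporting_data) and self.has_resources
```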
The backlog exists; next you need to write a hypothesis that turns each card into a testable claim.
A strong hypothesis follows the format: “If we change X for segment Y, then metric Z will improve because insight A.” Example: “If we add monthly cost comparison above the fold for self-serve prospects, form-submit rate will rise because interviews showed pricing confusion.”
Keep hypotheses short—two sentences max. Long explanations hide fuzzy thinking. Include only one change per hypothesis so results stay clear.
Attach the hypothesis to the backlog card under a dedicated field. This practice stops vague tasks such as “Rewrite headline” from entering development without purpose.
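For teams that like to keep the format honest, the hypothesis can be held as structured parts and rendered into the sentence. This is a sketch with assumed field names, not a required tool:

```python
from dataclasses import dataclass


@dataclass
class Hypothesis:
    change: str   # X – the single change being made
    segment: str  # Y – who sees it
    metric: str   # Z – the metric expected to move
    insight: str  # A – the evidence behind the claim

    def render(self) -> str:
        return (f"If we {self.change} for {self.segment}, "
                f"then {self.metric} will improve because {self.insight}.")


hypothesis = Hypothesis(
    change="add a monthly cost comparison above the fold",
    segment="self-serve prospects",
    metric="form-submit rate",
    insight="interviews showed pricing confusion",
)
print(hypothesis.render())
```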
With clear hypotheses ready, you can now rank them to decide what to test first.
I rank hypotheses using the ICE framework: impact, confidence and ease, each scored from one to five. Impact estimates potential lift on the target metric. Confidence measures evidence strength—customer quotes and analytics bumps earn higher scores than gut feel. Ease reflects effort in hours to design, build and review.
Multiply the three scores for a total between one and one hundred and twenty-five. Sort the Ideas column by this total every Monday. The top five become the sprint candidates.
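The Monday ranking pass is simple arithmetic. Here is a minimal sketch, assuming each idea carries its three one-to-five scores; the titles and scores below are placeholders:

```python
# Each idea: (title, impact, confidence, ease), all scored one to five.
ideas = [
    ("Pricing page – unclear savings – form-submit rate", 4, 4, 3),
    ("Homepage – vague headline – demo-request rate",     5, 2, 4),
    ("Signup form – too many fields – completion rate",   3, 5, 5),
]

# ICE total = impact x confidence x ease, so every idea scores between 1 and 125.
ranked = sorted(ideas, key=lambda idea: idea[1] * idea[2] * idea[3], reverse=True)

for title, impact, confidence, ease in ranked[:5]:  # top five become sprint candidates
    print(f"{impact * confidence * ease:>3}  {title}")
```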
Review scoring openly with the team. If design argues that a “quick copy tweak” actually needs a new component, adjust the ease score and let the card fall accordingly.
Ranking yields a clear top pick. The next step is to define the experiment so developers and analysts know exactly what to build and track.
Create an experiment brief inside the backlog card. Outline test type (A/B or multivariate), variant description, primary metric, guardrail metric and duration. I set guardrails on bounce rate and qualified-lead quality to avoid accidental harm.
Specify sample size using a calculator. Aim for ninety-five per cent confidence and a minimum detectable effect in line with the lift behind your impact score. If traffic is low, bundle similar pages or extend the duration rather than lowering statistical rigour.
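If you want to see what the calculator is doing under the hood, this sketch applies the standard two-proportion sample-size formula with scipy; the baseline rate and minimum detectable effect shown are placeholder values, not recommendations:

```python
from math import ceil, sqrt
from scipy.stats import norm


def sample_size_per_variant(baseline: float, mde: float,
                            alpha: float = 0.05, power: float = 0.80) -> int:
    """Visitors needed in each variant to detect an absolute lift of `mde`
    on a conversion rate of `baseline`, at the given confidence and power."""
    p1, p2 = baseline, baseline + mde
    p_bar = (p1 + p2) / 2
    z_alpha = norm.ppf(1 - alpha / 2)   # 1.96 for 95% confidence
    z_beta = norm.ppf(power)            # 0.84 for 80% power
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)


# Placeholder numbers: 3% baseline form-submit rate, hoping for +1 point absolute lift.
print(sample_size_per_variant(baseline=0.03, mde=0.01))
```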
Add technical notes: URL, targeting rules, tagging requirements and any necessary design assets. Link to Figma or copy docs so nothing blocks development.
Once the brief is complete move the card to In progress. After the test ends, log results, update the ICE scores and shift the card to Completed or Archived. This loop makes learning cumulative instead of circular.
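A small sketch of that closing step, treating cards as plain dictionaries; the field names and the example values are illustrative assumptions:

```python
from datetime import date


def close_experiment(card: dict, observed_lift_pct: float, destination: str) -> None:
    """Log the outcome and shift the card out of In progress."""
    assert destination in ("Completed", "Archived")
    card["result"] = {
        "observed_lift_pct": observed_lift_pct,
        "closed_on": date.today().isoformat(),
    }
    card["status"] = destination
    # The result is new evidence: revisit the confidence scores of related
    # ideas still in the Ideas column before the next Monday ranking pass.


card = {"title": "Pricing page – unclear savings – form-submit rate", "status": "In progress"}
close_experiment(card, observed_lift_pct=1.2, destination="Completed")
```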
The experiment is now live, closing the backlog cycle and feeding the next phase of test execution.
A disciplined experimentation backlog turns scattered insights into a steady queue of high-impact tests. Build the board, craft sharp hypotheses, rank by ICE and brief experiments with precision. Each step lifts booked-meeting rates while cutting debate and delay.
Implement this system today and future chapters on building and analysing A/B tests will slot in effortlessly. Your optimisation engine will shift from ad-hoc tweaks to a compounding, sprint-by-sprint growth machine.