Every prioritisation framework is essentially the same. You're trying to estimate impact, confidence, and effort, then sort by some combination of those factors.
ICE scores each idea from 1 to 10 on Impact, Confidence, and Ease, then averages the three. PIE uses Potential, Importance, and Ease. Some teams use weighted formulas. Others just use high/medium/low ratings.
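The ICE average can be sketched in a few lines. This is a minimal illustration, not a standard tool: the `Idea` class, its field names, and the `ice_score` helper are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Idea:
    name: str
    impact: int      # how much the metric moves if it works (1-10)
    confidence: int  # how sure you are that it will work (1-10)
    ease: int        # how cheap it is to build and run (1-10)

    @property
    def ice_score(self) -> float:
        # ICE is the simple average of the three scores.
        return (self.impact + self.confidence + self.ease) / 3

# Illustrative backlog entries with made-up scores.
ideas = [
    Idea("Shorten signup form", impact=6, confidence=8, ease=9),
    Idea("Rebuild pricing page", impact=9, confidence=4, ease=2),
    Idea("Tweak button colour", impact=2, confidence=5, ease=10),
]

# Sort the backlog highest ICE score first.
for idea in sorted(ideas, key=lambda i: i.ice_score, reverse=True):
    print(f"{idea.name}: {idea.ice_score:.1f}")
```

Note how the low-confidence, low-ease "Rebuild pricing page" sinks despite its high impact score; a weighted formula would let you tune that trade-off.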
Pick whatever framework your team will actually use. The specific scoring method matters less than being consistent and honest. Most teams inflate scores for ideas they're excited about and deflate scores for ideas that seem boring but important. Fight that tendency.
The only filter that really matters is revenue impact. An experiment that improves a metric nobody cares about is a waste of time, even if it wins. Before scoring any idea, ask: if this works, how does it affect revenue? If the answer is unclear or indirect, score it lower.
A backlog that never gets reviewed becomes a graveyard of abandoned ideas. A backlog that gets reviewed constantly becomes a distraction from actually running tests.
Review your backlog when you need to decide what to test next. That might be weekly if you're running fast experiments on a high-traffic site, or monthly if you're in a slower B2B context with longer test cycles.
During each review, do three things. First, add any new ideas that came up since the last review. Second, re-score ideas if you've learned something that changes your estimate of impact or confidence. Third, archive ideas that are no longer relevant because the page changed, the problem was solved another way, or you've learned enough to know the idea won't work.
The goal is a backlog that's small enough to be useful. If you have 200 ideas sitting there, you're not going to read through them every time you need to pick a test. Aim for 20-30 active ideas, with older or lower-priority items archived somewhere you can search if needed.
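The review routine above (add, re-score, archive, then cap the active list) can be sketched as a small data structure. Everything here is illustrative under stated assumptions: `Backlog`, its `(name, score)` pairs, and the `MAX_ACTIVE` cap of 25 (a midpoint of the 20-30 range) are hypothetical names, not part of any real tool.

```python
from dataclasses import dataclass, field

MAX_ACTIVE = 25  # midpoint of the suggested 20-30 active ideas


@dataclass
class Backlog:
    active: list = field(default_factory=list)    # (name, score) pairs
    archived: list = field(default_factory=list)  # searchable, not deleted

    def review(self, new_ideas, rescored, stale):
        # 1. Add ideas that came up since the last review.
        self.active.extend(new_ideas)
        # 2. Re-score ideas where new learning changed the estimate.
        self.active = [(n, rescored.get(n, s)) for n, s in self.active]
        # 3. Archive ideas that are no longer relevant.
        gone = set(stale)
        self.active = [(n, s) for n, s in self.active if n not in gone]
        self.archived.extend(stale)
        # Keep only the top-scored ideas active; archive the overflow.
        self.active.sort(key=lambda pair: pair[1], reverse=True)
        self.archived.extend(n for n, _ in self.active[MAX_ACTIVE:])
        self.active = self.active[:MAX_ACTIVE]
```

The point of the cap is the one made above: an active list you can read in one sitting, with everything else still searchable in the archive.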
Your backlog should reflect your current growth priorities, not a random collection of opportunities to improve things.
If your quarterly focus is improving activation rate, your top-scored experiments should target that metric. If you're trying to increase deal size, your backlog should include tests on pricing pages and upgrade flows. The backlog isn't separate from your growth strategy; it's one of the tools for executing it.
This also means your priorities will shift. An experiment idea that scored highly last quarter might drop in priority this quarter because you're focused on a different engine. That's fine. Re-score based on current priorities, not historical scores.
When you review your backlog, start by reminding yourself what you're trying to improve right now. Then look at which ideas directly target that metric. Those go to the top.