Find bottlenecks through research and data

Look at your dashboard to identify where conversion drops, then use customer research to understand why. Bottlenecks are where the most impact lives.

Introduction

Before running experiments, you need to know where to focus. Most companies test random ideas ("let's try a green button") without understanding where the real problems are. This wastes time testing things that don't matter.

The systematic approach: analyse your dashboard to find conversion bottlenecks, use customer research to understand why people get stuck at those points, review competitive positioning to identify gaps, and prioritise experiments based on potential impact.

This chapter shows you how to identify bottlenecks using data, extract insights from customer conversations, spot competitive opportunities, and build an experiment backlog prioritised by impact potential.

Analyse your dashboard to find conversion drops


Start with your growth orchestration dashboard showing conversion rates at each lifecycle stage: visitors → engaged visitors → leads → MQLs → SQLs → opportunities → customers. Where's the biggest drop?

Example dashboard for cybersecurity training: 10,000 visitors → 6,000 engaged (60% engagement rate, solid) → 300 leads (5% capture rate, decent) → 180 MQLs (60% MQL rate, good) → 90 SQLs (50% SQL rate, acceptable) → 30 opportunities (33% opportunity rate, this is the bottleneck) → 9 customers (30% close rate, fine).

The bottleneck is SQL → opportunity. 67% of SQLs aren't becoming opportunities. That's where to focus experiments. Why aren't qualified sales leads turning into active opportunities?
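A few lines of code can make this scan systematic. Here's a minimal Python sketch over the example counts above; the benchmark rates are illustrative placeholders, not prescribed values (a 5% lead capture rate can be healthy whilst a 33% SQL → opportunity rate is not, so compare each transition against your own historical or industry norms rather than hunting for the lowest raw rate):

```python
# Example funnel counts from the dashboard above, paired with illustrative
# benchmark rates for the transition INTO each stage (placeholders -- swap
# in your own historical or industry norms).
funnel = [
    ("visitors",      10_000, None),
    ("engaged",        6_000, 0.55),  # visitors -> engaged
    ("leads",            300, 0.04),  # engaged -> leads
    ("MQLs",             180, 0.55),  # leads -> MQLs
    ("SQLs",              90, 0.45),  # MQLs -> SQLs
    ("opportunities",     30, 0.50),  # SQLs -> opportunities
    ("customers",          9, 0.25),  # opportunities -> customers
]

# Flag the transition that falls furthest below its benchmark.
worst_gap, bottleneck = 0.0, None
for (stage, count, _), (nxt, nxt_count, benchmark) in zip(funnel, funnel[1:]):
    rate = nxt_count / count
    gap = benchmark - rate  # positive = underperforming the benchmark
    print(f"{stage} -> {nxt}: {rate:.0%} vs benchmark {benchmark:.0%}")
    if gap > worst_gap:
        worst_gap, bottleneck = gap, f"{stage} -> {nxt}"

print("Biggest shortfall:", bottleneck)  # SQLs -> opportunities
```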

Also look at segment-specific conversion rates. Maybe the overall SQL → opportunity rate is 33%, but the compliance-driven segment converts at 45% whilst the proactive segment converts at 20%. Now you know to focus experiments on improving proactive-segment conversion specifically, rather than chasing a blended improvement.

Volume versus conversion trade-off: the stage with the lowest conversion rate isn't automatically where to focus, because volume matters too. Suppose visitors → engaged converts at 40% (10,000 visitors, 4,000 engaged) whilst engaged → lead converts at 60%. The engagement rate looks like the weaker link, but a 5-percentage-point improvement to engagement yields 500 more engaged visitors, whilst the same 5-point improvement to lead capture yields only 200 more leads. The same-sized improvement on the higher-volume stage has the bigger absolute impact. To compare gains at different stages in the same unit, carry each through to customers, as in the impact formula below.

Calculate potential impact: [volume at stage] × [conversion rate improvement] = additional conversions at that stage; multiply by the downstream conversion rates to estimate additional customers. Prioritise experiments on stages where small improvements yield large customer gains.
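As a sketch of that calculation (the three-point lift and 30% close rate come from the SQL → opportunity example used throughout this chapter; the function name is ours, not a standard library):

```python
def added_customers(stage_volume: int, lift_points: float,
                    downstream_rate: float) -> float:
    """Extra conversions at a stage (volume x rate improvement),
    carried through the remaining funnel to extra customers."""
    return stage_volume * lift_points * downstream_rate

# SQL -> opportunity: 90 SQLs/month, conversion improved from 33% to 36%
# (+3 points), and opportunities close at 30%.
print(added_customers(90, 0.03, 0.30))  # ~0.8 extra customers per month
```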

Use customer research to understand why bottlenecks exist

Data tells you where people drop off. Research tells you why. Interview customers who successfully converted and those who didn't to understand what creates bottlenecks.

For the SQL → opportunity bottleneck: interview SQLs who became opportunities ("What convinced you to move forward?") and SQLs who didn't ("What stopped you?"). Their answers reveal the belief gaps that create the bottleneck.

Example insights from cybersecurity training: SQLs who became opportunities said: "The demo showed exactly how behaviour tracking works, I could see proving ROI" (demo quality matters), "Our rep understood our compliance requirements specifically" (rep knowledge matters), "We could start with a small pilot" (risk reduction matters).

SQLs who didn't become opportunities said: "We couldn't get budget approval without clearer ROI data" (ROI proof insufficient), "Implementation seemed complex, IT said they're too busy" (implementation concern), "We went with a competitor who offered quarterly payment" (pricing flexibility matters).

These insights suggest experiments: improve demo quality to show behaviour tracking clearly, train reps on industry-specific compliance, create ROI templates that SQLs can use internally, simplify implementation messaging, add payment flexibility.

Also interview churned customers and lost deals. "What almost stopped you from buying?" reveals friction points. "Why did you leave?" reveals unmet expectations. These become experiments to fix before they cause more loss.

Conduct 6-8 interviews per bottleneck. Look for patterns, not individual complaints. If 1 person mentions pricing flexibility, it's an anecdote. If 5 mention it, it's a pattern worth testing.
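Tallying makes the pattern-versus-anecdote distinction explicit. A small sketch, assuming you tag each objection mentioned across interviews with a theme label (the tags below are invented for illustration; use whatever coding scheme your research team applies to transcripts):

```python
from collections import Counter

# One tag per objection mentioned across eight SQL interviews
# (invented labels for illustration).
mentions = [
    "pricing-flexibility", "roi-proof", "pricing-flexibility",
    "implementation-complexity", "roi-proof", "pricing-flexibility",
    "demo-quality", "pricing-flexibility", "roi-proof",
    "pricing-flexibility",
]

for theme, n in Counter(mentions).most_common():
    # Five or more mentions = pattern worth testing; one = anecdote.
    label = "pattern" if n >= 5 else "anecdote"
    print(f"{theme}: {n} ({label})")
```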

Review competitive positioning to identify gaps

Analyse competitor messaging and positioning. Where do they emphasise things you don't? Those might be belief gaps you're not addressing, which creates bottlenecks.

For cybersecurity training: Competitor A emphasises "no IT setup required" prominently. Competitor B emphasises "quarterly payment options". Competitor C emphasises "industry-specific content". You emphasise "behaviour tracking and ROI proof". Each positioning addresses different doubts.

If prospects are comparing you to Competitor B and you're losing 60% of competitive deals to them, their quarterly payment emphasis might be addressing a belief gap you're ignoring. Experiment: test payment flexibility messaging and offers.

Also review competitor landing pages, ad creative, and demo approaches. What proof types do they use that you don't? If a competitor shows customer video testimonials whilst you show stats, testimonials may address credibility doubts better for certain segments. Test adding testimonials.

Use competitive review sites (G2, Capterra) to see what customers say about competitors. "Pros: Easy setup, fast deployment. Cons: Limited behaviour tracking." This tells you: competitors win on ease, you might win on sophistication. Experiment: emphasise ease more (to compete) or emphasise sophistication more (to differentiate). Test both angles.

Don't just copy competitors. Understand what belief gaps their positioning addresses, then test whether addressing those gaps improves your conversion without diluting your differentiation.

Prioritise experiments by potential impact

Build an experiment backlog with potential impact scores. Don't just make a list and test randomly. Calculate expected value of each experiment.

Impact formula: [stage volume] × [estimated conversion lift] × [confidence level] = expected additional conversions at that stage, which you then carry through the remaining funnel to expected customers gained. Stage volume is how many people reach this stage per month. Estimated conversion lift is your hypothesis about improvement (a 5% lift? a 20% lift?). Confidence level is how sure you are it'll work (70% confident? 90%?).

Example: SQL → opportunity bottleneck (90 SQLs per month, 33% currently convert, 30 opportunities). Experiment hypothesis: adding an ROI calculator to the sales process will improve conversion by 10% (from 33% to 36%). Confidence: 60% (not sure it'll work). Expected value: 90 SQLs × 3-point lift × 60% confidence = 1.6 additional opportunities per month; at the 30% close rate, that's roughly 0.5 additional customers per month, or about 6 per year.

Compare to another experiment: improve the landing page headline for the compliance segment (1,500 visitors per month to this page, 4% currently convert to lead, 60 leads). Hypothesis: an outcome-focused headline will improve conversion by 15% (from 4% to 4.6%). Confidence: 80% (similar tests worked before). Expected value: 1,500 × 0.6-point lift × 80% confidence = 7.2 additional leads per month. But these leads convert to customers at a 2% rate, so 7.2 × 0.02 = 0.14 additional customers per month, about 1.7 per year.
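Both calculations as one reusable sketch (the function and parameter names are ours; "relative lift" means the hypothesised percentage improvement on the current rate):

```python
def expected_customers_per_year(stage_volume: float, current_rate: float,
                                relative_lift: float, confidence: float,
                                downstream_rate: float) -> float:
    lift_points = current_rate * relative_lift  # e.g. 33% x 10% = 3 points
    extra_per_month = stage_volume * lift_points * confidence * downstream_rate
    return extra_per_month * 12

# ROI calculator: 90 SQLs/month, 33% rate, +10% lift, 60% confidence,
# opportunities close at 30%.
print(expected_customers_per_year(90, 0.33, 0.10, 0.60, 0.30))     # ~6.4

# Headline test: 1,500 visitors/month, 4% rate, +15% lift, 80% confidence,
# leads become customers at 2%.
print(expected_customers_per_year(1_500, 0.04, 0.15, 0.80, 0.02))  # ~1.7
```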

The SQL → opportunity experiment has the higher expected value (roughly 6 customers per year versus 1.7), so test it first. But also consider effort: if the landing page headline test takes 2 hours and the SQL experiment takes 40 hours, ROI might favour the quick win.

Balance impact and effort: high impact + low effort = test immediately. High impact + high effort = schedule carefully. Low impact + low effort = test when you have spare capacity. Low impact + high effort = don't test at all.
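One way to apply that matrix numerically is to rank the backlog by expected value per hour of effort. A sketch using this section's two examples (the effort figures are the illustrative ones above):

```python
# (experiment, expected customers per year, effort in hours)
backlog = [
    ("ROI calculator in sales process",  6.4, 40),
    ("Compliance landing page headline", 1.7,  2),
]

# Highest expected value per hour of effort first.
for name, ev, hours in sorted(backlog, key=lambda e: e[1] / e[2], reverse=True):
    print(f"{name}: {ev / hours:.2f} customers/year per hour")
```

On these numbers the headline test ranks first (0.85 versus 0.16 customers per year per hour), matching the quick-win logic above.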

Conclusion

Find bottlenecks by analysing your dashboard for conversion drops. Calculate potential impact: volume at stage × conversion rate improvement = additional conversions, carried through the funnel to additional customers. Prioritise stages where small improvements yield large gains.

Use customer research to understand why bottlenecks exist. Interview people who converted (what convinced them) and people who didn't (what stopped them). Look for patterns across 6-8 interviews, not individual anecdotes.

Review competitive positioning to identify belief gaps you're not addressing. Analyse competitor messaging, proof types, and customer reviews. Test whether addressing gaps competitors emphasise improves your conversion without diluting differentiation.

Prioritise experiments by expected value: stage volume × estimated lift × confidence level. Balance impact and effort. Test high-impact low-effort experiments first. Skip low-impact high-effort experiments entirely.

Next chapter: write proper hypotheses and design experiments with controls.

Related tools

VWO

From 393 per month. VWO provides A/B testing, personalisation, and behaviour analytics to optimise website conversion rates through data-driven experimentation.

Hotjar

From 39 per month. Hotjar captures user behaviour through heatmaps, session recordings, and feedback polls to understand how visitors use your website.

Microsoft Clarity

Free. Microsoft Clarity provides free session recordings, heatmaps, and user behaviour analytics without traffic limits or time restrictions.

Notion

From 12 per month. Flexible workspace for docs, wikis, and lightweight databases, ideal when you need custom systems without heavy project management overhead.

Related wiki articles

A/B testing

Compare two versions of a page, email, or feature to determine which performs better using statistical methods that isolate the impact of specific changes.

Hypothesis testing

Structure experiments around clear predictions to focus efforts on learning rather than random changes and make results easier to interpret afterward.

Minimum viable test

Design experiments that answer specific questions with minimum time and resources to maximise learning velocity without over-investing in unproven ideas.

Prioritisation

Systematically rank projects and opportunities using objective frameworks, ensuring scarce resources flow to highest-impact work.

Lead capture rate

The percentage of engaged website visitors who submit their contact information and become leads.

Further reading

Experimentation

Look at your dashboard to identify where conversion drops, then use customer research to understand why. Bottlenecks are where the most impact lives.