A/B test landing pages

Test one element at a time starting with highest-traffic pages, track conversion rate by segment, prioritise headline and proof tests, then scale winners across similar pages.

Introduction

Page optimisation never stops. Your first version won't be optimal. Testing finds what works.

Test systematically. One element at a time so you know what drove the change. Highest-traffic pages first so you get results fast. Track by segment so you don't average away insights.

This chapter shows you what to test, in what order, and how to apply learnings across pages.

Start with highest-traffic pages

Pull your traffic data from the last 30 days. Rank pages by visitors.

Example ranking:

  1. Paid-skeptic page: 2,200 visitors, 3.2% conversion
  2. Compliance-driven page: 1,800 visitors, 2.8% conversion
  3. Proactive page: 1,200 visitors, 3.5% conversion
  4. Tool-chooser page: 1,100 visitors, 4.1% conversion

Start testing on the paid-skeptic page (highest traffic). You'll get results fastest. A test that needs 1,000 visitors to reach significance takes roughly 2 weeks on this page vs 4 weeks on the tool-chooser page.
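If you want to sanity-check that timeline for your own pages, here's a minimal sketch. The visitor counts and the 1,000-visitor threshold are the illustrative figures above, not universal rules:

```typescript
// Estimate how long a test takes to collect a target number of visitors,
// given a page's traffic over the last 30 days.
function estimateTestDays(visitorsLast30Days: number, visitorsNeeded: number): number {
  const visitorsPerDay = visitorsLast30Days / 30;
  return Math.ceil(visitorsNeeded / visitorsPerDay);
}

// Figures from the ranking above; 1,000 visitors assumed for significance.
console.log(estimateTestDays(2200, 1000)); // paid-skeptic: ~14 days
console.log(estimateTestDays(1100, 1000)); // tool-chooser: ~28 days
```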

Exception: if a low-traffic page has terrible conversion (under 1.5%), fix obvious problems first before systematic testing. Broken CTA, mismatched headline, wrong proof type. Get it to baseline (2.5% to 3%), then test optimisations.

Test above the fold first

Headlines drive 50% of conversion impact. Test these before proof type or CTA wording.

Test structure: Run 2 to 3 headline variants against control. Keep everything else identical (subhead, proof, CTA, images). Run for 2 to 4 weeks or until 95% statistical significance.
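The standard way to call a winner at 95% confidence is a two-proportion z-test, which any decent testing tool runs for you. A minimal sketch if you want to check the maths yourself (the visitor and conversion counts below are hypothetical):

```typescript
// Two-proportion z-test: has the variant beaten the control at 95% confidence?
function beatsControlAt95(
  controlVisitors: number, controlConversions: number,
  variantVisitors: number, variantConversions: number,
): boolean {
  const controlRate = controlConversions / controlVisitors;
  const variantRate = variantConversions / variantVisitors;
  // Pooled conversion rate under the null hypothesis (no real difference).
  const pooled = (controlConversions + variantConversions) / (controlVisitors + variantVisitors);
  const standardError = Math.sqrt(pooled * (1 - pooled) * (1 / controlVisitors + 1 / variantVisitors));
  const z = (variantRate - controlRate) / standardError;
  // 1.96 is the 95% two-sided critical value; z above it means the variant
  // is ahead and the difference is unlikely to be noise.
  return z > 1.96;
}

// Hypothetical counts at a 3.2% vs 3.8% split:
console.log(beatsControlAt95(5000, 160, 5000, 190));   // false: not enough traffic yet
console.log(beatsControlAt95(10000, 320, 10000, 380)); // true: significant at this volume
```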

Paid-skeptic page headline test:

Control: "Email outbound: £50 CAC vs £400 on paid ads"Variant A: "Cut your CAC from £400 to £50 with email outbound"Variant B: "Replace paid ad spend with owned email channel"

Hypothesis: Control uses comparison (£50 vs £400). Variant A uses outcome (cut CAC). Variant B uses benefit (own the channel). Testing which angle resonates more.

Run test for 3 weeks. Results:

  • Control: 3.2% conversion (baseline)
  • Variant A: 3.8% conversion (+19% lift)
  • Variant B: 2.6% conversion (-19% drop)

Winner: Variant A. Outcome framing (cut CAC from £400 to £50) beat comparison framing. Make Variant A the new control.

Apply learning: Test similar outcome framing on other pages. On compliance-driven, test "Pass ISO 27001 in 94 days" vs current "87% pass rate" headline.

Compliance-driven page headline test:

Control: "87% of clients pass ISO 27001 audits first time"Variant A: "Pass ISO 27001 in 94 days with validated training"Variant B: "204 companies certified, 87% first-time pass rate"

Hypothesis: Control leads with stat. Variant A leads with timeline outcome. Variant B leads with social proof (volume + stat).

Run test for 4 weeks (lower traffic than paid-skeptic). Results:

  • Control: 2.8% conversion (baseline)
  • Variant B: 3.4% conversion (+21% lift)
  • Variant A: 2.9% conversion (+4%, not significant)

Winner: Variant B. Social proof (204 companies) added credibility to the stat. Make Variant B the new control.

Test proof type and placement

After headline is optimised, test what proof you show and where you show it.

Proof type test (compliance-driven page):

Control: Data proof section (87% pass rate, 94-day timeline, stats-focused)
Variant A: Testimonial proof section (3 compliance officer quotes with photos)
Variant B: Mixed proof (1 stat + 1 testimonial + 1 process diagram)

Hypothesis: Compliance-driven is risk-averse. Testimonials might outperform pure data.

Run test for 4 weeks. Results:

  • Control (data): 3.4% conversion (current baseline after headline test)
  • Variant A (testimonials): 4.1% conversion (+21% lift)
  • Variant B (mixed): 3.9% conversion (+15% lift)

Winner: Variant A (testimonials). Risk-averse buyers trust peer validation more than stats. Make testimonials primary proof, keep stats as supporting data.

Proof placement test (paid-skeptic page):

Control: Economic breakdown in third section (after intro and problem)Variant A: Economic breakdown immediately below headline (above fold)Variant B: Economic breakdown as visual (chart) in first section instead of text

Hypothesis: Analytical buyers want data fast. Moving it higher or making it visual might improve conversion.

Run test for 3 weeks. Results:

  • Control (third section): 3.8% conversion (current baseline)
  • Variant A (above fold): 4.3% conversion (+13% lift)
  • Variant B (visual first): 4.6% conversion (+21% lift)

Winner: Variant B. Chart showing £400 vs £50 CAC above fold performed best. Analytical buyers scan visuals faster than text. Make chart the hero element.

Test CTA wording and placement

After headline and proof are optimised, test CTA.

CTA wording test (tool-chooser page):

Control: "Start trial"Variant A: "Try Lemlist free for 14 days"Variant B: "See how Lemlist compares"

Hypothesis: Control is direct. Variant A adds specifics (free, 14 days). Variant B offers comparison (matches their evaluation mindset).

Run test for 3 weeks. Results:

  • Control: 4.1% conversion (baseline)
  • Variant A: 4.7% conversion (+15% lift)
  • Variant B: 3.8% conversion (-7%)

Winner: Variant A. Specifics (free, 14 days) reduced perceived risk. "Start trial" felt like a commitment; "Try free for 14 days" felt like a test.

CTA placement test (proactive page):

Control: One CTA above fold ("Download business case"), one CTA in footer
Variant A: CTA above fold + mid-page CTA after ROI section
Variant B: CTA above fold + mid-page + sticky CTA that follows scroll

Hypothesis: Multiple CTAs catch people at different scroll depths. Sticky CTA ensures CTA is always visible.

Run test for 4 weeks. Results:

  • Control (2 CTAs): 3.5% conversion (baseline)
  • Variant A (3 CTAs): 3.9% conversion (+11% lift)
  • Variant B (sticky CTA): 4.2% conversion (+20% lift)

Winner: Variant B. Sticky CTA meant CTA was always visible. No hunting for conversion point. Implement sticky CTA across all pages.
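A sticky CTA is mostly a styling change. A minimal sketch of one way to mount it, assuming a single bar fixed to the bottom of the viewport; the copy, link target, and styles are placeholders to adapt to your page:

```typescript
// Append a sticky CTA bar that stays visible as the visitor scrolls.
function mountStickyCta(label: string, href: string): void {
  const bar = document.createElement("div");
  bar.style.position = "fixed"; // stays in view regardless of scroll depth
  bar.style.bottom = "0";
  bar.style.left = "0";
  bar.style.right = "0";
  bar.style.padding = "12px";
  bar.style.textAlign = "center";
  bar.style.background = "#ffffff";
  bar.style.boxShadow = "0 -2px 8px rgba(0, 0, 0, 0.1)";

  const cta = document.createElement("a");
  cta.href = href;
  cta.textContent = label;
  bar.appendChild(cta);
  document.body.appendChild(bar);
}

// Placeholder copy and link target.
mountStickyCta("Try Lemlist free for 14 days", "/trial");
```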

Apply learnings across similar pages

When test wins on one page, apply the learning to similar pages without re-testing everything.

Example 1: Headline outcome framing

Test on paid-skeptic: Outcome framing ("Cut CAC from £400 to £50") beat comparison framing, +19% lift

Apply to similar pages:

  • LinkedIn-first page: Change "Email deliverability: 94% inbox. LinkedIn: spam filtered" to "Reach inbox 94% of the time vs LinkedIn spam filters"
  • List-skeptic page: Change "12% response vs 0.8%" to "Boost response rate from 0.8% to 12% with built lists"

Don't re-test on each page. Apply the pattern (outcome framing beats comparison) and monitor conversion. If it drops, revert. If it holds or improves, keep it.
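One way to monitor without a full re-test is a before/after guardrail: compare conversion over comparable windows and revert if the drop is bigger than noise would explain. A rough sketch, where the two-standard-error threshold and the window sizes are assumptions, not rules:

```typescript
// Guardrail check after applying a winning pattern to a similar page:
// compare conversion before vs after the change and flag a revert
// if the drop looks larger than random noise.
function shouldRevert(
  beforeVisitors: number, beforeConversions: number,
  afterVisitors: number, afterConversions: number,
): boolean {
  const before = beforeConversions / beforeVisitors;
  const after = afterConversions / afterVisitors;
  // Rough noise band: one standard error on the before-period rate.
  const noise = Math.sqrt((before * (1 - before)) / beforeVisitors);
  return after < before - 2 * noise; // drop beyond ~2 standard errors: revert
}

// Hypothetical: 3.0% over 2,000 visitors before, 2.1% over 2,000 after.
console.log(shouldRevert(2000, 60, 2000, 42)); // true: revert and re-check
```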

Example 2: Testimonials for risk-averse segments

Test on compliance-driven: Testimonials beat data proof, +21% lift

Apply to similar risk-averse segments:

  • Breach-reactive page: Add customer testimonials from security leads who implemented fast
  • Any future compliance-focused pages in other industries

Don't re-test the proof type choice. Test won, apply pattern to similar buyer psychology.

Example 3: Visual data for analytical segments

Test on paid-skeptic: Chart above fold beat text breakdown, +21% lift

Apply to analytical segments:

  • Tool-chooser page: Make feature comparison table the hero element (visual data)
  • LinkedIn-first page: Make deliverability comparison chart primary visual

Apply the pattern (analytical buyers scan visuals faster) without re-testing each instance.

Track conversion rate by segment

Don't average conversion across all traffic. Track by segment to see segment-specific patterns.

Set up tracking:

  • Tag incoming traffic by source (LinkedIn proof ad = compliance-driven, Google "outbound vs paid" = paid-skeptic)
  • Track conversion rate per segment per page
  • Identify if certain segments convert better on certain pages

Example insights:

Compliance-driven traffic:

  • Converts 3.4% on /compliance page (matched page)
  • Converts 1.2% on /security-culture page (wrong page, ROI focus doesn't match their need)
  • Action: Ensure LinkedIn proof ads link to /compliance, not generic /security-culture

Paid-skeptic traffic:

  • Converts 4.6% on /outbound-roi page (matched page)
  • Converts 2.8% on /list-building page (related but not exact match)
  • Action: Keep paid-skeptic traffic on /outbound-roi, don't dilute with generic list building content

Tool-chooser traffic from "Lemlist vs Apollo" search:

  • Converts 4.7% on dedicated comparison page
  • Converts 2.1% on generic /outbound-roi page (not specific enough)
  • Action: Build dedicated comparison page if "Lemlist vs Apollo" search volume justifies it

Segment-specific tracking prevents you from averaging away insights. A page might show 3.5% overall conversion but actually be 5% for matched traffic and 1.5% for mismatched traffic.
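A minimal sketch of that per-segment tracking, assuming each visit is logged with a segment tag (inferred from its traffic source) and a converted flag; the field names and segment labels are placeholders:

```typescript
// Per-segment conversion tracking: each visit is tagged with the segment
// inferred from its source, plus whether it converted.
interface Visit {
  segment: "compliance-driven" | "paid-skeptic" | "tool-chooser" | "proactive";
  page: string;
  converted: boolean;
}

function conversionBySegment(visits: Visit[]): Map<string, number> {
  const totals = new Map<string, { visits: number; conversions: number }>();
  for (const v of visits) {
    const key = `${v.segment} on ${v.page}`;
    const t = totals.get(key) ?? { visits: 0, conversions: 0 };
    t.visits += 1;
    if (v.converted) t.conversions += 1;
    totals.set(key, t);
  }
  const rates = new Map<string, number>();
  for (const [key, t] of totals) rates.set(key, t.conversions / t.visits);
  return rates;
}
```

On the numbers above, 5% matched and 1.5% mismatched traffic blend to roughly 3.5% when the split is about 57/43, which is exactly the detail the overall figure hides.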

Conclusion

Test systematically: start with highest-traffic pages for fast results, test headlines first (50% of impact), then proof type and placement, then CTA wording and placement. Test one element at a time. Track by segment, not overall average. Apply winning patterns across similar pages without re-testing everything. Your pages improve with every test, and conversion compounds over time.
