How to analyse experiment results

Interpret data correctly. Calculate statistical significance. Distinguish signal from noise. Extract insights that inform next experiments.

Introduction

Data without interpretation is just numbers. Analysis turns results into decisions. Did the experiment win, lose, or produce inconclusive results? Was the change statistically significant or just noise? What did we learn that applies beyond this specific test? This chapter teaches you how to read results properly, calculate confidence intervals, spot patterns, and document insights so every experiment, win or lose, makes you smarter.

Calculate statistical significance correctly

Begin by analysing the experiment in three passes. First, validate data integrity: confirm traffic split, event firing and sample size. If the variant received less than thirty-five per cent of traffic or tracking broke for a day, mark the test invalid and rerun.
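
If you want to check the split programmatically, a chi-square goodness-of-fit test flags sample ratio mismatch. The sketch below assumes a planned 50/50 split; the visitor counts are made up, so swap in your own totals.

```python
# Sketch of a traffic-split sanity check, assuming a planned 50/50 split.
# The visitor counts below are hypothetical; substitute your own totals.
from scipy.stats import chisquare

control_visitors = 4_820
variant_visitors = 4_310   # noticeably below half of total traffic

observed = [control_visitors, variant_visitors]
total = sum(observed)
expected = [total / 2, total / 2]  # what a clean 50/50 split would deliver

stat, p_value = chisquare(f_obs=observed, f_exp=expected)

# A very small p-value signals a sample ratio mismatch: the split drifted
# from the plan, so mark the test invalid and rerun rather than analyse it.
print(f"chi-square = {stat:.2f}, p = {p_value:.4f}")
if p_value < 0.01:
    print("Likely sample ratio mismatch – mark the experiment invalid.")
```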

Second, read the primary metric. Use the testing tool’s statistics and insist on ninety-five per cent confidence. Do not declare a winner early; random spikes flatten with time.
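
For reference, the calculation behind most tools’ verdict on a conversion metric is a two-proportion z-test. The sketch below is a minimal, illustrative version with invented numbers, not a replacement for your testing tool’s statistics.

```python
# Minimal sketch of a two-proportion z-test on conversion rates.
# All counts are hypothetical and purely illustrative.
from math import sqrt
from scipy.stats import norm

control_conversions, control_visitors = 310, 5_000
variant_conversions, variant_visitors = 368, 5_000

p_control = control_conversions / control_visitors
p_variant = variant_conversions / variant_visitors
p_pooled = (control_conversions + variant_conversions) / (control_visitors + variant_visitors)

# Standard error under the null hypothesis of no difference
se = sqrt(p_pooled * (1 - p_pooled) * (1 / control_visitors + 1 / variant_visitors))
z = (p_variant - p_control) / se
p_value = 2 * (1 - norm.cdf(abs(z)))  # two-sided test

print(f"control {p_control:.2%}, variant {p_variant:.2%}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Statistically significant at ninety-five per cent confidence.")
else:
    print("Not significant yet – keep the test running.")
```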

Third, inspect secondary metrics and segments. A headline that lifts overall leads might hide a drop among enterprise visitors. An overall win that hurts a strategic segment is not a real win.
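
If you can export raw events, a quick segment read-out makes these swings visible. The sketch below assumes hypothetical column names and invented numbers; the enterprise row shows how an overall lift can mask a strategic drop.

```python
# Hedged sketch of a segment read-out, assuming an export with segment,
# variant, visitor and conversion columns. All figures are illustrative.
import pandas as pd

events = pd.DataFrame({
    "variant":   ["control", "variant", "control", "variant"],
    "segment":   ["smb", "smb", "enterprise", "enterprise"],
    "visitors":  [2_400, 2_380, 600, 610],
    "converted": [150, 186, 48, 37],
})

events["conversion_rate"] = events["converted"] / events["visitors"]
pivot = events.pivot(index="segment", columns="variant", values="conversion_rate")
pivot["lift"] = pivot["variant"] / pivot["control"] - 1

# The SMB segment lifts while the enterprise segment drops – exactly the
# pattern an aggregate read would hide.
print(pivot.round(3))
```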

Log each pass in the backlog card. Note exclusions, confidence and any surprising segment swings. This discipline feeds the story you will share next.

The numbers are clear; now you need people to act on them, which is the focus of the next section.

Turn the results into a story people act on

Write a short results story. Start with a one-line headline: “Variant lifted booked meetings by nine per cent at ninety-five per cent confidence.” Add two supporting bullets. The first explains why you tested the change, anchored in the original hypothesis. The second states what you recommend: roll out, iterate or drop.

Include one chart only. A simple bar or line showing cumulative conversions keeps eyes on the message. Flooding stakeholders with p-values and z-scores dilutes impact.
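
If you build the chart yourself rather than screenshotting the tool, a few lines of plotting code are enough. The sketch below uses invented daily figures purely to illustrate the shape of the chart.

```python
# Minimal sketch of the single results chart: cumulative conversions per arm.
# The daily numbers are invented for illustration only.
import matplotlib.pyplot as plt

days = list(range(1, 15))
control_daily = [18, 22, 19, 25, 21, 24, 20, 23, 22, 26, 21, 24, 25, 23]
variant_daily = [21, 24, 22, 27, 25, 26, 24, 27, 25, 29, 26, 27, 28, 26]

def cumulative(series):
    # Running total so the lines show how the gap opens over time
    total, out = 0, []
    for value in series:
        total += value
        out.append(total)
    return out

plt.plot(days, cumulative(control_daily), label="Control")
plt.plot(days, cumulative(variant_daily), label="Variant")
plt.xlabel("Day of test")
plt.ylabel("Cumulative booked meetings")
plt.title("Variant vs control – cumulative conversions")
plt.legend()
plt.savefig("results_story_chart.png", dpi=150)
```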

Deliver the story in the team channel and at the next stand-up. Invite one question, then move on. Concise reporting builds trust and keeps the backlog moving.

With the story told, you must store the insight where future teammates can find it. The next section covers building that library.

Extract learnings beyond the immediate test

Create a learning library in Notion. Each experiment becomes a row with standard fields: date, page, hypothesis, outcome, lift, audience notes and a link to the results story.

Tag each row as a win or a loss. Losses matter because they save repeat effort. Colour-code the tags for quick scanning.

Schedule a monthly review. Filter the table for wins on similar pages and compile a pattern report. For example, three headline tests may reveal that specific numbers beat generic benefits on pricing pages.
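
If the library outgrows manual filtering, the same review can be scripted. The sketch below models rows with the fields listed above and pulls the wins for one page type; every row, value and link is hypothetical.

```python
# Sketch of the learning library as plain data, for scripting the monthly
# review instead of filtering by hand in Notion. Field names mirror the
# columns described above; the example rows are hypothetical.
from dataclasses import dataclass

@dataclass
class Experiment:
    date: str
    page: str
    hypothesis: str
    outcome: str      # "win" or "loss"
    lift: float       # relative lift, e.g. 0.09 for +9%
    audience_notes: str
    results_link: str

library = [
    Experiment("2024-03-04", "pricing", "Specific number beats generic benefit", "win", 0.09, "SMB heavy", "https://example.com/exp-12"),
    Experiment("2024-04-11", "pricing", "Longer page increases demo requests", "loss", -0.03, "All traffic", "https://example.com/exp-15"),
    Experiment("2024-05-02", "pricing", "Customer count in headline", "win", 0.06, "Paid traffic", "https://example.com/exp-19"),
]

# Monthly review: pull every win on a given page type to look for a pattern.
pricing_wins = [e for e in library if e.page == "pricing" and e.outcome == "win"]
for exp in pricing_wins:
    print(f"{exp.date}: {exp.hypothesis} (+{exp.lift:.0%})")
```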

The library now houses cumulative knowledge. Next you will convert that knowledge into sharper future experiments.

Apply learnings to sharpen future experiments

Start each sprint planning session with a five-minute scan of the library. Search for tests on the same page type or audience. Copy the winning logic into new hypotheses rather than guessing again.

If a past loss contradicts a new idea, demand stronger evidence before approving the build. This gate conserves design and development hours.

Every quarter, aggregate learning into a playbook of evergreen principles, such as “Using the buyer’s job title in call-to-action buttons lifts click-through on demo forms.” Share the playbook with marketing and product teams so experiments influence broader strategy.

The loop now feeds itself: learn, store, apply, and grow. A brief recap cements the habit.

Conclusion

Learning is the real asset of experimentation. Rigorous analysis secures valid results, a crisp story converts data into action, a living library safeguards insight and regular reviews turn past work into faster future wins.

Adopt this cycle and booked-meeting rates will climb with each sprint while team confidence soars. Your optimisation engine is now complete and ready to compound month after month.
