A proper hypothesis has three parts: current belief (what we think is true now), predicted outcome (what we expect to happen), and mechanism (why we think it'll happen).
Bad hypothesis: "Let's test adding testimonials to the landing page." No belief, no prediction, no mechanism. This isn't a hypothesis; it's just an action.
Good hypothesis: "Compliance-driven segment doubts that engaging training satisfies auditors. Adding testimonials from compliance officers at similar companies will reduce this doubt and improve lead conversion from 4% to 5%. The mechanism is social proof reducing risk perception."
Now you've stated: what you believe (compliance segment doubts auditor acceptance), what you expect (lead conversion rising from 4% to 5%), and why (social proof reduces risk perception). When you run the test, you can evaluate not just whether conversion improved, but whether the mechanism was correct.
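One way to keep this structure honest is to write the hypothesis down as data instead of prose. Here's a minimal sketch in Python (the class and field names are my own framing, not a standard tool), encoding the testimonial hypothesis above:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    belief: str             # what we think is true now
    predicted_outcome: str  # what we expect to happen
    mechanism: str          # why we think it'll happen
    metric: str             # the number the test should move
    baseline: float         # current value of the metric
    target: float           # value that counts as a win

testimonials = Hypothesis(
    belief="Compliance-driven segment doubts that engaging training satisfies auditors",
    predicted_outcome="Testimonials from compliance officers at similar companies reduce this doubt",
    mechanism="Social proof reduces risk perception",
    metric="lead conversion",
    baseline=0.04,
    target=0.05,
)
```

Forcing every hypothesis through the same fields makes it obvious when a "hypothesis" is really just an action: the belief, mechanism, and target fields come back empty.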
If conversion improves but exit surveys show people still doubt auditor acceptance, your mechanism was wrong. If conversion improves and surveys show increased confidence in auditor acceptance, your mechanism was right. Those are different lessons, and both inform future experiments.
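That two-by-two (did the metric move, was the mechanism confirmed) can be spelled out explicitly. A sketch, with interpretations that are my own shorthand rather than a standard taxonomy:

```python
def interpret(metric_improved: bool, mechanism_confirmed: bool) -> str:
    """Map a test result onto the four possible lessons."""
    if metric_improved and mechanism_confirmed:
        return "Win for the right reason: double down on this mechanism."
    if metric_improved and not mechanism_confirmed:
        return "Win for an unknown reason: keep the change, investigate why it worked."
    if not metric_improved and mechanism_confirmed:
        return "Mechanism moved but metric didn't: something else blocks conversion."
    return "No effect: revisit the underlying belief before testing new tactics."
```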
Example hypotheses for cybersecurity training:
1. "Proactive segment needs ROI proof to get budget approval. Adding an ROI calculator to the demo page will improve demo booking rate from 8% to 10% by giving them a tool to build the business case internally."
2. "Breach-reactive segment is in crisis mode and needs immediate deployment. Emphasising '30-minute setup' in ad headlines will improve CTR from 1.5% to 2% by addressing their urgency concern."
3. "SQLs aren't becoming opportunities because implementation seems complex. Offering a free pilot (3 users, 30 days) will improve SQL → opportunity conversion from 33% to 40% by reducing perceived risk."
Each hypothesis states a belief, predicts an outcome against a measurable target, and specifies a mechanism. This structure forces clarity about what you're testing and why.
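Measurable targets have a side benefit: you can check whether a test is even feasible before running it. Here's a standard two-proportion sample-size approximation, using a two-sided 95% confidence level and 80% power (my assumptions, adjust to taste), applied to the 4% → 5% lead-conversion hypothesis:

```python
from math import ceil, sqrt

def sample_size_per_arm(p1: float, p2: float,
                        z_alpha: float = 1.96,      # two-sided 95% confidence
                        z_beta: float = 0.8416) -> int:  # 80% power
    """Approximate visitors needed in each arm of an A/B test
    to detect a shift from baseline rate p1 to target rate p2."""
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

print(sample_size_per_arm(0.04, 0.05))  # ~6,745 visitors per arm
```

If the landing page won't see roughly 13,500 visitors across both arms in a reasonable window, the 4% → 5% prediction isn't testable as written, and the hypothesis needs a bigger predicted effect or a higher-traffic metric.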