Forecasting is a skill. Track accuracy and improve over time.
Record forecast vs actuals
At the start of each month or quarter, document your forecast:
Forecast for Q4 2025:
- Best case: £1.2M
- Most likely (weighted): £950k
- Worst case: £750k
- Commit: £850k (what you tell leadership you'll deliver)
Save this record. Don't change it mid-quarter.
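One way to keep the record from drifting mid-quarter is to freeze it at the moment you file it. A minimal sketch (the field names and `ForecastSnapshot` type are illustrative, not a standard schema):

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)  # frozen: fields cannot be reassigned after creation
class ForecastSnapshot:
    period: str
    best_case: int       # whole pounds
    most_likely: int
    worst_case: int
    commit: int          # the number you give leadership
    recorded_on: date    # when the snapshot was taken

q4 = ForecastSnapshot(
    period="Q4 2025",
    best_case=1_200_000,
    most_likely=950_000,
    worst_case=750_000,
    commit=850_000,
    recorded_on=date(2025, 10, 1),
)
```

Attempting to change a field mid-quarter (`q4.commit = 900_000`) raises an error, which is the point: the forecast you grade yourself against is the one you filed.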
Compare at period end
When the quarter closes, compare forecast to actual:
Q4 2025 Results:
- Actual closed: £920k
- Forecast (most likely): £950k
- Accuracy: 97%
If you consistently forecast within ±10% of actuals, you're accurate.
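The accuracy figure above is one minus the relative miss. A minimal sketch of that calculation and the ±10% check (the function name is an assumption):

```python
def forecast_accuracy(forecast: float, actual: float) -> float:
    """Accuracy as a percentage: 100% minus the relative miss vs forecast."""
    return (1 - abs(actual - forecast) / forecast) * 100

# Q4 2025 numbers from the example above
acc = forecast_accuracy(forecast=950_000, actual=920_000)
print(f"{acc:.0f}%")  # → 97%

# Within the ±10% band that counts as accurate
print(abs(acc - 100) <= 10)  # → True
```

Note the miss is taken as an absolute value, so a £30k overshoot scores the same as a £30k undershoot; both are forecast errors.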
Analyse forecast misses
When forecast was significantly wrong, analyse why:
Upside surprise (actual > forecast):
- Which unexpected deals closed?
- Why weren't they in forecast? (Moved faster than expected? New deals appeared and closed quickly?)
- Should you be more aggressive in forecasting?
Downside miss (actual < forecast):
- Which expected deals didn't close?
- Why not? (Slipped to next quarter? Closed-lost? Still negotiating?)
- Were close dates unrealistic?
- Were stage probabilities too optimistic?
- Did reps sandbag (hide pipeline to appear to overperform)?
Document learnings. Adjust forecasting methodology based on findings.
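A lightweight way to make the miss analysis systematic is to tag each missed deal with a reason and tally the tags across quarters; recurring reasons point at the methodology fix. A sketch with hypothetical deal names and reason labels:

```python
from collections import Counter

# Each missed deal tagged with a reason; labels and deals are illustrative
missed_deals = [
    ("Acme renewal", "slipped"),             # moved to next quarter
    ("Globex new logo", "closed_lost"),
    ("Initech upsell", "slipped"),
    ("Umbrella pilot", "still_negotiating"),
]

reasons = Counter(reason for _, reason in missed_deals)
print(reasons.most_common())
# → [('slipped', 2), ('closed_lost', 1), ('still_negotiating', 1)]
```

If "slipped" dominates quarter after quarter, close dates are the problem; if "closed_lost" dominates, stage probabilities or qualification are.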
Improve forecast process
Monthly forecast calls:
Hold forecast calls with each rep mid-month and month-end:
Questions to ask:
- Which deals will close this month?
- What gives you confidence they'll close?
- What could prevent them from closing?
- Are close dates realistic or aspirational?
- What activity supports progression (recent calls, emails, meetings)?
Push reps to justify their forecast with evidence, not gut feeling.
Commit vs best case vs most likely:
Teach reps to forecast three scenarios:
Commit: What I'm confident will close. I'll be fired if this doesn't happen.
Most likely: Weighted forecast. What data says will probably close.
Best case: If everything goes perfectly, this is the upside.
You give leadership the Commit number. If you beat it, great. If you miss it, you've failed.
Most likely is your internal working forecast. Best case shows potential upside if deals accelerate.
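The three scenarios fall out of the same pipeline. A minimal sketch, assuming a probability-weighted "most likely" and treating only near-certain deals (probability ≥ 0.9, an assumed threshold) as Commit; the deal values and probabilities are illustrative:

```python
# Pipeline as (deal value, close probability) pairs — illustrative numbers only
pipeline = [
    (300_000, 0.9),   # contract out for signature
    (400_000, 0.7),   # verbal agreement
    (250_000, 0.5),   # proposal stage
    (500_000, 0.2),   # early negotiation
]

commit      = sum(v for v, p in pipeline if p >= 0.9)  # only near-certain deals
most_likely = sum(v * p for v, p in pipeline)          # probability-weighted
best_case   = sum(v for v, p in pipeline)              # everything closes

print(commit, round(most_likely), best_case)
# → 300000 775000 1450000
```

The spread between Commit and best case is itself useful: a huge gap means the quarter hinges on uncertain deals.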
Forecast accuracy as performance metric
Track each rep's forecast accuracy over time.
Create a report: "Forecast accuracy by rep."
For each rep, compare:
- What they forecasted at start of month
- What they actually closed at month-end
- Accuracy percentage
Reps who are consistently accurate (±10%) are trustworthy. Reps who consistently over-forecast are either bad at qualifying or habitually optimistic. Reps who consistently under-forecast are sandbagging: holding back pipeline so they look good when they "beat" forecast.
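The per-rep report reduces to the same accuracy calculation applied per person, with a label for the miss direction. A sketch with hypothetical rep names and numbers:

```python
def accuracy(forecast: float, actual: float) -> float:
    """Accuracy as a percentage: 100% minus the relative miss vs forecast."""
    return (1 - abs(actual - forecast) / forecast) * 100

# Hypothetical month-end data: (rep, forecast at start of month, actual closed)
results = [
    ("Priya", 200_000, 195_000),
    ("Marcus", 150_000, 110_000),   # actual well below forecast
    ("Jo", 100_000, 140_000),       # actual well above forecast
]

for rep, forecast, actual in results:
    acc = accuracy(forecast, actual)
    if abs(acc - 100) <= 10:
        label = "accurate"
    elif actual < forecast:
        label = "over-forecast"
    else:
        label = "under-forecast (possible sandbagging)"
    print(f"{rep}: {acc:.0f}% ({label})")
```

Run monthly and track the label per rep over time; one bad month is noise, six months of "under-forecast" is a pattern.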
Coach reps toward accuracy, not conservatism or optimism.