OPERATIONS / APR 22, 2026 / 8 MIN READ

Why your sales forecast is wrong (and what to do about it).

Devansh Iyer
VP REVENUE · SURYAM LABS · BENGALURU

A typical mid-market revenue org misses its quarterly forecast by 20–40%, and the post-mortem usually produces three culprits: sandbagging reps, optimistic stage probabilities, and "the market". The real reason is simpler — and fixable.

Most pipeline-weighted forecasts work like this: take every open deal, multiply by its stage probability, sum the result, and call that the number. It feels rigorous. It's actually a fantasy.
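The calculation above can be sketched in a few lines. The deal values and stage probabilities below are illustrative placeholders, not real data from any CRM:

```python
# A minimal sketch of a pipeline-weighted forecast: every open deal
# multiplied by its stage probability, summed into one number.
# All figures here are made up for illustration.

STAGE_PROBABILITY = {
    "discovery": 0.10,
    "proposal": 0.40,
    "negotiation": 0.70,
    "contract": 0.90,
}

# (deal name, deal value, current stage)
deals = [
    ("Acme", 50_000, "negotiation"),
    ("Globex", 120_000, "proposal"),
    ("Initech", 30_000, "contract"),
]

weighted_forecast = sum(
    value * STAGE_PROBABILITY[stage] for _, value, stage in deals
)
print(round(weighted_forecast))  # 110000
```

Note what the sum hides: a 0.70 on one deal and a 0.70 on another are treated as interchangeable, even when one has a signed-off budget and the other has a champion who just left the company.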

The math is fine. The inputs are broken.

Stage probabilities are an aggregate over thousands of historical deals. The deal in front of you is not a thousand deals. It's one deal, with one champion, in one industry, with one specific reason it might not close. Aggregating away that detail is exactly what a forecast should not do.

If you can't write down why the deal will close, the probability is fiction.

What makes this worse: most CRMs reward reps for advancing stage, not for committing. So pipeline grows, weighted forecast looks healthy, and reality lands 30% lower because half the late-stage deals never had a real path to signature.

What to use instead

We've watched ~120 revenue teams move from pipeline-weighted to a model we now call committed + best-case + AI-weighted. Three numbers, generated three different ways, reconciled weekly:

  • Committed — what the rep verbally commits to. No probability math. Just: will this close?
  • Best case — what could land if the planets align. Reps are rewarded for accuracy here, not optimism.
  • AI-weighted — Tracket's model, looking at engagement, deal shape, customer fit, and historical patterns. No human input.

The committed line is your floor. The AI line is your ceiling. The gap between them is the conversation. When committed and AI agree, you ship. When they don't, you're either sandbagging or about to get a surprise — both worth knowing.
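The weekly reconciliation step can be sketched as a simple comparison. The function name, messages, and 10% tolerance below are assumptions for illustration, not Tracket's actual logic:

```python
# Hypothetical sketch of the weekly committed-vs-AI reconciliation.
# The tolerance threshold and flag messages are invented for this example.

def reconcile(committed: float, ai_weighted: float,
              tolerance: float = 0.10) -> str:
    """Compare the rep's committed line (floor) to the model's line (ceiling).

    A gap within the tolerance means the two lines agree; a gap outside it
    is the conversation the article describes.
    """
    gap = ai_weighted - committed
    if abs(gap) <= tolerance * committed:
        return "aligned: ship the committed number"
    if gap > 0:
        return "possible sandbagging: model sees more than reps commit"
    return "possible surprise: reps commit more than the model supports"

print(reconcile(committed=900_000, ai_weighted=1_150_000))
# → possible sandbagging: model sees more than reps commit
```

The point is not the threshold value; it's that the disagreement itself is surfaced every week instead of being averaged away.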

What we measured

Across the ~120 teams we observed for at least a quarter, median forecast accuracy went from 71% (pipeline-weighted) to 91% (the three-line model). Variance to plan dropped by 6 points. The committed line beat plan in 68% of quarters; in the 32% where it missed, leadership had a 14-day head start to react.

None of this is novel. It's just rigorous. The trick is to commit to the discipline of three lines, weekly, with reps' compensation tied to accuracy — not optimism.

If you take one thing from this

Stop weighting your pipeline. The number you compute is precise but wrong. Replace it with a discipline that asks reps to be accurate, then reward them for being accurate. The forecast becomes a tool. The CFO stops sweating. The CRO stops apologizing.

— TRACKET CRM SHIPS THIS MODEL OUT-OF-THE-BOX. SEE FORECASTING →
