Every year, the same ritual plays out inside B2B go-to-market organizations. Sales leadership gathers to forecast the coming fiscal year’s revenue. The process is straightforward: each account manager looks at their book of business, mentally assesses the health of each deal, and assigns a probability that the deal will close—often, though not always, based on which stage it is in. The numbers then trickle upward to region-level views and eventually land on the desk of the Chief Revenue Officer as a single, confident figure.

These forecasts are not insincere. Account managers have context that no dataset can replicate. They’ve sat across from the buyer, they’ve read the room, they know that the VP of Procurement at one account just got promoted or that another account’s finance team has been ghosting them since October. This intelligence is invaluable.
But when an account manager (or any of us, for that matter) says a deal will close at some probability, they are making a point estimate—a single scenario in which the world behaves exactly as they expect. When every account manager does this simultaneously, and the numbers are summed, the result is a forecast that assumes every account manager’s best guess is correct at the same time. This almost never happens. Some will be right, others will overshoot, and a few will be blindsided by churn no one saw coming. The forecast, in other words, is not really a forecast. It’s a single scenario.
This is the gap I set out to close during my time working with go-to-market executives. What follows is a framework for bridging the distance between a believable forecast and a probable one. I call it Two-Lens Forecasting. The core idea is simple: instead of choosing between the team’s human forecast and a data-driven model, we use both—one lens from forward-looking, on-the-ground intelligence and one from historical, probabilistic intelligence. Neither lens alone gives us the full picture, but together, they lock in a more precise target.
The individual methods we’ll talk about are not new. Monte Carlo simulation has been used in finance and engineering for decades. Beta distributions are a staple of Bayesian statistics. Weighted-pipeline forecasting is standard practice in most go-to-market organizations. What I believe is valuable is the specific combination: a dual-methodology simulation engine (one approach for renewals, a different one for new business and expansions) paired with a visual deep-dive framework (which we will call “pill charts”) that makes the output immediately useful to executives.
Lens one: The weighted-pipeline forecast
To make this concrete, consider a B2B company doing a revenue forecast for the coming fiscal year. The CRO has been given these numbers—$200 million of revenue is in active pipeline and the team believes they can land $155 million: $80 million from renewals, $38 million from new business, and $37 million from expansions. These totals are based on region-level forecasts derived with a weighted-pipeline approach:

Overall, the forecast’s message is this: there is $200 million of revenue that could be locked in and the team thinks they can close 78% of it. Sounds reasonable. But what if North America renewals all slip at the same time? That’s $30 million gone down the drain. What if deals in Europe, Asia, and South America are all won but all deals in North America are lost? That’s a total of $100 million won, a far cry from $155 million. Our gut instinct should tell us that such extremes are unlikely but not impossible. It should also make us wonder about the reverse: $155 million is possible, but how likely is it? This is where the weighted-pipeline forecast falls short and where we need a Monte Carlo simulation.
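Mechanically, the weighted-pipeline forecast is just a probability-weighted sum. A minimal sketch with two hypothetical deals (the amounts and probabilities are illustrative, not the article’s figures):

```python
# Weighted-pipeline forecast: each deal's forecast amount times its
# assigned close probability, summed. Both deals here are hypothetical.
deals = [
    {"amount": 1_000_000, "prob": 0.9},  # late-stage renewal, high confidence
    {"amount": 500_000, "prob": 0.4},    # early-stage new business
]
weighted_forecast = sum(d["amount"] * d["prob"] for d in deals)
print(f"${weighted_forecast:,.0f}")  # → $1,100,000
```

Note that the output is a single number: the calculation collapses every possible combination of wins and losses into one scenario.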
Lens two: Monte Carlo simulation
The core idea of Monte Carlo is disarmingly simple: instead of computing a single expected outcome, we simulate thousands of possible outcomes and look at the distribution of results.
Say we run this simulation across the entire $200 million pipeline—renewals, new business, and expansions combined—and get the following distribution:

The model’s most likely outcome is $107 million, not $155 million. The 80% confidence interval runs from $85 million (downside—things went badly but not catastrophically) to $130 million (upside—nearly everything went our way). The team’s $155 million sits in the top 5% of simulated outcomes. That doesn’t mean it’s impossible but we now know something we didn’t with the weighted-pipeline approach: $155 million requires strong performance across renewals and new business and expansions simultaneously. Barring exceptional luck, that outcome is unlikely. We need to pressure-test what’s making the (human) weighted-pipeline forecast so optimistic.
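Reading these figures off a simulation is a matter of percentiles. A sketch, using a stand-in array in place of the real simulated totals (in $M; the distribution below is illustrative, not the one described above):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the 10,000 simulated fiscal-year totals ($M). In
# practice this array comes out of the Monte Carlo engine itself.
totals = rng.normal(loc=107, scale=18, size=10_000)

# The 10th and 90th percentiles bound the 80% confidence interval;
# the 50th is the median outcome.
p10, p50, p90 = np.percentile(totals, [10, 50, 90])
print(f"80% interval: ${p10:.0f}M to ${p90:.0f}M, median ${p50:.0f}M")
```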
How the simulation works:
Renewals
Before we do that, let’s first address how we set up the Monte Carlo simulation correctly. We need to treat the three deal types—renewals, new business, and expansions—differently because they behave differently.
Renewals have a concrete plan base: the dollar amount up for renewal. The question is what percentage of that plan base the customer will actually renew. The most honest approach is to let history speak directly.
Say we pull 1,800+ historical renewal opportunities across two fiscal years and plot the distribution of renewal win rates. What we’d likely see is striking and bimodal: most deals either churned completely (0% of plan dollars renewed) or renewed fully (90–100% of plan dollars renewed), with a thin scattering of partial renewals in between. This becomes the empirical distribution Monte Carlo draws from: the actual, observed renewal rates from history.

But not all renewals are the same. Let’s say we test several stratification schemes and find that two variables emerge as the strongest predictors of renewal rates: region and deal size. Region sorts into three tiers—high-win, medium-win, and low-win—based on historical aggregate renewal rates. Deal size sorts into three buckets: small (under $20K), medium ($20K–$500K), and large ($500K+). Crossing these two dimensions produces a 3×3 grid of nine profiles, each with its own empirical distribution (histogram) of renewal rates.
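Building the grid is a bucketing-and-grouping exercise. A sketch with a four-row toy table standing in for the 1,800+ historical renewals:

```python
import pandas as pd

# Hypothetical historical renewal outcomes; the real table would have
# 1,800+ rows spanning two fiscal years.
hist = pd.DataFrame({
    "region_tier": ["high-win", "low-win", "medium-win", "high-win"],
    "plan_amount": [600_000, 12_000, 150_000, 30_000],
    "renewal_rate": [1.0, 0.0, 0.95, 0.0],
})

# Bucket deal size into the three tiers described above.
hist["size_bucket"] = pd.cut(
    hist["plan_amount"],
    bins=[0, 20_000, 500_000, float("inf")],
    labels=["small", "medium", "large"],
)

# Each (region tier, size bucket) profile keeps its own empirical
# distribution of observed renewal rates.
profiles = (
    hist.groupby(["region_tier", "size_bucket"], observed=True)["renewal_rate"]
    .apply(list)
)
print(profiles)
```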

For each renewal deal in the current fiscal year’s pipeline, the simulation proceeds as follows:
- Identify the deal’s profile (its region tier Ă— deal size tier).
- Sample a renewal rate at random from the empirical distribution of historical renewal rates for that profile.
- Multiply that sampled rate by the deal’s plan renewal amount to get the simulated renewal revenue.
- Repeat for every deal in the portfolio.
- Sum across all deals for one simulated total.
- Repeat the simulation ten thousand times.
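The steps above can be sketched in a few lines. The profile histories and pipeline below are hypothetical placeholders; real profiles would hold hundreds of observed rates each:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical empirical distributions of historical renewal rates,
# keyed by (region tier, deal-size bucket) profile. Note the bimodal
# shape: mostly full renewals or full churn.
historical_rates = {
    ("high-win", "large"): np.array([0.0, 0.9, 0.95, 1.0, 1.0, 1.0]),
    ("low-win", "small"): np.array([0.0, 0.0, 0.0, 0.85, 1.0]),
}

# Current-year renewal pipeline: each deal has a profile and a plan base.
pipeline = [
    {"profile": ("high-win", "large"), "plan": 2_000_000},
    {"profile": ("low-win", "small"), "plan": 15_000},
]

def simulate_renewals(pipeline, historical_rates, n_sims=10_000):
    totals = np.zeros(n_sims)
    for deal in pipeline:
        # One sampled renewal rate per simulated year, drawn from the
        # deal's profile-specific history, scaled by its plan base.
        rates = rng.choice(historical_rates[deal["profile"]], size=n_sims)
        totals += rates * deal["plan"]
    return totals

totals = simulate_renewals(pipeline, historical_rates)
print(np.percentile(totals, [10, 50, 90]))
```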
How the simulation works:
New Business and Expansions
Next come the new business and expansion deals. These usually don’t have a plan base to renew against. The relevant question is not “what percentage of an existing contract will be retained?” but “will this deal close, and if so, at what amount?”
This is where stage-based conversion rates come back into the picture, but with a probabilistic upgrade. Instead of taking the historical conversion rate as the only option, we can model each deal’s close probability using a Beta distribution, parameterized by the observed successes and failures at each stage.
Let’s say one of the deals we’re interested in is currently in an early-pipeline stage associated with a 40% win rate. The question we should be asking is: 40% of how many historical deals? 40 out of 100? Or 4 out of 10? Intuitively, we would have more confidence in the former.
The Beta distribution helps us encode this. If we’ve observed 40 wins and 60 losses at this stage historically, the Beta distribution would say something along the lines of, “The conversion rate is probably around 40%, but given our sample size, it could plausibly be anywhere from 32% to 48%.” If we’ve observed 4 wins and 6 losses, the Beta distribution would widen this interval: the conversion rate could be anywhere from 15% to 68%.
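The sample-size effect is easy to see by drawing from the two posteriors. A sketch using Beta(wins + 1, losses + 1), the posterior over the conversion rate under a uniform prior:

```python
import numpy as np

rng = np.random.default_rng(0)

# Same 40% point estimate, very different certainty depending on how
# much history backs it up.
for wins, losses in [(40, 60), (4, 6)]:
    samples = rng.beta(wins + 1, losses + 1, size=200_000)
    lo, hi = np.percentile(samples, [10, 90])  # 80% credible interval
    print(f"{wins} wins / {losses} losses: {lo:.0%} to {hi:.0%}")
```

The 4-wins-out-of-10 interval comes out several times wider than the 40-out-of-100 one, which is exactly the behavior the forecast needs.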

For each new business or expansion deal, the simulation works as follows:
- Identify the deal’s current stage.
- Sample a win probability from the Beta distribution associated with that stage.
- Simulate a binary outcome: the deal either closes (with probability = sampled rate) or doesn’t.
- If it closes, use the deal’s forecasted amount as the revenue contribution.
- Repeat for every deal.
- Sum across all deals for one simulated total.
- Repeat ten thousand times.
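The steps above can be sketched as follows. The stage histories, deal amounts, and stage names are hypothetical placeholders:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical (wins, losses) observed historically at each stage.
stage_history = {"early": (4, 6), "late": (40, 10)}

# Current new business / expansion pipeline.
pipeline = [
    {"stage": "early", "amount": 500_000},
    {"stage": "late", "amount": 250_000},
]

def simulate_new_business(pipeline, stage_history, n_sims=10_000):
    totals = np.zeros(n_sims)
    for deal in pipeline:
        wins, losses = stage_history[deal["stage"]]
        # Draw a win probability from the stage's Beta posterior, then
        # flip a coin with that probability for each simulated year.
        p = rng.beta(wins + 1, losses + 1, size=n_sims)
        closed = rng.random(n_sims) < p
        totals += closed * deal["amount"]
    return totals

totals = simulate_new_business(pipeline, stage_history)
print(np.percentile(totals, [10, 50, 90]))
```

Sampling the probability itself (rather than fixing it at the historical rate) is what lets thin stage histories widen the resulting distribution.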
This method captures something the weighted-pipeline revenue forecast completely misses: a portfolio of early-stage deals with thin historical data is fundamentally less predictable than a portfolio of late-stage deals with deep historical data, even if their weighted-pipeline values are identical. The Beta distribution’s sensitivity to sample size makes this explicit.
Two-lens forecasting:
The pill chart
Let’s go back to pressure-testing. Monte Carlo told us the most likely outcome is $107 million of revenue closed, and specifically that there is an 80% probability the actual value will land somewhere between $85 million and $130 million. The team, on the other hand, said they think they can close $155 million. Something is off, but we still don’t know why.
This is where we can visualize each forecasting lens via a “pill chart,” broken down by whichever data cut represents how the business runs its sales engine—in our case, by region.

When the vertical line (weighted-pipeline forecast) falls inside the pill (Monte Carlo forecast), the two approaches are aligned—the team’s projection is within the range that historical data suggests is plausible. When it falls outside the pill, a conversation is needed: what does the team know that the model doesn’t? Where exactly is the team’s assumption diverging from the model’s?
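A pill chart is simple to draw: one thick bar per region for the model’s interval, one tick for the team’s number. A matplotlib sketch, with placeholder per-region figures (only the South America pair echoes the numbers discussed below; the rest are invented for illustration):

```python
import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt

# Placeholder numbers ($M): Monte Carlo 10th/90th percentiles (the
# pill) and the weighted-pipeline forecast (the tick) per region.
regions = ["North America", "Europe", "Asia", "South America"]
pill_lo = [40, 18, 14, 5]
pill_hi = [62, 30, 24, 16]
team = [75, 25, 20, 30]

fig, ax = plt.subplots()
for i in range(len(regions)):
    # The pill: the model's 80% interval as a thick rounded bar.
    ax.plot([pill_lo[i], pill_hi[i]], [i, i], lw=12,
            solid_capstyle="round", alpha=0.5, color="tab:blue")
    # The team's weighted-pipeline forecast as a vertical tick.
    ax.plot([team[i]], [i], marker="|", markersize=24, color="black")
ax.set_yticks(range(len(regions)))
ax.set_yticklabels(regions)
ax.set_xlabel("Revenue ($M)")
fig.savefig("pill_chart.png")
```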
This framing is essential. The point is never to tell a sales leader that their forecast is wrong. The model’s output is the rearview mirror: it tells us what would happen if the future looked like the past. The team’s forecast is the windshield: it is informed by on-the-ground intelligence about buyers, deals, and market conditions.
In our case, Europe and Asia look good: the past and the future align. But North America warrants further investigation and South America looks unreasonably optimistic—the model’s best-case scenario is $16 million and the weighted-pipeline forecast is $30 million. To understand this, we double-click into the three deal types for these two regions and make another pill chart:

This decomposition surfaces insights that neither approach can produce alone. In North America, the renewals are on track; the team’s $30 million sits comfortably inside the model’s range. The gap is cleanly isolated to new business and expansions, where the team seems to have conviction that net new revenue (from new logos and expanded customers) will convert at higher rates than it has historically. Go-to-market executives now know they need to talk to account managers who cover North America and pressure-test their assumptions—are they certain of their conviction? Why? If they can explain, great. If not, we need to scale back the forecast.
South America tells a different and more concerning story. Three problems at once: the team’s renewals forecast of $14 million is nearly double the model’s best-case of $8 million. The new business forecast of $5 million is actually below what the model suggests is likely—the team may be leaving money on the table. And the expansions forecast of $11 million exceeds the model’s range of $5 million to $8 million. The executives now know the South America team needs to go back to the drawing board; either the team is seeing a profound behavioral shift in this region or the on-the-ground intelligence is scattered. If the former, the team might need to rethink the entire sales motion in this part of the world. If the latter, the executives might need to have tough conversations with the team.
Either way, the pill charts tell a clear story:
- Unless we have defensible conviction that new business and expansions deals will perform better in North America than they have historically and that a profound behavioral shift is happening in the South America market, the $155 million target is unattainable.
- If we have evidence in one but not the other, then a more realistic yet still ambitious target is closer to the 75th percentile of the Monte Carlo distribution—around $130 million.
- And if we have no defensible conviction for either—in other words, no evidence that the future will be different from the past—it’s best to stick to $107 million (the median) as the target.
Limitations
It would be a disservice to present this framework without addressing its blind spots. It has two, and naming them matters—a forecasting tool whose limitations we don’t understand is more dangerous than no tool at all.
The first is correlation. The Monte Carlo simulation treats each deal as an independent coin flip, but in reality, macroeconomic shocks, competitor moves, and budget freezes sweep through entire segments at once. The consequence is that the model’s confidence intervals are likely too narrow. The true tails are wider than the simulation suggests. Bridging the simulation with the weighted-pipeline forecast partially compensates for this: sales leaders who sense that “the whole market is tightening” are implicitly injecting correlation, even if they’d never use that word.
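A toy demonstration of why independence narrows the tails: 50 hypothetical deals at a 40% close rate, simulated once as independent coin flips and once with a crude shared market shock (all parameters here are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
n_deals, n_sims, p = 50, 10_000, 0.4

# Independent coin flips: each deal closes or not on its own.
indep = (rng.random((n_sims, n_deals)) < p).sum(axis=1)

# Shared-shock alternative: in each simulated year, one common market
# factor shifts every deal's close probability together.
shock = rng.normal(0.0, 0.15, size=(n_sims, 1))
corr = (rng.random((n_sims, n_deals)) < np.clip(p + shock, 0, 1)).sum(axis=1)

print(f"independent std: {indep.std():.1f}, correlated std: {corr.std():.1f}")
```

The correlated portfolio’s spread comes out far wider even though each deal’s average close rate is unchanged, which is exactly the tail-widening the independent model misses.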
The second is thin cells in the stratification grid. Not all nine cells in the 3Ă—3 matrix have equal historical depth. A corner cell might contain a few dozen data points; enough to sample from, but noisy enough that another year of data could meaningfully shift the distribution. Using the empirical distribution directly (rather than fitting a parametric model) avoids imposing false structure, but the honest answer is that some cells deserve an extra grain of salt.
The philosophical point
The two lenses represent two fundamentally different epistemologies. The human forecast starts from what the team believes will happen and works backward to a number. The Monte Carlo forecast starts from what has happened and works forward to a range. One is rooted in the future, the other in the past. Neither is right in isolation. A forecast built entirely on the past would ignore the intelligence salespeople accumulate through months of relationship-building, while one built entirely on the future would ignore the base rates that tend to reassert themselves. That’s what Two-Lens Forecasting is really about—not choosing between the past and the future, but insisting on both.