Total addressable market (TAM) is one of the first analytical exercises any B2B business undertakes, prompted by the question: how large is the universe of potential revenue if the business could sell to everyone who might plausibly want what it has to offer? This estimate is almost always computed top-down: start with an opportunity-sizing of the overall market, then apply filters for geography, segment, and product scope to arrive at the relevant slice. The result is a single large number, a sanity check for early strategic decisions and a framing device for investors who want to know whether the market is large enough to justify the business at all.
But TAM in this form is a monolithic construct. It produces one number, or at best a few numbers by segment, and it says nothing about the individual accounts (customers and prospects) that collectively make up that market. When we see a statistic like a $90 billion total addressable market in North and South America, we might want to know which individual accounts make up that $90 billion, or how the $90 billion is split across them. Or which industries have the deepest proven adoption. Or which territories have the most balanced potential across customers and prospects. Or whether the TAM can be split across sales reps and account managers, and if so, how we would go about it.
These were the questions I had to answer in my work with go-to-market (GTM) teams who needed a way to allocate customers and prospects across sales reps and account managers in a mathematically rigorous, optimizable way. To build a GTM-centric optimization model for territory assignments, I needed a single, comparable measure of revenue potential for every account out there. Not a rough estimate based on firmographics, not an arbitrary score, but a number that meant the same thing across industries and company sizes—a number grounded in actual observed behavior. That number, as it turned out, was TAM, just a different flavor of it; one built from scratch, account by account, using the customer behavioral data accumulated from years of operating the business. I called it the Modular TAM.
What is Modular TAM?
Modular TAM is a framework for computing a defensible revenue potential for every account in the market—customers and prospects alike—using benchmarks derived from the company’s own customer data. It is calibrated specifically to each company’s product, observed penetration patterns, and historical pricing, which is precisely what makes it defensible and precisely why it requires real data to build.
It’s modular because of its structure: four independent components, each answering a distinct question about the account, each calibrated separately. We can update the penetration benchmark without touching the pricing benchmark, swap the data cut from one feature to another without rebuilding the formula, and so on. The model evolves as the data grows.
Each account gets its own TAM value, grounded in the behavior of comparable accounts in the customer base, and those values can be rolled up however needed—by industry, region, territory, customer versus prospect—to produce a bottom-up market view where every dollar traces back to a specific account and a specific set of benchmarked assumptions. That traceability is what makes the framework useful for territory assignment optimization: when every account carries a comparable, modeled dollar value, territory assignment becomes a proper optimization problem with real inputs.
What we need to build this
The framework requires four categories of data for the company’s customers and prospects: firmographics (employee count, industry, revenue, and so on, available from vendors like ZoomInfo or from public filings); product usage data, specifically active users per account from the company’s product database; financial data, meaning revenue per account from the billing system; and growth rates, from data vendors or historical snapshots.
The quality and scope of the customer and prospect data matters enormously. Two to three years of customer history across a few dozen accounts in multiple industries is the practical minimum for computing benchmarks that are worth trusting. The more accounts, the more years, and the more industry diversity in that base, the tighter and more defensible the benchmarks become. Notice the implication—this is not a methodology to use before that foundation exists. But once it does, the methodology is fully computable from data that most mature B2B businesses already have sitting in their systems.
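To make those inputs concrete, here is a minimal sketch of the account record the framework consumes. The field names are illustrative assumptions, not a prescribed schema; the last three fields come from the company's own systems and stay empty for prospects.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Account:
    """One record per account; customer-only fields remain None for prospects."""
    name: str
    industry: str                        # firmographics (data vendor or public filings)
    employees: int                       # firmographics
    annual_growth_rate: float            # e.g. 0.10 for 10% headcount growth
    company_revenue: float               # the account's own annual revenue, in dollars
    all_time_active_users: Optional[int] = None     # from the product database
    all_time_revenue: Optional[float] = None        # total revenue ever earned, from billing
    current_annual_revenue: Optional[float] = None  # what the account pays us per year today
```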
Modular TAM formula
The core equation has four components, multiplied together:
Unbounded TAM = Addressable Users × Penetration Rate × Revenue per User per Year × T
Each component answers a different question about the account. Addressable users answers how many people at this account could realistically use the company's product. Penetration rate, expressed as a percentage, answers what fraction of those addressable users will actually be provisioned, benchmarked by industry (or another meaningful feature) from the company's customer data. Revenue per user per year answers what the company will earn annually per provisioned user, also benchmarked by industry (or another meaningful feature). And projected years (T) sets the time horizon over which we want to size the opportunity.
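The core equation translates directly into code. A minimal sketch, assuming the penetration benchmark is expressed as a percentage, as in the formula above:

```python
def unbounded_tam(addressable_users: float,
                  penetration_rate_pct: float,
                  revenue_per_user_per_year: float,
                  years: float) -> float:
    """Core Modular TAM equation: four independent components multiplied together."""
    return (addressable_users
            * (penetration_rate_pct / 100.0)
            * revenue_per_user_per_year
            * years)

# e.g. 5,000 addressable users, 12% benchmark penetration, $300/user/year, 5 years:
# unbounded_tam(5000, 12.0, 300.0, 5) -> 900,000.0
```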
Addressable users: who can use the company’s product?
Addressable Users = Current Addressable Users × (1 + Annual Growth Rate)^T
If the company's product is something that anyone in any role could use (productivity software, payroll tools, a company-wide learning platform), then for addressable users we can simply use the account's total employee count from the firmographics data. If it serves a more specific function, like a CRM for salespeople or an IDE for engineers, then addressable users is that team's headcount, which we either source from a data vendor or estimate as a percentage of total employees based on industry norms.
The growth projection is compound: today’s addressable users multiplied by (1 + annual growth rate) raised to the power of T, where the growth rate is typically proxied by headcount growth, publicly available for large companies and estimable from data vendors for smaller ones.
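A sketch of both cases; the role_share parameter is a hypothetical name for the target team's share of headcount, sourced from a vendor or industry norms:

```python
def addressable_users(total_employees: int, role_share: float = 1.0) -> float:
    """role_share = 1.0 for products anyone in the company could use; otherwise
    the target team's estimated share of headcount."""
    return total_employees * role_share

def projected_addressable_users(current_users: float,
                                annual_growth_rate: float,
                                years: float) -> float:
    """Compound growth: today's addressable users x (1 + growth rate)^T."""
    return current_users * (1 + annual_growth_rate) ** years
```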
This is the one component of the framework that does not require customer data. We can compute addressable users for a prospect the company has never sold to just as easily as for an existing customer. The next two components are fundamentally different: they require a benchmarking step that starts from the company's own customer base, which is exactly why the two-to-three-year data prerequisite matters.
Benchmarks and the generalized mean
Before turning to penetration and pricing individually, it is worth understanding the mathematical tool that powers both of them, because both benchmarks use the same formula—the generalized mean—just with different values of a single parameter.
The generalized mean is a family of averages, controlled by an exponent p:
M_p(x_1, …, x_n) = ( (x_1^p + x_2^p + … + x_n^p) / n )^(1/p)
When p = 1, we get the arithmetic mean, the familiar average that treats all observations equally. When p = 3, which we can call the cubic mean, the formula raises each value to the third power before averaging, then takes the cube root of the result. This amplifies larger values before the average is computed, producing a benchmark that leans toward the high end of observed values. When p = ⅓, the cube root mean, the opposite happens: smaller values get amplified, and the benchmark leans toward the conservative end of the distribution.
The insight that makes this useful here is that the choice of p encodes a strategic stance directly into the mathematics. For penetration, we want to be ambitious. The question the benchmark should answer is: given what we have actually observed in accounts comparable to this one, how deep could penetration realistically go? The cubic mean (p = 3) answers this by giving more weight to the accounts in our customer base where deep adoption has already been proven. It does not extrapolate beyond observed data; it simply rewards the signal that already exists.
For pricing, we want to be conservative. The question is different: not what have our best accounts ever paid, but what will this account actually pay? The cube root mean (p = ⅓) pulls the benchmark toward the lower end of observed pricing, protecting the model from being inflated by a handful of enterprise accounts that pay premium rates most accounts will never match.
Ambitious on penetration. Conservative on pricing. Same formula, different exponent.
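The formula is a few lines of code. The comments show how the three exponents behave on a small set of penetration rates (the values are made up for illustration):

```python
def generalized_mean(values: list[float], p: float) -> float:
    """Generalized (power) mean: ((x_1^p + ... + x_n^p) / n)^(1/p)."""
    return (sum(x ** p for x in values) / len(values)) ** (1 / p)

rates = [6.0, 9.0, 18.0]        # illustrative penetration rates, in percent
generalized_mean(rates, 1)      # 11.0  -- arithmetic mean, treats all accounts equally
generalized_mean(rates, 3)      # ~13.1 -- cubic mean, leans toward proven deep adoption
generalized_mean(rates, 1 / 3)  # ~10.3 -- cube root mean, leans conservative
```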
Penetration rate: how deep could this customer or prospect go?
The benchmarking process for penetration has two steps. First, we compute the current penetration rate for each existing customer account: all-time active users divided by addressable users today, multiplied by 100. This gives us one percentage per account—a concrete, observed measure of how deeply the company’s product has been adopted.
Penetration Rate = ( All-Time Active Users / Current Addressable Users ) × 100
Second, we benchmark by the data cut that best differentiates penetration rates across our customer base. We test candidate attributes (industry, region, ownership, business model, and so on) and select the one along which penetration varies most. If healthcare accounts consistently penetrate at 18% and technology accounts at 6%, industry is doing real explanatory work and becomes our cut. Within each group, we compute the cubic mean of the individual account penetration rates, arriving at one benchmark per group: a number that leans toward the accounts where deep adoption has been proven, but remains entirely grounded in observed behavior.
When we encounter any account, be it a prospect the company has never sold to or an existing customer we are sizing for expansion, we look up their group, retrieve the benchmark, and plug it in. The benchmark answers a specific question: given accounts like this one in our customer base, how deep could penetration realistically go?
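A minimal sketch of the two-step process, reusing the Account record, addressable_users helper, and generalized_mean from the earlier sketches, and assuming industry turned out to be the winning cut:

```python
from collections import defaultdict

def penetration_benchmarks(customers: list[Account], p: float = 3.0) -> dict[str, float]:
    """Step 1: per-account penetration rate. Step 2: cubic mean (p=3) per industry."""
    by_industry: dict[str, list[float]] = defaultdict(list)
    for acct in customers:
        rate = acct.all_time_active_users / addressable_users(acct.employees) * 100
        by_industry[acct.industry].append(rate)
    return {ind: generalized_mean(rates, p) for ind, rates in by_industry.items()}

# Lookup for any account, customer or prospect: benchmarks[account.industry]
```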
Revenue per user per year: what will they actually pay?
The pricing benchmark follows the same structure, with the opposite exponent.
Revenue per User per Year = Revenue Earned from Account / Active Users at Account
First, we calculate revenue per user for each customer account. There are two reasonable approaches: the all-time view, which divides total revenue ever earned from the account by total users ever active, and the trailing-12-month view, which uses recent revenue and recent active users. The all-time view smooths out fluctuations; the trailing view reflects current pricing more accurately. Either works; the right choice depends on how much the company’s pricing has evolved over the period in question.
Second, we benchmark via cube root mean (p = ⅓) using the same data cut we identified for penetration. Within each group, the cube root mean of the per-user revenue values pulls the benchmark toward the lower end of observed pricing, creating a conservative counterpart to the ambitious penetration benchmark.
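The same pipeline with the opposite exponent, sketched here using the all-time view; swapping in the trailing-12-month view only changes the two input fields:

```python
def pricing_benchmarks(customers: list[Account], p: float = 1 / 3) -> dict[str, float]:
    """Per-account revenue per user (all-time view), then cube root mean per industry."""
    by_industry: dict[str, list[float]] = defaultdict(list)
    for acct in customers:
        rev_per_user = acct.all_time_revenue / acct.all_time_active_users
        by_industry[acct.industry].append(rev_per_user)
    return {ind: generalized_mean(vals, p) for ind, vals in by_industry.items()}
```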
Bounding the TAM: ceiling and floor
Even with carefully benchmarked inputs, the unbounded formula can produce numbers that are not grounded in financial reality. A Fortune 500 company with 200,000 employees might produce an Unbounded TAM of $50 million—technically consistent with the math—but that number is meaningless if the buying department’s entire annual budget is $3 million.
Ceiling = Department Budget % × Product Category Share % × Cumulative Account Revenue over T years
Floor = β × Current Annual Revenue from the Account
The ceiling is derived from the account's financials, working downward in two steps. We estimate the relevant department's budget as a percentage of total company revenue, then estimate our product category's share of that department budget. Multiplying both percentages by the account's cumulative revenue over T years gives us the maximum the account would realistically spend on a product like ours. The cumulative revenue calculation matters here: a company generating $1 billion today at 10% annual growth produces substantially more cumulative revenue over five years (about $6.1 billion) than a flat extrapolation of 5 × $1 billion would suggest.
The floor is simpler: β multiplied by the current annual revenue the company earns from the account, where a reasonable β value is 2. If the model produces a TAM lower than what an account is already paying us, something has gone wrong in the inputs, and the floor catches it. Note that this works for prospect accounts as well: their annual revenue with the company is $0 (they haven't been signed yet), so the floor is also $0.
The final TAM for each account is therefore:
Final TAM = max ( Floor, min ( Unbounded TAM, Ceiling ) )
Bounded below by what we have already proven the account will spend, and above by what the account can realistically afford.
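Putting the bounds into code, as a sketch reusing the Account record from earlier: the department-budget and category-share percentages are modeler-supplied assumptions, and cumulative revenue is compounded from the account's own growth rate.

```python
def cumulative_revenue(current_revenue: float, growth_rate: float, years: int) -> float:
    """The account's own revenue, compounded and summed over T years."""
    return sum(current_revenue * (1 + growth_rate) ** t for t in range(years))

def final_tam(unbounded: float, acct: Account, years: int,
              dept_budget_share: float,  # e.g. 0.02: department budget as % of revenue
              category_share: float,     # e.g. 0.15: our category's share of that budget
              beta: float = 2.0) -> float:
    """Final TAM = max(Floor, min(Unbounded TAM, Ceiling))."""
    cum_rev = cumulative_revenue(acct.company_revenue, acct.annual_growth_rate, years)
    ceiling = dept_budget_share * category_share * cum_rev
    floor = beta * (acct.current_annual_revenue or 0.0)  # $0 for prospects
    return max(floor, min(unbounded, ceiling))

# Sanity check from the text: cumulative_revenue(1e9, 0.10, 5) -> ~6.1 billion, not 5.
```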
What Modular TAM enables
With a modeled dollar value attached to every account, from customers to prospects, several things become possible that a market-level TAM cannot support.
Account prioritization becomes data-driven rather than instinct-driven: we can rank prospects by Modular TAM and direct sales development effort toward the accounts with the largest modeled opportunity, not just the most recognizable logos.
Whitespace analysis, for existing customers, becomes concrete. The gap between an account's current annual revenue and its Modular TAM is the expansion opportunity, and that gap is invisible without this framework. An account paying $200K with a TAM of $2M has eighteen times the whitespace of one paying $200K with a TAM of $300K ($1.8M versus $100K), but in a standard GTM analysis, those two accounts are completely indistinguishable.
Territory design, as noted at the outset, is where Modular TAM was built to operate. When every account has a comparable dollar value, territory assignment becomes an optimization problem in the formal sense: balance territories by total TAM, ensure each sales rep or account manager carries a comparable opportunity—or place more skilled people against higher-TAM books of business—while controlling for geography and segment. That optimization is tractable precisely because the inputs are consistent and at account-level.
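As one illustration of why account-level values make this tractable, the greedy heuristic below balances territories by total TAM alone. This is only a sketch of the objective; a production model would add geography and segment constraints, typically via integer programming.

```python
def balance_territories(tam_by_account: dict[str, float],
                        n_territories: int) -> list[list[str]]:
    """Greedy sketch: assign accounts, largest TAM first, to the lightest territory."""
    territories: list[list[str]] = [[] for _ in range(n_territories)]
    loads = [0.0] * n_territories
    for name, tam in sorted(tam_by_account.items(), key=lambda kv: -kv[1]):
        i = loads.index(min(loads))     # territory with the least TAM so far
        territories[i].append(name)
        loads[i] += tam
    return territories
```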
And forecast calibration becomes possible in a new way: by comparing pipeline projections against TAM-derived ceilings, we can surface cases where a forecast exceeds what the model says is plausible for a given account and prompt the right conversations before it’s too late.