FintastIQ

The NRR Benchmark: Retention Math Across B2B, Marketplace, and Subscription Models

How to define, measure, and benchmark retention correctly when your commercial archetype is not pure B2B SaaS, and what to stop doing when a borrowed benchmark tells the wrong story.

The best operators compete on discipline, not instinct.
FintastIQ · House View


Most retention benchmarks were built for pure B2B SaaS. Most operators running recurring revenue are not. When the math and the model drift apart, the benchmark becomes a decoy. Rebuild measurement from the archetype, not the industry average.

TL;DR

  • NRR means different things in B2B SaaS, marketplaces, subscription services, and hardware-plus-consumables. One definition across all four produces wrong targets.
  • Four retention lenses matter: logo, gross revenue, net revenue, and cohort value. Operators who report only one are hiding the other three.
  • Quarter-blob averages mask cohort behaviour. Retention is a cohort metric.
  • Benchmark against archetype peers. A two-sided marketplace is not comparable to a seat-based SaaS vendor, no matter how similar the revenue number looks.
  • Expansion math papers over contraction when the two are not split. Report gross and net separately, every time.

The problem with borrowed benchmarks

Kenji runs revenue at Keel and Compass Exchange, a B2B2B freight-tech marketplace that matches 14,000 independent owner-operator truckers with 2,300 shippers and brokers. The business is 200 people, $180M of annualized platform revenue, and $1.1B of GMV. 38% of revenue is take-rate on freight moved; 62% is recurring SaaS fees for dispatch, billing, and compliance tools.

Kenji's board asked for an NRR number. He produced 118%. Then a director asked how that compared to peers. SaaS reports quoted medians around 108% for mid-market vendors. Marketplace reports quoted GMV retention around 75% per cohort. Services reports quoted recurring-fee retention net of scope changes. None of the three matched what Keel and Compass actually was.

When the team rebuilt the math from the archetype up, a different picture emerged:

  • Gross revenue retention across carriers: 84%
  • Net revenue retention across shippers: 112%
  • GMV retention per cohort: 71%
  • Logo retention among shippers spending more than $500K annually: 94%
  • Logo retention among carriers spending less than $50K annually: 48%
  • Expansion revenue from cross-module attach: 18%
  • Net contraction in the bottom-quartile carrier cohort: 7%
  • Median tenure per cohort: 21 months

The 118% headline was the average of a two-sided business with five distinct retention curves. It was not wrong in arithmetic. It was wrong as a signal. Pricing is a signal before it is a number, and retention math is the health signal underneath the price.

The four-part framework

1. Define retention for your archetype before you benchmark

The first mistake is defining retention generically. The second is benchmarking before you have defined it. Write it down, explicitly.

For B2B SaaS, the definition is ARR retained from a contracted cohort over twelve trailing months, excluding new logos from the numerator, excluding one-time revenue, and reporting gross and net separately. For a marketplace, retention splits into GMV retention per side, take-rate stability, and platform-fee retention for any subscription layer. For recurring services, the definition is recurring-fee revenue net of scope changes, with project or implementation revenue kept out of the numerator. For hardware-plus-consumables, the primary metric is reorder rate at twelve and twenty-four months.
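The B2B SaaS definition above can be written down as a few lines of arithmetic. A minimal sketch, assuming a simple `{customer_id: ARR}` snapshot of one cohort at the start and end of the trailing twelve months; the customer IDs and dollar figures are invented for illustration:

```python
# Hedged sketch of the B2B SaaS retention definition: ARR retained from a
# contracted cohort, new logos excluded from the numerator, gross and net
# reported separately. All names and figures here are invented.

def saas_retention(start_arr, end_arr):
    """start_arr / end_arr: {customer_id: ARR} at t0 and t0 + 12 months.
    New logos (ids absent from start_arr) never enter the numerator."""
    base = sum(start_arr.values())
    # Gross: no credit for expansion, so cap each account at its starting ARR.
    gross = sum(min(end_arr.get(c, 0.0), arr) for c, arr in start_arr.items())
    # Net: expansion within the cohort counts; new logos still do not.
    net = sum(end_arr.get(c, 0.0) for c in start_arr)
    return gross / base, net / base

start = {"a": 100.0, "b": 50.0, "c": 50.0}    # cohort ARR at t0
end = {"a": 130.0, "b": 40.0, "new": 80.0}    # 12 months later; "c" churned
grr, nrr = saas_retention(start, end)
print(round(grr, 3), round(nrr, 3))  # prints 0.7 0.85
```

The capping in the gross line is the whole difference between the two numbers: "a" expanded from 100 to 130, which lifts net but not gross.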

Keel and Compass ended up with four definitions in parallel. Platform-fee NRR across shippers. Platform-fee NRR across carriers. GMV retention per carrier cohort. GMV retention per shipper cohort. Each answers a different operating question. The discipline is keeping them separate, not collapsing them because the board is impatient.

Confusion is the enemy of willingness to pay, and confusion about retention produces muddled pricing. If leadership cannot agree on what number they are defending, they defend it by discounting.

2. Separate the four retention lenses

Every recurring-revenue business should report four distinct retention numbers. Logo retention, gross revenue retention, net revenue retention, and cohort value retention.

Logo retention counts customers. It says nothing about revenue weight. Keel and Compass saw 94% logo retention among large shippers and 48% among small carriers. The investment decision follows from the combination, not either number alone.

Gross revenue retention measures revenue kept from a starting cohort, with no credit for expansion. It is the cleanest product-holds-its-value signal. Keel and Compass ran 84% GRR across carriers, which is below the platform peer median and points at stickiness gaps in the carrier-side tooling.

Net revenue retention adds expansion. Keel and Compass ran 112% NRR on the shipper side, driven by 18% cross-module attach. That is a healthy number, but only because shipper gross retention was 91%. If gross had been 78%, the 112% net would have been hiding real contraction.

Cohort value retention tracks the dollar value of a specific cohort over its lifetime, normalized to its starting value. It maps most directly to LTV and to the valuation multiple at exit. Run it on every cohort at twelve, twenty-four, and thirty-six months.
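The four lenses share one input: a snapshot of the same cohort at two points in time. A hedged sketch of the first three (cohort value retention is the net figure tracked at fixed twelve-, twenty-four-, and thirty-six-month checkpoints), with invented account values:

```python
# Sketch: logo, gross, and net retention computed from one acquisition
# cohort. Account IDs and revenue figures are invented for illustration.

def retention_lenses(start, now):
    """start / now: {customer_id: revenue} for one cohort at t0 and later."""
    base = sum(start.values())
    logo = sum(1 for c in start if now.get(c, 0.0) > 0) / len(start)
    gross = sum(min(now.get(c, 0.0), v) for c, v in start.items()) / base
    net = sum(now.get(c, 0.0) for c in start) / base
    return logo, gross, net

start = {"s1": 500.0, "s2": 300.0, "s3": 200.0}
month_12 = {"s1": 650.0, "s2": 250.0}  # s3 churned, s1 expanded, s2 contracted
logo, gross, net = retention_lenses(start, month_12)
print(round(logo, 2), gross, net)  # prints 0.67 0.75 0.9
```

Note how the three numbers disagree on the same data: two thirds of logos survive, three quarters of starting revenue holds, and expansion pulls net back up toward 90%. That disagreement is the point of reporting all of them.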

3. Run retention per cohort, not per quarter blob

A quarterly NRR average is a weighted mixture of cohorts at different tenures. It moves with new-logo volume, expansion timing, and churn timing. You cannot operate from it.

Retention is a cohort metric. Define the cohort by signup month or quarter. Compare cohort N at month twelve to cohort N-1 at month twelve. That tells you whether onboarding is improving, whether the product is stickier, whether pricing is holding.

Keel and Compass discovered the carrier cohort from the quarter they rolled out a new dispatch UI had a fourteen-point higher twelve-month GMV retention than the prior cohort. The blended NRR number never showed this. The cohort view made it obvious.
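The cohort-over-cohort comparison reduces to a small lookup against a cohort ledger. A sketch, assuming a `{(cohort, month): GMV}` table with invented values chosen to mirror the fourteen-point example:

```python
# Compare each acquisition cohort to its predecessor at the same tenure,
# rather than blending them into one quarterly number. Ledger values are
# invented to mirror the fourteen-point dispatch-UI example.

def cohort_retention(ledger, cohort, month):
    """ledger: {(cohort, month): cohort GMV}; retention vs month 0."""
    return ledger[(cohort, month)] / ledger[(cohort, 0)]

ledger = {
    ("2023-Q1", 0): 10_000_000, ("2023-Q1", 12): 5_700_000,  # pre-UI cohort
    ("2023-Q2", 0): 12_000_000, ("2023-Q2", 12): 8_520_000,  # new dispatch UI
}
prev = cohort_retention(ledger, "2023-Q1", 12)  # 0.57
curr = cohort_retention(ledger, "2023-Q2", 12)  # 0.71
print(f"{(curr - prev) * 100:.0f}pt improvement at month 12")
```

A blended quarterly number built from the same ledger would average the two curves together with every older cohort, which is exactly how the improvement stays invisible.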

The best operators compete on discipline, not instinct. Cohort retention is a discipline. It takes longer to produce and it resists the temptation to tell a cleaner story than the data supports.

4. Benchmark against archetype peers, not industry averages

When the benchmark comes from outside the archetype, the target is noise. Use peer groups that share commercial mechanics. A two-sided marketplace benchmarks against other two-sided marketplaces. A B2B2B platform benchmarks against other platforms with the same two-layer structure. A recurring services business benchmarks against peers with comparable scope characteristics.

The shortcut of using a generic SaaS benchmark is tempting because those reports are public. The shortcut costs you correct targets. Keel and Compass bought a custom benchmark from a marketplace-focused analyst group. That benchmark put their 71% per-cohort GMV retention in the top third of freight-tech marketplaces, not the bottom half of SaaS vendors.

Pricing maturity is measured by what you stop doing. Stop cross-archetype benchmarking. Stop publishing a single blended NRR number. Stop defending targets you cannot map back to commercial mechanics.

Three failure modes

Archetype-blind benchmarking. The operator quotes a SaaS median against a marketplace, sets the wrong target, and drives the wrong interventions. Classic symptom: a marketplace CRO trying to "fix NRR" with customer success spend when the real issue is a supply-side GMV retention gap.

Logo retention vanity. The team reports 94% logo retention. Revenue is concentrated in twelve accounts. Losing two drops gross revenue retention by nineteen points. Logo retention was never the right headline. It was a supporting metric that got promoted because it flattered the team.

Expansion math papering over contraction. The blended NRR is 115%. Gross retention is 82%. Expansion is coming from four large accounts. The rest of the book is contracting. When those four plateau, the blended number collapses. Report gross and net separately and this becomes visible eighteen months before it becomes a crisis.
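This failure mode is detectable mechanically once gross and net are split. A sketch that decomposes blended NRR into gross retention plus expansion and measures how concentrated that expansion is; the accounts and values are invented to match the 115%/82% pattern above:

```python
# Sketch: split blended NRR into gross retention plus expansion, then check
# whether the expansion is concentrated in a handful of accounts. All
# account names and values are invented.

def decompose_nrr(start, end):
    """start / end: {account: revenue} for one cohort at t0 and t0 + 12mo."""
    base = sum(start.values())
    gross = sum(min(end.get(c, 0.0), v) for c, v in start.items()) / base
    expansions = [end[c] - v for c, v in start.items() if end.get(c, 0.0) > v]
    net = gross + sum(expansions) / base
    top4 = sum(sorted(expansions, reverse=True)[:4])
    top4_share = top4 / sum(expansions) if expansions else 0.0
    return gross, net, top4_share

start = {"a": 150, "b": 150, "c": 150, "d": 150, "e": 50, "f": 50,
         "g": 50, "h": 50, "i": 50, "j": 50, "k": 50, "l": 50}
end = {"a": 232.5, "b": 232.5, "c": 232.5, "d": 232.5,
       "e": 20, "f": 50, "g": 50, "h": 50, "i": 50}  # j, k, l churned
gross, net, top4_share = decompose_nrr(start, end)
print(f"gross {gross:.2f}, net {net:.2f}, top-4 expansion share {top4_share:.0%}")
```

Here the blended 1.15 looks healthy, but every dollar of expansion comes from four accounts while the long tail churns and contracts, which is the eighteen-months-early warning the text describes.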

30-60-90 sprint

Days 1-30. Write the retention definition for your archetype. One page, circulated to the board. Include the denominator, the cohort window, and the gross-net split. Pull the last eight quarters and rebuild the four retention lenses from raw transactions. If you cannot rebuild it, you cannot defend it.

Days 31-60. Stand up cohort reporting by month of acquisition. Track forward at months 3, 6, 12, 18, 24. Produce a cohort heatmap leadership reads weekly. Retire the quarter-blob NRR number from internal operating reviews.

Days 61-90. Buy or build an archetype-specific benchmark. Compare rebuilt lenses to the peer set. Identify the one cohort and lens furthest from median. Put a named owner on it with a ninety-day target. One lens, one owner, one target.

Frequently Asked Questions

Why do B2B SaaS NRR benchmarks mislead operators of marketplaces and subscription services?

The math has different denominators. SaaS NRR measures ARR from a fixed cohort of contracted customers. Marketplace retention measures GMV from a fluid pool of buyers and sellers, where a single large participant can swing the number by ten points. Services retention measures recurring-fee revenue that may carry variable scope. When a marketplace operator benchmarks against a SaaS median of 110%, the target is not comparable to the mechanics of the business. The target has to be built from the archetype.

What is the difference between gross revenue retention and net revenue retention?

Gross revenue retention (GRR) measures what you kept from the starting cohort, excluding expansion. It cannot exceed 100%. Net revenue retention (NRR) adds expansion, so it can exceed 100%. GRR tells you whether the product holds. NRR tells you whether the account grows. Healthy businesses watch both. Reporting only NRR lets expansion from a few large accounts paper over contraction in the long tail, which is a common failure mode in mixed commercial models.

How should a marketplace define retention when buyers and sellers both churn?

Separate the two sides. Measure GMV retention per seller cohort and per buyer cohort. Then measure take-rate revenue retention, which is a function of both. At Keel and Compass, carrier GMV retention and shipper GMV retention move independently, so a single blended number hides the mechanics. The useful view is a two-by-two: gross and net retention on each side, benchmarked to peer marketplaces.

What cohort window makes retention math stable?

Twelve-month trailing for most archetypes. For hardware-plus-consumables with long reorder cycles, eighteen to twenty-four months. For consumer subscription brands with monthly churn, rolling three-month cohorts aggregated into twelve-month views. The quarter blob is the enemy. It masks the cohort you need to see, which is the customers who joined together and whose behaviour is comparable.

What retention number actually predicts enterprise value at exit?

For pure B2B SaaS, net revenue retention is the cleanest predictor. For marketplaces, it is GMV retention per cohort combined with take-rate stability. For subscription services, it is recurring-fee retention net of scope creep. For hardware-plus-consumables, it is reorder rate at the twelve and twenty-four month marks. A buyer pays a multiple for durable revenue. The definition of durable depends on what the customer is buying.

How do you separate expansion revenue from price increases in NRR?

Tag every dollar of expansion at the point of invoice. Unit expansion (more seats, more volume) goes in one bucket. Module or cross-sell expansion goes in a second. Price-increase expansion goes in a third. If you cannot split these three, your NRR number mixes operational growth with pricing action. Boards that see only the blended number miss the signal that the business is buying expansion through price rather than usage.
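The three-bucket split can be sketched as a simple invoice-level tagger. The invoice shape and tag names here are assumptions for illustration, not a prescribed schema:

```python
# Sketch of invoice-level expansion tagging into the three buckets described
# above: unit expansion, module/cross-sell, and price increase. The invoice
# fields and tag names are invented for illustration.

from collections import defaultdict

invoices = [
    {"account": "a", "delta": 40_000, "tag": "unit"},        # more seats/volume
    {"account": "b", "delta": 25_000, "tag": "cross_sell"},  # new module attach
    {"account": "a", "delta": 10_000, "tag": "price"},       # list-price increase
]

buckets = defaultdict(float)
for inv in invoices:
    buckets[inv["tag"]] += inv["delta"]

print(dict(buckets))  # expansion dollars per lever
```

The point of tagging at the invoice, rather than reconstructing later, is that the split is unrecoverable once the dollars are blended into a single NRR figure.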

What is the right retention target for a B2B2B distribution platform?

Build it from the two layers. The platform layer (what you charge the distributor or reseller) carries one retention curve. The end-customer layer (what the distributor charges their buyer) carries another. A healthy B2B2B platform has both above the archetype median with a clear expansion path at each layer. Borrowing a B2B SaaS 115% target for the platform layer ignores the end-customer retention that drives the distributor's willingness to keep paying you.

When does logo retention mislead and when does it matter?

Logo retention misleads when ACVs are highly skewed, which is true in most marketplace and B2B2B models. Losing 200 small carriers and keeping 10 large shippers can produce 94% logo retention on the shipper side and 48% on the carrier side, with very different revenue consequences. Logo retention matters when cohorts are homogeneous and the strategic goal is breadth of footprint rather than revenue concentration. Use it as a supporting metric, not the headline.




About the Author(s)

Emily Ellis is the Founder of FintastIQ and the Growth Operating System. She has 20 years of experience leading pricing, value creation, and commercial transformation initiatives for PE portfolio companies and high-growth businesses, with prior leadership experience at McKinsey and BCG.


Apply the framework

Ready to apply this to your business?

Take the free assessment. Thirty questions, ten minutes, a scored read-out of where your commercial system is tight and where it is leaking.

Take the free assessment, or book a working session.