FintastIQ


Packaging Tiers Designed with Data — Not Instinct

2025-03-06

At some point in your company's history, someone decided that feature X belonged in the "Better" tier and feature Y belonged in "Best." Do you know who made that decision? Do you know what evidence they used? For most SaaS companies, the answer to both questions is no. Pricing decisions made in year one based on a napkin sketch are still governing revenue in year four.

Guessing is not a sin. But continuing to guess once you have the data to do better is expensive.

The P&L Impact

Instinct-based packaging creates a particular kind of margin leak: slow, invisible, and systematically attributed to the wrong cause.

When a sales rep discounts a deal by 22%, the instinct is to blame the rep. In reality, consistent 22% discounting is evidence that your tiers are not priced at the value your buyers recognize. The rep is discounting to close the gap between what you say the product is worth and what the buyer believes it is worth. That gap is a packaging problem, not a sales problem.

When a customer churns after eight months in your "Good" tier, the instinct is to blame the product or customer success. In reality, early churn in a specific tier is almost always a feature-fit problem. The buyer was sold on a job the "Good" tier does not fully complete. They tried to make it work for eight months and gave up. That is a packaging problem, not a product problem.

A $15M annual recurring revenue (ARR) SaaS company running instinct-based packaging will typically have 18-24% average discount rates and 15-22% annual churn in at least one tier. Together those two numbers represent a 35-40% gap between the company's theoretical ARR and what it actually collects. The gap is not inevitable. It is the cost of guessing.
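One simple way to see how those two leaks compound, assuming the effects multiply rather than add (a sketch, not a benchmark model):

```python
# Hypothetical illustration: how average discounting and tier churn
# compound into a gap between theoretical and collected ARR.
# The input figures are the midpoints of the ranges cited above,
# not data from any specific company.

def realized_arr_gap(avg_discount: float, annual_churn: float) -> float:
    """Fraction of theoretical ARR lost if discounting and churn
    compound multiplicatively: collected = (1 - d) * (1 - c)."""
    return 1 - (1 - avg_discount) * (1 - annual_churn)

# 21% average discount, 18.5% annual churn in the affected tier.
gap = realized_arr_gap(0.21, 0.185)
print(f"{gap:.1%}")  # ≈ 35.6%, squarely inside the 35-40% range
```

Neither leak looks fatal on its own; the multiplication is what puts the combined gap past a third of theoretical ARR.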

How to Work the Problem

Moving from instinct to evidence takes three deliberate steps.

Step 1: Run a feature usage audit. Pull your product analytics and segment feature usage by tier and by account health score. For each feature in each tier, answer two questions: What percentage of accounts in this tier have used this feature at least once in the past 90 days? What is the difference in net revenue retention between accounts that have adopted this feature and those that have not?

Features with low adoption in a tier and low net revenue retention (NRR) correlation are either in the wrong tier or are features that do not drive value. Both are fixable. Features with high adoption and high NRR correlation are your tier's core value drivers. Your pricing page should lead with them, not bury them in a feature comparison table.

Step 2: Map your pocket price waterfall. Your list price is fiction. Your pocket price, after discounts, credits, extended terms, and bundled add-ons, is reality. Pull every closed deal from the past 12 months and calculate the actual realized price by tier. You will almost certainly find that your "Better" tier and "Best" tier overlap significantly in realized price, meaning buyers get "Best" value at "Better" prices through negotiation.

That overlap tells you exactly where your deal desk governance is failing and where your tier differentiation needs to be sharper.
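A minimal version of the Step 2 calculation, assuming your CRM export gives you list price, discount, and credits per deal (field names here are hypothetical):

```python
# Sketch of the pocket price waterfall: derive realized price per deal,
# then compare realized-price ranges across tiers to spot overlap.
# Deal records are fabricated for illustration.
from statistics import mean

deals = [
    {"tier": "Better", "list": 24000, "discount": 0.10, "credits": 500},
    {"tier": "Better", "list": 24000, "discount": 0.05, "credits": 0},
    {"tier": "Best",   "list": 36000, "discount": 0.30, "credits": 2000},
    {"tier": "Best",   "list": 36000, "discount": 0.25, "credits": 1000},
]

def pocket_price(deal):
    """List price minus negotiated discount, minus credits."""
    return deal["list"] * (1 - deal["discount"]) - deal["credits"]

by_tier = {}
for d in deals:
    by_tier.setdefault(d["tier"], []).append(pocket_price(d))

for tier, prices in by_tier.items():
    print(tier, round(min(prices)), round(max(prices)), round(mean(prices)))
# When the top of the "Better" band approaches the bottom of the
# "Best" band, buyers are negotiating "Best" down into "Better" prices.
```

In this toy data the "Better" band tops out near where the "Best" band begins, which is exactly the overlap pattern the waterfall is meant to expose. Extended payment terms and bundled add-ons would subtract further stages from the same waterfall.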

Step 3: Conduct eight win/loss interviews. Four with buyers who chose your "Best" tier and four with buyers who chose a competitor over your "Best" tier. Ask one question in each interview: what specifically justified the price difference between your tiers in your mind? The answers will tell you more about your tier perception than any amount of internal analysis.

Where Teams Get Stuck

A $22M ARR marketing automation company had not updated their pricing tiers in three years. Their CEO was confident the tiers were fine because their sales team had never explicitly complained about them.

What the data showed was different. A feature usage audit revealed that 67% of "Good" tier accounts were not using the primary feature that justified the "Good" price point. The feature was in the wrong tier: it required a workflow maturity level that "Good" buyers did not have yet. Meanwhile, it was the most-cited reason "Better" and "Best" buyers renewed.

Moving that feature from "Good" to "Better" reduced "Good" tier churn from 31% to 18% in 90 days because the buyers who remained in "Good" were now the segment actually capable of using the tier. "Better" tier conversion increased 9% because the feature added genuine pull.

The company had three years of data sitting in their product analytics that would have shown this clearly. Nobody had looked at it through a tier lens.

Priorities for the Week

Pull your feature usage report for the past 90 days and segment it by tier. You are looking for two things: features with adoption below 30% within a tier, and features where high-adoption accounts have meaningfully better NRR than low-adoption accounts.

Those two signals are your starting point for a data-driven tier redesign. If you do not have product analytics set up to answer these questions, that is your actual first action item.

Take the free packaging diagnostic at assess.fintastiq.com to see how your tier structure compares to patterns across similar-stage SaaS companies.

For the complementary view on what the data should tell you about ROI, read How to Measure the ROI of SaaS Pricing Tiers. For the failure patterns that emerge when instinct drives packaging decisions too long, read The Failure Case of B2B SaaS Packaging.

Frequently Asked Questions

What data should you use to design SaaS pricing tiers?
Use four data sources: product usage analytics by customer segment, pocket price data from closed deals in your CRM, net revenue retention by cohort segmented by tier, and win/loss interview data from the past six months. These four together give you a complete picture of whether your current tiers reflect actual buyer behavior.
How do you know if your SaaS pricing is driven by data or instinct?
If you cannot point to a specific data source that justifies each tier boundary and each feature placement, your pricing is based on instinct. The most reliable test: ask your sales team why a feature is in a specific tier. If the answer is 'that is just how it has always been,' you have identified a gap.
