FintastIQ

Sharpening Willingness-to-Pay Research Instincts

· 2026-01-27

When a B2B company wants to understand willingness to pay, the instinct is to ask customers. Survey them. Run a Van Westendorp. Interview the best accounts. All of that research produces data that points in the same direction and is reliably wrong in the same direction: it understates what customers would actually pay.

This isn't a methodological failure. It's a structural one. Customers who know you're conducting pricing research have every incentive to report lower numbers. There's no cost to anchoring low in a survey. There's a potential benefit if prices are influenced by the data. Your research is producing strategically useful numbers for your buyers, not accurate numbers for your pricing decisions.

The Revenue at Stake

Acting on survey-based WTP research costs you in three ways that are difficult to see from inside the research process.

First, you under-price durable segments. If your WTP research tells you your core ideal customer profile (ICP) will pay $500 per seat per month when the revealed WTP from their actual behavior is $680 to $750, and you price at $525 with a small premium over the survey number, you're leaving $175 to $225 per seat per month on the table permanently. In a $20M annual recurring revenue (ARR) business with 400 accounts averaging six seats each, that's $5M to $6.5M in suppressed ARR from a single pricing decision.
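The arithmetic behind that claim is worth making explicit. A quick sketch, using only the figures from the example above (the constants and function name are illustrative, not a real model):

```python
# Illustrative figures from the example: 400 accounts, 6 seats each,
# and a $175-$225 per-seat-per-month gap between survey-based and revealed WTP.
ACCOUNTS = 400
SEATS_PER_ACCOUNT = 6

def annual_suppression(per_seat_monthly_gap: float) -> float:
    """ARR left on the table per year at a given per-seat monthly underpricing."""
    return per_seat_monthly_gap * SEATS_PER_ACCOUNT * ACCOUNTS * 12

low = annual_suppression(175)   # -> 5,040,000
high = annual_suppression(225)  # -> 6,480,000
print(f"Annual ARR suppression: ${low:,.0f} to ${high:,.0f}")
```

At 2,400 total seats, even a $25 per-seat gap compounds to $720K of ARR a year, which is why the decision is expensive long before it looks wrong.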

Second, you create pricing anchors that are hard to dislodge. Once you publish a price point based on survey data, your customers treat that price point as the norm. Subsequent increases are measured against the published anchor. The further you are from the true market-clearing price at launch, the harder it is to recover the ground through incremental increases without disrupting the relationship.

The third cost is over-investment in the wrong features. When survey respondents list the features that would justify a higher price, they're also anchoring their answers to what they believe would sound reasonable, not what they'd actually upgrade for. You build the feature. The upgrade rate doesn't materialize. The product investment doesn't compound into pricing power.

The Working Model

Accurate WTP research starts from behavioral data, not survey data.

Step 1: Analyze your existing transaction data for revealed WTP signals. Pull your last 24 months of renewals. What percentage renewed without any price negotiation? At what price points does the negotiation rate increase sharply? The price point where 80% of customers renew without negotiation is your current floor WTP. The price where 40% begin to push back is your current ceiling. This range is more accurate than any survey.

Step 2: Run a controlled price test on new customer cohorts. Offer your standard offering at two price points to matched cohorts of new prospects. Track conversion, not just response. What prospects say they'd pay in a survey differs from what they actually sign at. Behavioral evidence from a price test, even a small one, is worth more than any amount of survey data.
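One straightforward way to read the Step 2 cohort results is a standard two-proportion z-test on signed deals. This stdlib-only sketch (the function name and argument shape are mine) returns the conversion rates, the z-statistic, and a two-sided p-value:

```python
import math

def conversion_lift(a_signed, a_total, b_signed, b_total):
    """Two-proportion z-test comparing conversion across two price-point cohorts.

    Returns (rate_a, rate_b, z, two_sided_p). A small p with similar rates at
    a higher price is behavioral evidence that the higher price clears.
    """
    p_a, p_b = a_signed / a_total, b_signed / b_total
    pooled = (a_signed + b_signed) / (a_total + b_total)
    se = math.sqrt(pooled * (1 - pooled) * (1 / a_total + 1 / b_total))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, z, p
```

If the higher-price cohort converts at a statistically indistinguishable rate, the test has revealed WTP above the lower price point without asking anyone a survey question.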

Step 3: Use upgrade and add-on behavior as a continuous WTP signal. Customers who upgrade spontaneously, who add on features without being asked, and who expand usage without a dedicated upsell motion are demonstrating WTP above your current price. Track these behaviors at the segment level. Your highest-WTP segments are often not your largest accounts. They're your most outcome-oriented buyers.
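Step 3 reduces to a segment-level ranking: what share of each segment's customers expanded without being prompted? A minimal sketch, assuming an event log of (customer_id, segment, event_type) tuples where every event is self-initiated (names and shapes are hypothetical):

```python
from collections import defaultdict

def expansion_signal(events, customers_by_segment):
    """Rank segments by the share of customers showing unprompted expansion.

    events: iterable of (customer_id, segment, event_type) where event_type is
    something like "self_serve_upgrade", "addon_purchase", or "usage_expansion",
    none of which came from a dedicated upsell motion.
    customers_by_segment: {segment: total customer count}.
    """
    expanders = defaultdict(set)  # segment -> distinct expanding customers
    for customer_id, segment, _event_type in events:
        expanders[segment].add(customer_id)
    return sorted(
        ((seg, len(expanders[seg]) / customers_by_segment[seg])
         for seg in customers_by_segment),
        key=lambda pair: pair[1],
        reverse=True,
    )
```

The segments at the top of this ranking are the ones demonstrating WTP above your current price, regardless of account size.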

Where the Plan Breaks

A learning management SaaS at $14M ARR ran a comprehensive WTP study before a pricing redesign. The study included 200 customer surveys, 15 in-depth interviews, and a Van Westendorp analysis. Results showed an acceptable price point of $320 per seat, up from the current $265.

They repriced to $310, comfortable they were within the research range. Twelve months later, a competitor entered at $380 per seat with comparable feature depth and was winning deals. Exit interviews from three lost competitive deals all mentioned price as a non-factor: the competitor's higher price was perceived as a signal of better quality, not a deterrent.

The actual WTP was significantly higher than the survey had suggested. They'd used survey data to price cautiously into a range that was below market clearing.

Before: WTP survey at $320 ceiling, repriced to $310, competitor at $380 winning on perceived quality.

After: They ran a behavioral analysis using renewal-without-negotiation data and an add-on attach analysis. Evidence pointed to a price ceiling of $420 to $480. They repriced to $399 with a clear value message. Conversion held. ARR grew from $14M to $21M over 18 months.

Steps for This Quarter

Pull your renewal data for the last 12 months. Calculate the percentage of renewals that proceeded without any negotiation, discount request, or pricing pushback at each price tier.

If more than 75% of your customer base is renewing without pushback, you have room to raise prices. The data is already in your systems. You don't need a survey to confirm it.
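The two steps above can be sketched in a few lines, assuming renewal records shaped as (tier, had_pushback) pairs (the 75% threshold is the heuristic from the paragraph above; the function name is mine):

```python
from collections import defaultdict

def price_headroom(renewals, threshold=0.75):
    """Clean-renewal rate per tier and overall, plus a headroom flag.

    renewals: iterable of (tier, had_pushback) pairs, where had_pushback covers
    any negotiation, discount request, or pricing objection at renewal.
    Returns (rate_by_tier, overall_clean_rate, has_headroom).
    """
    by_tier = defaultdict(lambda: [0, 0])  # tier -> [clean renewals, total]
    clean = total = 0
    for tier, had_pushback in renewals:
        by_tier[tier][1] += 1
        total += 1
        if not had_pushback:
            by_tier[tier][0] += 1
            clean += 1
    rates = {tier: c / n for tier, (c, n) in by_tier.items()}
    overall = clean / total
    return rates, overall, overall > threshold
```

If the headroom flag comes back true across your last 12 months of renewals, the behavioral case for a price increase already exists in your own systems.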

Take the FintastIQ pricing assessment to get a structured read on what your renewal data is telling you about pricing power.

You can also explore how WTP evidence connects to pricing architecture in Why Your Instincts Are Wrong About SaaS Pricing Strategy and how usage patterns reveal value in Why Your Instincts Are Wrong About Usage-Based Pricing Models.

Frequently Asked Questions

Why do customer surveys understate willingness to pay?
When you ask customers what they'd be willing to pay, they face no real consequence for their answer. The socially rational response is to state a number lower than their actual limit, because lower stated WTP creates negotiating room if prices do rise. Behavioral economics research consistently shows that stated WTP from surveys is 30% to 60% below revealed WTP from actual purchasing decisions.
What is revealed willingness to pay and how do you measure it?
Revealed WTP is what customers actually demonstrate through their behavior: the price at which they renewed without objection, the tier they chose when given a genuine choice, the features they paid to add on without being asked. You measure it by analyzing your own transaction data: renewal behavior, upgrade patterns, add-on attach rates, and the distribution of your pocket price at deal close.

Find out where your commercial gaps are.

Take the Free Assessment →