FintastIQ


What Rigorous Willingness-to-Pay Research Looks Like

2026-02-20

You ran a pricing survey. Ninety-two percent of respondents said they found your current price "reasonable" or "very reasonable." You raised prices 15% based on the survey results. Twenty-two percent of the prospects in your next month's demos objected to pricing before you'd finished the discovery call. Your survey told you what buyers say. Your pipeline is telling you what buyers do.

The Financial Exposure

Bad willingness-to-pay research produces two categories of error with opposite commercial impacts. Overstated WTP leads you to set prices too high, which creates friction, extends deal cycles, and produces unnecessary discounting as reps work around the pricing resistance. Understated WTP leads you to set prices too low, which leaves sustainable margin on the table and, over time, attracts buyers who wouldn't pay more and trains your market to expect a certain price point.

The understatement error is far more common and far more expensive in B2B software. McKinsey research consistently finds that B2B software companies are underpriced relative to buyer willingness-to-pay by 10-25%, driven primarily by pricing research that relies on stated preference surveys and competitive benchmarking. Both methods consistently understate WTP for two reasons: survey respondents anchor to your current price rather than to the specific outcomes the product delivers, and competitive benchmarks measure what others charge, not what buyers would pay for superior value.

For a $30M annual recurring revenue (ARR) business that's 15% underpriced relative to actual WTP, the cumulative revenue loss over a three-year period is roughly $15M. The fix doesn't require new products, new markets, or new customers. It requires research methodology that produces accurate numbers rather than reassuring ones.
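The arithmetic behind that figure can be sketched in a few lines. The 10% annual ARR growth rate below is an illustrative assumption, not a claim from the article; with flat ARR the three-year gap is $13.5M, and modest growth pushes it toward $15M:

```python
# Back-of-envelope model of revenue foregone by underpricing.
# The 10% annual ARR growth rate is an illustrative assumption.
def foregone_revenue(arr, underpricing, years, growth=0.10):
    total = 0.0
    for _ in range(years):
        total += arr * underpricing  # revenue left on the table this year
        arr *= 1 + growth            # ARR compounds into the next year
    return total

gap = foregone_revenue(arr=30e6, underpricing=0.15, years=3)
print(f"${gap / 1e6:.1f}M")  # → $14.9M, i.e. "roughly $15M"
```

The point of the model is that the gap is pure margin: every parameter is something you already know about your own business.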

The Playbook

Three research methods produce WTP data that's actually actionable.

Step 1: Run systematic deal debrief interviews. Your closed and lost deals are the most accurate WTP dataset available to you, and they're already paid for. Build a structured debrief protocol: for every closed deal, ask the buyer what alternatives they considered, what price points they explored, and what would have caused them to choose differently. For every lost deal, ask what they bought, at what price, and what would have had to be different for them to choose you. Twenty well-conducted deal debriefs will give you more accurate WTP signal than 200 email surveys.

Step 2: Use van Westendorp analysis with behavioral anchoring. The van Westendorp Price Sensitivity Meter asks buyers four price-point questions that map to "too cheap," "cheap," "expensive," and "too expensive" price perceptions. On its own, it is subject to hypothetical-overstatement bias. With behavioral anchoring, in which each question is tied to a specific scenario with quantified business outcomes rather than an abstract price, the methodology becomes significantly more accurate. Include specific outcome data in your scenarios: "at this price, you'd eliminate approximately X hours of manual process per week." Anchored scenarios reduce the hypothetical overstating gap by 40-60%.
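As a minimal sketch of the standard Price Sensitivity Meter calculation (function name and sample responses are hypothetical), the "optimal price point" falls where the share of respondents calling a price too cheap equals the share calling it too expensive:

```python
import numpy as np

def optimal_price_point(too_cheap, too_expensive):
    """Price where the descending 'too cheap' curve crosses the
    ascending 'too expensive' curve (the PSM optimal price point).
    Inputs: each respondent's answer to the two extreme questions."""
    tc, te = np.array(too_cheap), np.array(too_expensive)
    prices = np.unique(np.concatenate([tc, te]))  # sorted candidate prices
    pct_too_cheap = np.array([(tc >= p).mean() for p in prices])
    pct_too_exp = np.array([(te <= p).mean() for p in prices])
    return prices[np.abs(pct_too_cheap - pct_too_exp).argmin()]

# Four hypothetical respondents' answers (per-seat monthly price)
print(optimal_price_point([40, 50, 60, 100], [90, 120, 150, 160]))  # → 90
```

The "cheap" and "expensive" answers bound the acceptable price range the same way; anchoring changes the questions asked, not this arithmetic.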

Step 3: Run controlled price tests on new business. For a market segment where you have regular deal flow, testing two price points for 90 days with a holdout approach gives you behavioral WTP data that no survey can match. This works best when your deal source is inbound and your segment is reasonably homogeneous. If your win rate and average discount are statistically similar across two price points, your higher price point is your new floor. Most companies never run this test because it requires accepting temporary win rate uncertainty. The companies that do run it consistently find their floor is higher than their current price.
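"Statistically similar" can be checked with a standard two-proportion z-test. The deal counts below are hypothetical, and 90 days of inbound flow is rarely enough volume to detect small differences, so treat this as a coarse screen rather than a definitive verdict:

```python
from math import sqrt

def win_rates_similar(wins_a, deals_a, wins_b, deals_b, z_crit=1.96):
    """Two-proportion z-test at ~95% confidence: True means the two
    price points' win rates are statistically indistinguishable."""
    p_a, p_b = wins_a / deals_a, wins_b / deals_b
    pooled = (wins_a + wins_b) / (deals_a + deals_b)  # pooled win rate
    se = sqrt(pooled * (1 - pooled) * (1 / deals_a + 1 / deals_b))
    return abs(p_a - p_b) / se < z_crit

# Hypothetical 90-day test: 22/60 wins at the current price,
# 18/55 wins at a price point ~12% higher
print(win_rates_similar(22, 60, 18, 55))  # → True
```

A `True` result under these assumptions is the signal the step describes: the higher price point becomes your new floor.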

The Breakdown

A $16M ARR vertical SaaS company commissioned a pricing survey before a planned price increase. The survey found that 80% of respondents would find a 20% price increase "acceptable." The company implemented 18% price increases across the renewal base and on new business.

New business win rate dropped 14 percentage points in the following quarter. Renewal acceptance rate held at 88%, but the 12% that resisted had previously been the company's highest-expanding accounts. The CFO concluded that the survey had been wrong and rolled back prices on new business to the prior level.

A post-mortem revealed three problems: the survey respondents were predominantly current customers who had self-selected into the survey (positive selection bias); new prospects had no relationship-based goodwill cushioning the price perception; and the 20% WTP number from the survey had never been anchored to specific outcomes, so it measured stated tolerance, not behavioral tolerance.

Before: Survey showing 80% acceptance of 20% increase, 18% increase implemented, 14-point win rate drop on new business.

After: After redesigning the WTP research using deal debrief interviews and anchored van Westendorp analysis, a segmented increase was implemented: 18% on renewal (where relationship goodwill supported it) and 7% on new business (where research showed a more conservative tolerance for new buyers). New business win rate recovered to pre-increase levels within two quarters.

The root cause was WTP research methodology that measured stated preference in a biased sample rather than behavioral tolerance in the full buyer population.

Your Week Ahead

Pull your last 10 lost deals. Contact the former prospects and ask one question: "What did you ultimately purchase, and what was the price?"

If the prices paid are consistently higher than what you quoted, you have an underpricing problem that your current WTP research isn't capturing. If they're consistently lower, you have a value communication problem. Both findings are actionable. Neither shows up in standard win/loss reporting.
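The decision rule above can be sketched as a few lines over (quoted, paid) pairs from your lost-deal calls. The ±10% bands and sample figures are illustrative assumptions, not benchmarks:

```python
from statistics import median

def lost_deal_diagnosis(deals):
    """deals: list of (our_quote, price_paid) pairs from lost-deal
    debriefs. Classifies via the median ratio of paid to quoted;
    the +/-10% bands are illustrative thresholds."""
    m = median(paid / quote for quote, paid in deals)
    if m > 1.10:
        return "underpricing: buyers consistently paid more elsewhere"
    if m < 0.90:
        return "value communication: buyers chose cheaper alternatives"
    return "quotes roughly in line with what buyers paid"

# Hypothetical debrief data from three lost deals
deals = [(40_000, 52_000), (35_000, 41_000), (50_000, 61_000)]
print(lost_deal_diagnosis(deals))
```

Ten data points won't be statistically rigorous, but a consistent direction across all ten is exactly the signal standard win/loss reporting misses.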

Assess Your Commercial Health to identify the specific WTP research gaps affecting your current pricing decisions.

For how WTP research connects to pricing model design, see The Failure Case of Enterprise Software Pricing. For the usage-based pricing dimension of WTP research, read The Failure Case of Usage-Based Pricing Models.

Frequently Asked Questions

Why is most willingness-to-pay research wrong?
Most WTP research asks buyers what they would pay in a hypothetical scenario. Hypothetical price tolerance consistently overstates actual price tolerance by 15-40% because buyers feel no financial consequence for naming a higher number. Research methods that reveal WTP through observed behavior (deal debrief analysis, conjoint studies, van Westendorp analysis with anchored scenarios) produce materially more accurate estimates.
What's the fastest way to get useful willingness-to-pay data?
Your closed deals and lost deals are your fastest WTP dataset. Run structured deal debrief interviews with the last 20 deals you closed and the last 10 deals you lost. For closed deals, ask what other options the buyer considered and what price points they explored. For lost deals, ask what they ultimately bought and at what price. This dataset gives you more actionable WTP signal than any survey in a fraction of the time.
