Learn How to Raise Prices Without Losing Customers from the Person Who Has Done It at Adobe, Loom and Otter.ai

Dec 9, 2025 • 12 min read

Aanchal Parmar, Product Marketing Manager, Flexprice

There's one message we dread sending more than anything: the price increase email.

But here's the truth: SaaS prices are increasing whether you announce changes or not. In 2025, SaaS costs hit $9,100 per employee, up 15% in just two years. 

At Salesforce, 72% of growth now comes from price increases, not new customers. AI inference costs, infrastructure expenses, and team salaries aren't getting cheaper.

If you're afraid of upsetting customers, you're not alone. Even leaders from companies such as Adobe, Loom, and Otter.ai face this issue daily.

So we sat down with Naveen Mohan, Otter.ai's Head of Monetization, who has navigated pricing at all three companies, to break down the exact framework he uses to test price changes without sparking churn.

Why 2025 Is Forcing Every SaaS Company to Revisit Pricing

You can't price-and-forget anymore. The economics have fundamentally shifted.

1. 9% Increase in SaaS Spend Per Employee

SaaS costs now hit $9,100 per employee, up from $7,900 in 2023, a 15% jump in just two years. But here's what's driving founder anxiety: customers are experiencing "SaaS fatigue" as budgets grow only 2.8% while prices surge 9-25%.

Your customers are getting squeezed. Their finance teams are auditing every subscription. And you're caught in the middle, needing to raise prices while knowing they're already cutting tools.

Naveen sees this playing out in real time: "Sometimes we're just too cautious and keep prices static. The mentality is changing across the industry, but in so many discussions I've been part of, founders are still understandably nervous about making pricing changes."

And while you're frozen in fear, analysts recorded more than 300 pricing and packaging changes in the prior year, with continued momentum into 2025. Pricing changes are now routine, not exceptional.

2. AI/LLM Costs Are Changing Faster Than You Think

If you've shipped AI features in the last 12 months, your cost structure is now a moving target.

Naveen's approach to this volatility: "Given how fast everything is moving, I’ve found it most useful to think in one-year time scales. You still plan long-term, of course, but for costs and actual building, a one-year outlook tends to be the most realistic."

Why one year? Because: "Older models have definitely gotten cheaper, but newer models haven't followed the same pattern—their costs are holding fairly steady. And since teams often want the performance gains from the newer models, it's no longer safe to assume pricing will just keep dropping. That makes long-range forecasting less predictable."

In other words, you can't wait for AI costs to magically decrease. GPT-4 might get cheaper, but GPT-5 won't. You're on a treadmill, and your pricing needs to account for that.

3. CFOs Are Normalizing 10–15% Annual Increases

Here's the counterintuitive good news: the taboo around price increases is fading. Companies that implement smaller increases every 12-18 months see 15-20% higher lifetime value.

Because customers have adjusted their expectations. They're not shocked by price increases anymore—they're shocked by sticker shock. Three years of frozen pricing that suddenly jumps 40% feels like betrayal; a steady 10% annual adjustment feels like inflation.

Think about it: your product six months ago isn't your product today. Your customer base in Q1 isn't your customer base in Q4. Why would your pricing stay frozen?

4. Competitors Are Bundling AI Into Plans and Raising Prices

Remember when your AI feature was going to be the differentiator? That window closed fast.

In 2024, 339 pricing changes were tracked among top SaaS companies—and a significant portion involved bundling AI into core plans while raising base prices 15-30%. What was supposed to be a premium capability is rapidly becoming table stakes.

This creates an impossible trade-off. Keep AI as a separate add-on, and customers perceive it as overpriced and adoption stalls. Bundle it into your base plan, and you're absorbing inference costs without corresponding revenue growth.

That's quite a bind, isn't it?

Naveen has watched this play out in real time. "With AI unlocking so much, teams are shipping fast. You're seeing feature announcements every week, sometimes every day from both big companies and smaller teams," he explains. The result: "In that world, discovery becomes the bottleneck."

It's not a feature problem—it's an attention problem. When every company ships AI capabilities weekly, no single feature breaks through the noise.

His longer-term view is even more sobering: "I don’t find it surprising. This is the direction things naturally go. Over time, AI stops being a standalone ‘thing’ and becomes embedded into every workflow."

Think about what happened with messaging. It used to be a standalone product, then a feature, and now it's infrastructure embedded in everything from Slack to Figma to your banking app. AI is following the same path, just faster.

For pricing, this means the premium window is shorter than you think. The AI features you're charging extra for today will be expected for free within 18 months. "Bundling, I think, is a good strategy," Naveen notes, "but then I don't think that is the only way that this gets solved."

The real issue, he argues, is deeper: "There needs to be a reset around how features are discovered across the different products." Bundling is a band-aid. The underlying problem is that you're shipping faster than customers can absorb.

If you're still treating AI as a separate SKU, a premium tier, a special add-on—you're fighting yesterday's battle.

The Bottom Line

The economic pressure isn't coming from one direction. It's coming from all sides:

  • Customer budgets growing 2.8% while your costs surge 9%+ (source: SaaStr)

  • AI inference costs that spike unpredictably with usage

  • A market where 72% of Salesforce's growth comes from price increases, not new customers (source: Product Hunt)

  • Competitors normalizing double-digit annual increases

You can't wait this out. The question isn't whether you'll need to revisit pricing in 2025; it's whether you'll do it scientifically or cross your fingers and hope.

Naveen's framework, built across three companies and hundreds of pricing experiments, shows what the scientific path looks like: test systematically, measure relentlessly, and adjust continuously.

What Founders Get Wrong About Price Increases

Naveen has seen the same mistakes repeated across three very different companies, from Adobe's enterprise scale to Loom's viral PLG motion to Otter.ai's AI-first evolution. The patterns are consistent, regardless of company size or business model.

1. Waiting Too Long

One of the most common mistakes that founders make is decision paralysis.

"I've been a part of many discussions where founders and others have been understandably very cautious about changing prices and it takes a long time to even consider experimenting on prices," Naveen explains. He's seen this play out identically at companies with millions of users and startups with thousands.

The result is predictable: companies let pricing drift further and further from reality. Features pile up, costs increase, and competitors raise prices. But internally, the conversation remains the same: "Maybe next quarter."

Recently, though, the pattern has started to change. More teams are recognizing that the bigger risk isn't upsetting customers; it's subsidizing your product into irrelevance while competitors capture the value you're creating.

2. Treating Pricing as a One-Time Decision Instead of an Ongoing Muscle

Most founders set a price at launch, then spend years avoiding the conversation. Naveen Mohan thinks that's backwards.

"I always love to think about pricing as a powerful lever and also something you can experiment with frequently"

says Mohan, whose data science background shapes how he approaches monetization. He pauses on that last word frequently because it's the part most founders miss.

The logic, he argues, is obvious once you say it out loud: "You are not selling to a static audience, you’re selling to a dynamic crowd and your audience is shifting so also it's good to keep evolving and testing out different pricing with time."

At Otter.ai, that philosophy translates into a rhythm. "Usually at least a quarterly check-in to six months could be a good place to start," he explains. 

"And also more frequently when it makes sense, because if you're shipping a lot in those three months or in that half, or if you've hardly shipped anything, but you could also be acquiring a different type of users or going after a different strategy."

Your Q1 customer isn't your Q4 customer, and your product in January isn't your product in June. Your competitive landscape shifts weekly. Yet most companies treat pricing like it's chiseled in stone, changing it only when absolutely forced to.

3. Assuming Users Will React Worse Than They Actually Do

Naveen had a confession: he's consistently wrong about one thing. "If you have passionate users who have been getting a lot of value," he says, "they're more open to price increases than I've anticipated in the past at least."

This isn't coming from someone running their first pricing test. Naveen has shipped pricing changes at Adobe, Loom, and Otter.ai, companies with millions of users and wildly different business models. And even with that experience, he still underestimates how willing engaged customers are to pay more.

The pattern holds across all three companies: users who've embedded your product into their daily workflow understand value differently. They're not hunting for reasons to cancel. 

They're measuring ROI and comparing the hours saved against the dollars spent. Often, they're wondering why you haven't charged them more already.

But here's the nuance most founders miss: this only applies to engaged users. The person who signed up six months ago and logs in twice a year is a different story. The segmentation isn't optional.

4. Not Testing Anything Before Announcing Changes

Here's where pricing strategies usually collapse.

Most companies treat price changes like product launches: build it, announce it, hope for the best. Naveen's approach is the opposite.

"Usually, I've seen it work better, especially if you're testing and trying to form or get to a decision on what's the right price with newer customers," he explains. "It's better to test with newer customers and then once you are more certain then that's the price you want to at least settle on for a while before going back and changing for old customers."

The reasoning is simple: new customers have no baseline. They're not comparing your new price to what they used to pay; they're comparing it to your competitors.

They're evaluating your product at market rates, without the emotional weight of "this used to cost less."

So treat them as your testing ground: collect data, see what converts, and then decide whether to migrate existing customers.

Most companies do exactly the opposite. They pick a new price, announce it company-wide, watch the reaction in real time, and either commit or panic. No experiments.

Sound familiar? Cursor's pricing change comes to mind.

"Again, it always helps to understand the benefit and build assumptions around how much of it might drive churn and whether it’s still revenue-positive if you roll it out to existing customers. That exercise is worth doing before you jump in and make the change."

Naveen's entire framework—quarterly reviews, cohort testing, segment-specific rollouts—rests on a premise that sounds obvious but most companies ignore: pricing is too critical to guess at, and too fluid to set once and forget.



Check Out - The AI Pricing Podcast


The Framework Otter.ai Uses to Test Price Increases (Step-by-Step)

Most pricing changes fail at the planning stage: teams pick a number, announce it broadly, and wait to see what breaks. Naveen's approach inverts this—narrow the test, shorten the feedback loop, and let the data make the call.

Let’s understand his playbook at Otter.ai in detail:

Step 01: Pick The Right Segment

The universal instinct when launching a pricing experiment is to A/B test across your entire user base. Resist it.

"You can also choose to experiment on some segments which are closer to churning or the subscription ending, which can get you results slightly quicker," Naveen explains. He targets users 30-60 days from renewal, people whose decisions are imminent.

With this approach you don't have to wait 12 months for an annual cohort to renew; monthly subscribers give you signals in weeks.

"Even if you roll out an experiment to monthly and annual customers you should be looking at the monthly cohorts," he adds. The annual customers will take a year to show you anything useful. Monthly customers show you in 30 days.

The second filter: engagement. "Start with customers who've already shown engagement." High-usage users react differently than people who signed up once and disappeared. Segment accordingly.
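To make this concrete, here's a minimal sketch of that segment filter in Python. The field names (renewal_date, sessions_last_30d) and the engagement threshold are illustrative assumptions, not any real system's schema:

```python
from datetime import date

# Hypothetical user records; field names are illustrative, not from any real system.
users = [
    {"id": 1, "renewal_date": date(2026, 1, 20), "sessions_last_30d": 22, "interval": "monthly"},
    {"id": 2, "renewal_date": date(2026, 6, 1),  "sessions_last_30d": 1,  "interval": "annual"},
]

def in_test_segment(user, today=date(2025, 12, 9), min_sessions=8):
    """Users 30-60 days from renewal who also show real engagement."""
    days_to_renewal = (user["renewal_date"] - today).days
    return 30 <= days_to_renewal <= 60 and user["sessions_last_30d"] >= min_sessions

segment = [u for u in users if in_test_segment(u)]
print([u["id"] for u in segment])  # [1]
```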

Step 02: Define The Variant Price Clearly

Once you've chosen your customer segment, pick one variable to test, and only one.

“You might make a price change and see an immediate impact, sometimes it improves conversion, but more often it reduces it,” Naveen notes. “You only get a real cause-and-effect signal when you change one thing at a time.”

Most teams bundle changes: new price and new features and revised limits. Then when conversion drops, they can't diagnose why. Was it the price? The packaging? The messaging?

The safe range for initial tests is a 10-20% increase. Large enough to matter financially, small enough that engaged users won't flinch. Go beyond 25% and you're testing price sensitivity, not price optimization.

Keep everything else constant: features, limits, value proposition.
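One way to enforce the single-variable rule in code is to derive the variant from the control plan and assert that nothing else moved. A sketch, with a hypothetical Plan shape and a 15% bump:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Plan:
    name: str
    monthly_price: float
    seats: int
    ai_minutes: int

control = Plan(name="Pro", monthly_price=20.00, seats=5, ai_minutes=1200)

# The variant changes exactly one field: price, up 15% (inside the 10-20% safe range).
# Features and limits stay identical, so any conversion delta is attributable to price.
variant = replace(control, monthly_price=round(control.monthly_price * 1.15, 2))

assert (variant.seats, variant.ai_minutes) == (control.seats, control.ai_minutes)
print(control.monthly_price, "->", variant.monthly_price)  # 20.0 -> 23.0
```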

Step 03: Roll Out to A Small Percentage and Track Signals 

Start with 5-10% of your target segment. Then watch what happens in the first two weeks.

“You actually get signals from early cancellations. People don’t wait until day 29 or day 30 of the subscription, they cancel all through the month,” Naveen explains. Most founders assume users only decide at renewal. They don’t.

"For example, when you roll out a pricing change, you can look at month-one retention and even within the first two weeks, you already have a rolling cohort of users who’ve initiated cancellations."

Track four signals (see the tallying sketch after this list):

  • Early cancellations (days 3-14)

  • Deflection attempt success rate

  • Plan downgrades instead of outright cancels

  • Support ticket objections mentioning price
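Here's a minimal sketch of tallying those four signals from a raw event log; the event names and the save-offer count are assumptions for illustration:

```python
from collections import Counter

# Hypothetical event log for the exposed cohort; event names are illustrative.
events = [
    {"user_id": 1, "type": "cancel_initiated",       "day_in_cycle": 6},
    {"user_id": 2, "type": "downgrade",              "day_in_cycle": 11},
    {"user_id": 3, "type": "deflection_accepted",    "day_in_cycle": 9},
    {"user_id": 4, "type": "ticket_price_objection", "day_in_cycle": 4},
]

signals = Counter(
    e["type"] for e in events if 3 <= e["day_in_cycle"] <= 14  # early window only
)

deflections_offered = 2  # assumed count of save-offers shown in the same window
print("early cancels:", signals["cancel_initiated"])
print("deflection success rate:", signals["deflection_accepted"] / deflections_offered)
print("downgrades:", signals["downgrade"])
print("price objections:", signals["ticket_price_objection"])
```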

Step 04: Compare Apples to Apples

The most common mistake in pricing experiments: comparing users who aren't comparable.

Naveen's rule: "Making sure you're comparing apples to apples becomes important and looking at a similar age of cohorts." 

A user in month one behaves differently than a user in month six, regardless of price.

If your test cohort is mostly new signups and your control group is seasoned customers, the data will lie to you. 

Match them by lifecycle stage: compare 30-day-old users against other 30-day-old users. Compare annual renewals against annual renewals.

Normalize across subscription lengths. Monthly and annual customers churn at different rates naturally. Don't mix them in your analysis.
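A sketch of that matched comparison, assuming hypothetical cohort records carrying age_days and interval fields:

```python
def retention(cohort):
    """Share of users still active in a cohort (list of dicts)."""
    return sum(u["active"] for u in cohort) / len(cohort)

def matched(cohort, age_days=30, interval="monthly"):
    """Keep only users at the same lifecycle stage and billing interval."""
    return [u for u in cohort if u["age_days"] == age_days and u["interval"] == interval]

# Illustrative cohorts; in practice these come from your billing data.
test = [
    {"age_days": 30, "interval": "monthly", "active": True},
    {"age_days": 30, "interval": "monthly", "active": False},
]
control = [
    {"age_days": 30, "interval": "monthly", "active": True},
    {"age_days": 30, "interval": "monthly", "active": True},
]

delta = retention(matched(test)) - retention(matched(control))
print(f"retention delta at matched lifecycle stage: {delta:+.0%}")  # -50%
```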

Step 05: Expand Gradually (Don't "Go Big" Too Early)

Early success doesn't mean you're done. It means you can expand—slowly.

Move from 5% to 15%. Wait. Measure again. Then 50%. Then full rollout. Each stage might reveal something the previous one missed. A cohort that converts well at 5% scale might behave differently when you hit critical mass and word spreads.

"You may make a price change, you might see the immediate impact be positive but if you factor in net revenue after three months or six months, which usually there's a bit of expansion, you might find that this is negative net negativity overall," Naveen warns.

The 30-60 day checkpoint matters. Are the users who converted at the higher price expanding to higher tiers, or quietly downgrading? Immediate conversion tells you one story. Post-conversion behavior tells you whether the price is sustainable.
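A common way to implement stable staged exposure is deterministic hashing, so everyone exposed at 5% stays exposed at 15%, 50%, and 100%. A sketch, with an illustrative salt and the stage values from above:

```python
import hashlib

STAGES = [0.05, 0.15, 0.50, 1.00]  # the expansion checkpoints from the text

def in_rollout(user_id: str, fraction: float, salt: str = "price-test-2025") -> bool:
    """Deterministic bucketing: a user exposed at 5% stays exposed at every
    later stage, so the test cohort grows monotonically as you expand."""
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 10_000 < fraction * 10_000

stage = STAGES[0]  # advance to the next stage only after the checkpoint holds
print(in_rollout("user-42", stage))
```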

How Flexprice supports it: Gradual rollout controls let you dial exposure up incrementally without redeploying code. Expansion tracking shows whether higher-priced customers grow or contract after conversion. You see the full revenue picture, not just the initial decision.

Step 06: When to Call the Experiment

Most teams make the call too early. Naveen's learned to wait.

"Early data, usually with monetization, there's a huge delay in actually understanding the impact of the entire ecosystem," he explains. "So having constant check-ins maybe one month after you release or make a decision or say for example pricing or a big feature or a churn reduction feature."

He gives an example: "Say you decide to give very aggressive promos as a part of your churn reduction or deflection flow and early data might be amazing. You see like way less number of people canceling, but what happens once the promo expires? Do they still retain or is it back to the same retention rate?"

The decision criteria after 4-6 weeks:

  • Conversion drop under 10%

  • Net revenue positive

  • No spike in cancellations or deflection attempts

  • Expansion behavior stable or improving

All four need to hold. Three out of four isn't enough; it means you're optimizing for short-term revenue at the cost of lifetime value.
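Those criteria translate directly into a go/no-go check. A sketch with assumed metric inputs:

```python
def ship_new_price(conversion_drop, net_revenue_delta, cancel_spike, expansion_trend):
    """All four criteria must hold after 4-6 weeks; three out of four is a fail."""
    checks = [
        conversion_drop < 0.10,   # conversion drop under 10%
        net_revenue_delta > 0,    # net revenue positive
        not cancel_spike,         # no spike in cancellations or deflection attempts
        expansion_trend >= 0,     # expansion behavior stable or improving
    ]
    return all(checks)

# e.g. a 6% conversion drop, +$4,200 net revenue, no cancel spike, flat expansion:
print(ship_new_price(0.06, 4200, False, 0.0))  # True -> roll forward
```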

Three Pricing Experiments to Run Before You Announce Increases

Before you email your entire customer base, test these three experiments on small cohorts. Each reveals something different about how your market will react.

Experiment 1: Price Increase on Monthly Plans Only

Start here. Monthly subscribers give you a signal in 30 days instead of 12 months.

Pick a 15% increase and roll it out to 10% of new monthly signups. Track conversion and 30-day retention. If it holds, expand to 25%, then 50%.

Why monthly first? "Even if you roll out an experiment to monthly and annual customers you should be looking at the monthly cohorts," Naveen explains. 

Annual contracts lock you into long feedback loops. Monthly plans show you immediately whether the price kills conversion or customers absorb it without flinching.

The risk is contained. If it fails, you've tested on a fraction of your lowest-commitment users. If it works, you have data to bring to the annual pricing conversation.

Experiment 2: Feature-Gated Upsell Variant

Instead of raising your base price, gate a high-usage feature behind a higher tier.

Identify the feature your power users hit most often: maybe advanced exports, API access, or priority support.

Move it from your Standard plan to Professional. Watch what happens.

This tests value perception, not just price sensitivity. If users immediately upgrade, you've confirmed the feature justifies higher pricing. If they don't, you've learned the feature isn't as valuable as you thought—without alienating your base with a price hike.

"Again, the consumers need to be able to understand and then understand their usage over time and then see what tier is good for them," Naveen notes. Feature-gating forces that evaluation.

Experiment 3: Hybrid Pricing Pilot (Base + Usage Credits)

Move 5% of new customers into a hybrid model: base subscription plus usage credits.

"I've seen a lot of companies shift toward a hybrid model with a committed base price each week, month, or year, and then credits layered on top," Naveen observes.

Why hybrid works: "Predictability is the key in going for hybrid models." Customers want flexibility without surprise bills. Companies want baseline revenue without usage anxiety.

Test it small. Does it increase average revenue per user? Do customers engage more because they're not rationing usage? Or do they hate the complexity?

"It's still 50-50 in my mind when it comes to bigger teams and bigger organizations adopting given the lack of visibility or the lack of being able to predict how much the prices are going to change or vary from month to month," he adds. The only way to know if your market prefers hybrid: test it.

What Flexprice Unlocks That Makes Price Testing Safe

Running pricing experiments isn't just a strategy problem; it's an infrastructure problem. Most billing systems simply aren't designed for iterative pricing work. They hard-code plans, bake logic into product code, and force teams to rebuild entire billing flows whenever they want to test even a small change.

Flexprice approaches billing differently. It gives product, growth, and finance teams the building blocks they need to run controlled experiments without breaking their billing system or rebuilding it.

Here’s what Flexprice enables founders to do safely and repeatedly:

1. Versioned Pricing and Plan Overrides

Pricing experiments require multiple variants: different price points, bundles, trial lengths, entitlements, or credit allocations. Flexprice supports this through:

  • Plan versioning

  • Custom plan creation

  • Per-customer overrides

This means you can introduce a new pricing variant, assign it to a specific customer group, and continue running your existing plans in parallel. If the experiment fails, reverting users back to their previous plan is a configuration change, not an engineering rebuild.

Flexprice provides the flexibility to iterate quickly while keeping billing logic clean and auditable.

2. Usage and Event Data for Cohort-Level Experiment Tracking

Accurate pricing experiments depend on comparing how different cohorts behave, especially when testing price increases. Flexprice’s real-time metering and event tracking give teams the granular data needed to observe:

  • early cancellations

  • downgrade patterns

  • usage drops or spikes

  • changes in credit consumption

  • renewal behavior

Because the system collects every usage event and ties it back to the subscriber, it gives teams the foundation to evaluate experiments across lifecycle stages. You’re not guessing; you’re reading actual customer behavior from the billing layer.

Flexprice doesn't abstract away this visibility; it exposes it as raw, trustworthy data.

3. Granular, Controlled Rollouts Through Plan Assignment

Testing pricing safely requires limiting exposure. You can’t roll out a new price to your entire user base; you need segmentation.

Flexprice enables this by allowing teams to:

  • assign different plan versions to different customers,

  • create smaller test plans for specific cohorts (e.g., monthly users),

  • run overrides without touching the main pricing structure.

If you want to test a price increase on only a subset of monthly subscribers or trial users, you can do that by assigning them to an alternate plan, without changing anything for the rest of your customers.

This is the core of safe experimentation: contain impact, observe behavior, scale only when confident.
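To make the model concrete, here's a generic sketch of per-customer plan-version assignment with overrides. To be clear, this is not Flexprice's actual API; every name below is hypothetical:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class PlanVersion:
    plan: str
    version: int
    monthly_price: float

@dataclass
class Account:
    id: str
    assigned: PlanVersion
    overrides: dict = field(default_factory=dict)  # e.g. {"monthly_price": 18.0}

    def effective_price(self) -> float:
        # A per-account override wins over the assigned plan version.
        return self.overrides.get("monthly_price", self.assigned.monthly_price)

pro_v1 = PlanVersion("Pro", 1, 20.00)
pro_v2 = PlanVersion("Pro", 2, 23.00)  # the test variant

# Only the test cohort is assigned v2; reverting is a reassignment, not a rebuild.
accounts = [Account("a1", pro_v2), Account("a2", pro_v1, overrides={"monthly_price": 18.0})]
print([a.effective_price() for a in accounts])  # [23.0, 18.0]
```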

Why This Matters

Naveen’s framework only works if your billing system lets you:

  • introduce new price variants without engineering

  • limit those variants to specific cohorts

  • read usage and cancellation patterns within days

  • revert or adjust quickly

Most billing systems break at step one.

Flexprice is built to make continuous pricing experimentation — the kind Naveen described — operationally safe and technically feasible.

Pricing Isn't Something to Fear, It's Something to Test

The anxiety around price increases is real. The fear of angry customers, mass cancellations, and revenue collapse keeps founders frozen for years.

You don't need perfect answers before you start. You need the right process: narrow segments, short feedback loops, matched cohorts, and the discipline to wait for real signals instead of reacting to day-one panic.

The difference between companies that navigate this well and those that stumble? Infrastructure. The ability to test without engineering sprints. To segment without complex SQL queries. To roll back instantly if something breaks.

Flexprice is built for founders who want to run pricing experiments quickly and safely. Define custom pricing models, launch per-customer overrides, and adapt pricing over time—all without touching your core billing code. If you want to run the exact framework Naveen uses at Otter.ai, we built the infrastructure for you.


Aanchal Parmar

Aanchal Parmar heads content marketing at Flexprice.io. She's been in content for seven years across SaaS, Web3, and now AI infra. When she's not writing about monetization, she's either signing up for a new dance class or testing a recipe that's definitely too ambitious for a weeknight.
