In recent quarters, the public SaaS index has shed over a third of its value. Growth rates are stuck below 20 percent, and the AI narrative that was supposed to revive the category isn’t translating into revenue. ServiceNow, a $14 billion ARR juggernaut growing at more than 20 percent, is down 30 percent in a month despite strong fundamentals. Why?
AI-native growth stories like Anthropic, Cursor, and OpenAI are capturing investor attention and market share. But this isn’t just about new product features. It’s a structural issue.
Legacy SaaS architecture cannot deliver AI-native experiences at scale because its foundational assumptions no longer hold. The cost models, the workload placement, and even pricing primitives like seat-based licensing are misaligned with what AI-first delivery requires.
Traditional SaaS companies are operating within an infrastructure model designed for a different economic climate. That model depended on high-margin, seat-based licensing, centralized deployments in a few hyperscale regions, and monolithic platforms with tightly coupled control planes.
This worked during the ZIRP (Zero Interest Rate Policy) era, when growth was more important than efficiency. In today’s cost-aware environment, those assumptions have become liabilities.
Public SaaS companies defend gross margins in the 70 to 85 percent range, while AI-native companies often accept 20 to 40 percent gross margins as the price of growth. Incumbents cannot match that cost structure on centralized infrastructure, and every dollar spent extending it makes it harder to re-architect for elastic, GPU-intensive, context-aware use cases.
Replatforming is no longer optional. It is now a matter of survival.
Anthropic grew from zero to nearly $10 billion in annualized revenue in only a few years, and Cursor reached $1 billion on a similar timeline. This isn’t just because they “do AI.” It’s because they were built with AI-native infrastructure in mind from the start.
These companies use stateless, container-based delivery models. They dynamically scale GPU resources across clouds. Their workload placement is optimized for both latency and throughput. Monetization is built around APIs, not logins.
In contrast, most public SaaS giants are still managing Oracle Database schemas and battling single-tenant region sprawl.
The winners of this cycle didn’t just adopt AI as a product feature. They built their delivery and economics around AI as the core design principle.
This is where infrastructure becomes the critical story.
Gartner’s research shows that distributed hybrid infrastructure (DHI) is emerging as a preferred option for CIOs who want more flexibility. Key DHI principles that SaaS must adopt include:

- Seamless workload movement across on-prem, cloud, and edge
- A unified control plane that spans environments
- Consumption-based pricing
- Elastic scaling
Gartner forecasts that 50 percent of enterprises will test DHI platforms by 2027, up from just 10 percent today.
Unless public SaaS aligns with this model, it risks becoming a static endpoint in an otherwise dynamic compute fabric. Closing that gap requires four shifts.
1. Abandon Seat-Based Thinking
SaaS companies need to shift toward consumption-native billing. This means billing for API calls, token usage, inference minutes, or vector lookups. Revenue and performance must align with customer outcomes.
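To make that concrete, here is a minimal sketch of consumption-native rating in Python. The metered dimensions and prices in the rate card are invented for illustration, not drawn from any real vendor:

```python
from dataclasses import dataclass

# Hypothetical rate card: prices per unit for each metered dimension.
# Both the dimensions and the rates are illustrative only.
RATE_CARD = {
    "api_call": 0.0004,         # per request
    "input_token": 0.000003,    # per token
    "output_token": 0.000015,   # per token
    "inference_minute": 0.12,   # per GPU-minute
    "vector_lookup": 0.000001,  # per query
}

@dataclass
class UsageEvent:
    customer_id: str
    dimension: str  # one of the RATE_CARD keys
    quantity: int

def compute_invoice(events: list[UsageEvent]) -> dict[str, float]:
    """Aggregate metered events into a per-customer invoice total."""
    totals: dict[str, float] = {}
    for e in events:
        charge = RATE_CARD[e.dimension] * e.quantity
        totals[e.customer_id] = totals.get(e.customer_id, 0.0) + charge
    return totals

if __name__ == "__main__":
    events = [
        UsageEvent("acme", "input_token", 120_000),
        UsageEvent("acme", "output_token", 30_000),
        UsageEvent("acme", "inference_minute", 14),
    ]
    print(compute_invoice(events))  # ≈ {'acme': 2.49}
```

The point is less the arithmetic than the shape: revenue becomes a function of metered events, so pricing can track customer outcomes instead of headcount.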
2. Architect for Elasticity and Bursting
AI workloads require ephemeral GPU access, serverless orchestration, and just-in-time scaling. Your architecture must support deployment across AWS, Azure, OCI, and sovereign clouds while minimizing latency.
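What does just-in-time placement look like in practice? A toy heuristic, with hypothetical pools and latency figures, might rank candidate GPU pools by latency budget and spot availability before sizing the burst:

```python
import math

# Hypothetical GPU pools; clouds, regions, and latency figures are illustrative.
GPU_POOLS = [
    {"cloud": "aws",   "region": "us-east-1",      "latency_ms": 18, "spot": True},
    {"cloud": "azure", "region": "westeurope",     "latency_ms": 42, "spot": False},
    {"cloud": "oci",   "region": "eu-frankfurt-1", "latency_ms": 35, "spot": True},
]

def plan_burst(queue_depth: int, per_gpu_rps: int, latency_budget_ms: int):
    """Choose a pool and replica count for a just-in-time inference burst.

    Prefer pools inside the latency budget, then prefer ephemeral (spot)
    capacity to keep cost per inference down.
    """
    in_budget = [p for p in GPU_POOLS if p["latency_ms"] <= latency_budget_ms]
    candidates = in_budget or GPU_POOLS  # degrade rather than refuse to serve
    candidates = sorted(candidates, key=lambda p: (not p["spot"], p["latency_ms"]))
    replicas = max(1, math.ceil(queue_depth / per_gpu_rps))
    return candidates[0], replicas

pool, replicas = plan_burst(queue_depth=900, per_gpu_rps=40, latency_budget_ms=40)
print(pool["cloud"], pool["region"], replicas)  # aws us-east-1 23
```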
3. Treat the Control Plane as a Product
Centralized SaaS stacks limit flexibility. Break the control and configuration logic into distributed APIs that operate across environments.
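One way to read “control plane as a product” is the declarative reconcile-loop pattern: clients declare desired state through an API, and the control plane converges each environment toward it. A minimal sketch, with invented tenant and model names:

```python
# Declarative desired state plus a reconcile loop, instead of configuration
# buried inside a monolithic application. All names here are illustrative.
desired_state = {"tenant-a": {"model": "llm-small", "replicas": 3}}
actual_state = {"tenant-a": {"model": "llm-small", "replicas": 1}}

def reconcile(desired: dict, actual: dict) -> list[str]:
    """Emit the actions needed to converge actual state to desired state.

    In a real system each action would become an API call to whichever
    environment (cloud, on-prem, edge) hosts that tenant's workload.
    """
    actions = []
    for tenant, spec in desired.items():
        current = actual.get(tenant, {})
        if current.get("model") != spec["model"]:
            actions.append(f"deploy {spec['model']} for {tenant}")
        if current.get("replicas") != spec["replicas"]:
            actions.append(f"scale {tenant} to {spec['replicas']} replicas")
    return actions

print(reconcile(desired_state, actual_state))
# ['scale tenant-a to 3 replicas']
```

Because the loop only emits actions, the same control logic can drive workloads in any environment the control plane can reach.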
4. Embed Observability for Usage-Based Growth
Modern growth depends on visibility. Companies need to understand usage at the feature level, measure cost per inference, and track customer value paths, not just user logins (see OpenTelemetry for open observability standards).
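As a sketch of what that instrumentation can look like, the snippet below uses the OpenTelemetry Python API to count requests per feature and record an estimated cost per inference. The metric names and attributes are our own choices, and a configured SDK exporter is assumed before any data actually leaves the process:

```python
from opentelemetry import metrics

# With no SDK configured, the API falls back to a no-op meter, so this runs
# safely; wire up an SDK MeterProvider and exporter to ship the data.
meter = metrics.get_meter("saas.usage")

inference_requests = meter.create_counter(
    "inference.requests",
    unit="1",
    description="Inference requests by feature and customer",
)
inference_cost = meter.create_histogram(
    "inference.cost",
    unit="usd",
    description="Estimated cost per inference",
)

def record_inference(feature: str, customer: str, cost_usd: float) -> None:
    """Record one inference at the feature level, with its estimated cost."""
    attrs = {"feature": feature, "customer": customer}
    inference_requests.add(1, attrs)
    inference_cost.record(cost_usd, attrs)

record_inference("ticket.summarize", "acme", 0.0042)
```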
Those are the design shifts. The near-term checklist is more concrete.

Audit your infrastructure for AI readiness
Identify latency bottlenecks, assess GPU constraints, and determine whether you can place inference workloads near the data source.
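Even a crude probe helps here. The sketch below measures TCP connect time to candidate inference endpoints as a rough proxy for network latency; the hostnames are placeholders:

```python
import socket
import time

# Placeholder hostnames: substitute the inference endpoints you actually run.
ENDPOINTS = [
    ("inference.us-east.example.com", 443),
    ("inference.eu-west.example.com", 443),
]

def tcp_rtt_ms(host: str, port: int, timeout: float = 2.0) -> float | None:
    """Measure TCP connect time as a rough proxy for network latency."""
    start = time.perf_counter()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.perf_counter() - start) * 1000.0
    except OSError:
        return None  # unreachable, which is itself a useful audit finding

for host, port in ENDPOINTS:
    rtt = tcp_rtt_ms(host, port)
    print(f"{host}: {rtt:.1f} ms" if rtt is not None else f"{host}: unreachable")
```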
Evaluate DHI and edge partnerships
Explore options like AWS Outposts or Azure Stack to extend workloads to low-latency, compliance-sensitive locations.
Redefine product architecture around monetizable units
Move from user logins to atomic units like agent interactions, model queries, and vector store retrievals.
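A hypothetical sketch of what that shift looks like in code: wrap each product operation in a decorator that emits one metering event per atomic unit, so monetization falls out of the architecture instead of being bolted on afterward. Here stdout stands in for a real event pipeline:

```python
import functools
import json
import time

def metered(unit: str):
    """Wrap a product operation so every call emits one billable event.

    In production these events would feed the same rating and billing
    pipeline sketched earlier; printing is a stand-in for that sink.
    """
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, customer_id: str, **kwargs):
            result = fn(*args, **kwargs)
            print(json.dumps({
                "ts": time.time(),
                "customer_id": customer_id,
                "unit": unit,
                "quantity": 1,
            }))
            return result
        return wrapper
    return decorator

@metered("agent_interaction")
def run_agent(prompt: str) -> str:
    # Placeholder for a real agent invocation.
    return f"agent response to: {prompt}"

run_agent("summarize this ticket", customer_id="acme")
```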
Commit to real replatforming, not just repackaging
Wrapping legacy apps in an AI interface won’t cut it. Revenue will only follow once the underlying delivery architecture is re-engineered for AI.
ServiceNow represents the best-case scenario for legacy SaaS, yet even it is struggling to convert AI narratives into revenue. At the same time, the cloud infrastructure market is moving toward distributed models, agent-based UX, and usage-tied monetization.
This is not a pricing crisis. It is an architecture crisis.
The public markets have spoken. Grow faster or get left behind. For SaaS companies, that means replatforming for an AI-native, consumption-first, hybrid cloud world.