
Wasabi’s $70M Raise Is a Bet on the Data Layer, Not Just the GPU Layer


DataStorage Editorial Team


Why this round matters

Wasabi Technologies just raised $70 million in new equity funding at a $1.8 billion valuation, a round led by L2 Point Management with participation from Pure Storage and existing investors including Fidelity. On its face, it reads like a straightforward growth round for a hot cloud storage vendor, but the subtext is more interesting: Wasabi is positioning itself as part of the AI infrastructure stack, not just a cheaper place to park data.

That move matters because the AI infrastructure conversation is shifting. The first wave of coverage centered on the GPU bottleneck—how hard it is to get accelerators, how fast NVIDIA is moving, and how hyperscalers and neoclouds are racing to stand up capacity. In parallel, a second wave is forming around the data layer: pipelines, storage performance tiers, the economics of moving data between providers, and the reality that training and inference workloads are only as productive as their input data systems.

Wasabi’s funding announcement makes the company’s intent explicit: it plans to use the capital to expand further into “data infrastructure” for the AI era, grow its global footprint, and enhance product offerings aimed at enterprise and AI developer demand.

The context: neoclouds are scaling compute

To understand why Wasabi is talking about AI infrastructure right now, it helps to zoom out. The neocloud category has become one of the clearest signals that AI infrastructure is reorganizing into layers: specialized providers are emerging to deliver GPU capacity as a service, often faster or cheaper than hyperscalers for certain workloads.

JLL has estimated the neocloud ecosystem at roughly 190 operators, with major providers including CoreWeave, Nebius, and Crusoe.

This neocloud buildout is capital intensive and contract driven. Reuters has reported on neocloud providers signing massive, long-term deals and using contracted revenue to finance expansion, while also highlighting the financial risk embedded in take-or-pay structures and concentrated customer bases.

Key question for AI workflows: Where does the data live—and how does it move into GPU environments without friction, delay, or surprise cost?

That is the opening storage providers are trying to capture.

Wasabi’s product story is increasingly an AI workflow story

Wasabi has historically differentiated on hot cloud storage, positioned around predictable pricing and simplicity, but its recent product moves emphasize AI-adjacent workflows, not just storage capacity.

In late 2025, Wasabi introduced Wasabi Fire, a high-performance storage class positioned for compute-intensive AI and ML training, real-time inference, high-frequency data logging, and media pipelines, using NVMe and SSD-based performance characteristics. Wasabi has also expanded its AI-oriented portfolio with Wasabi AiR, described as AI-powered metadata tagging.

This is the pattern to watch: storage vendors are no longer competing only on price per terabyte; they are competing on whether their storage tiering, performance profile, and workflow integrations can keep GPUs productive.

Wasabi itself has framed Fire as a way to address the cost and performance pressures created by AI workloads, and has emphasized the relationship between storage performance and GPU utilization, particularly for training and data-intensive pipelines.

The bigger bet: the data layer becomes infrastructure when compute becomes more liquid

In the neocloud era, compute is becoming more liquid. Teams can rent GPUs from hyperscalers, neocloud specialists, or regional providers depending on availability, cost, and contract terms. Storage behaves differently: it is the anchor layer, the stable home base for unstructured data, datasets, logs, embeddings, checkpoints, and results that move repeatedly through AI pipelines.

That creates a strategic opportunity for storage companies: become the stable layer in a multi-provider world, and win by making data movement predictable, fast, and easy to integrate into compute wherever it runs.

This is where pricing mechanics—especially egress fees—start to matter as much as list price. When data sits in one cloud and must be moved to another environment for training or inference, data transfer and egress costs can turn “cheap compute” into expensive workflows.
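As a back-of-envelope illustration, the sketch below compares a training workflow that repeatedly pulls a dataset out of one cloud into another against the same workflow with zero-egress storage. All rates are hypothetical placeholders chosen for illustration, not quotes from any provider:

```python
# Illustrative sketch: how egress fees can dominate a multi-provider AI workflow.
# The per-GB egress and GPU-hour rates below are assumptions, not real pricing.

def workflow_cost(dataset_tb: float,
                  pulls_per_month: int,
                  egress_per_gb: float,
                  gpu_hours: float,
                  gpu_rate_per_hour: float) -> dict:
    """Monthly cost of training against data stored in a different provider."""
    egress = dataset_tb * 1024 * egress_per_gb * pulls_per_month
    compute = gpu_hours * gpu_rate_per_hour
    return {"egress": egress, "compute": compute, "total": egress + compute}

# A 50 TB training set pulled into a cheaper GPU cloud 4 times a month:
with_egress = workflow_cost(50, 4, egress_per_gb=0.09,
                            gpu_hours=2000, gpu_rate_per_hour=2.50)
zero_egress = workflow_cost(50, 4, egress_per_gb=0.00,
                            gpu_hours=2000, gpu_rate_per_hour=2.50)

# At these assumed rates, egress alone (~$18,432) exceeds the $5,000 GPU bill.
print(with_egress)
print(zero_egress)
```

The point of the toy numbers is the shape of the result, not the figures themselves: once a dataset crosses provider boundaries more than a handful of times, per-GB transfer fees can exceed the compute they enable, which is exactly the wedge zero- or low-egress storage vendors are selling into.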

The major cloud providers have faced growing scrutiny and competitive pressure here. Reuters reported that Google Cloud removed certain data transfer fees when customers switch providers, with AWS making similar moves.

Wasabi has leaned into this battleground with messaging around predictable pricing and reduced fee complexity, and its Fire positioning extends that framing to performance-driven AI workloads.

What this means for the neocloud narrative

Wasabi’s raise is not a neocloud funding story in the narrow sense; it is a signal about what comes next in the neocloud cycle.

The first phase of the neocloud story was GPU scarcity and capacity buildout: get compute online, sign contracts, finance data centers, secure supply. The second phase is workflow maturation: how data moves, where it lives, how expensive it is to shuttle across providers, and whether storage platforms can serve as a neutral hub while compute becomes a competitive marketplace.

| Neocloud cycle phase         | Primary constraint                             | Where value shifts                     |
| ---------------------------- | ---------------------------------------------- | -------------------------------------- |
| Phase 1: capacity buildout   | GPU supply & deployment speed                  | Compute access & contracts             |
| Phase 2: workflow maturation | Data movement, storage tiers, egress economics | Data layer efficiency & predictability |

Seen in that light, Wasabi’s $70M round at a $1.8B valuation looks less like a generic storage growth story and more like a bet that AI infrastructure is expanding upward from silicon into systems, pipelines, and economics. Neoclouds are building the GPU layer, but the companies that help customers feed those GPUs with data—efficiently and predictably—are going to capture an outsized share of the value created above the chips.

Related storage alternatives frequently discussed in this pricing-and-egress context include Cloudflare R2 and Backblaze.
