Kubernetes Killed the Local Disk

Robert Terlizzi
Director of Product Marketing
December 16, 2025

This is the tenth blog post in a 12-part series charting the storage journey — decade by decade, technology by technology — showing how the cost-per-GB cliff, networking advances such as NVMe over TCP, and software innovation got us from washing machines to the frictionless, fabric-native storage we’re building today.

How Containerization Became the Launchpad for Next-Gen Storage (2020–2022)

There wasn’t a single announcement or product launch that killed the local disk. There wasn’t a press release or a vendor keynote declaring its death.

Kubernetes just stopped caring about it.

And once the scheduler stopped caring, everything else followed.

By the time Kubernetes reached mainstream enterprise adoption, it had rewritten one of storage’s longest-standing assumptions: that data belonged to a server. Containers were ephemeral, nodes were disposable, and locality stopped being sacred. If your storage depended on a specific box, rack, or host, you were already behind.

Kubernetes didn’t set out to disrupt storage. It simply demanded a world where storage had to keep up.

When Servers Became Optional

Virtual machines abstracted servers. Kubernetes abstracted everything.

Pods moved freely. Nodes failed without warning. Clusters scaled horizontally as a matter of routine. Infrastructure teams stopped asking “where does this run?” and started asking “how fast can this recover?”

In that world, local disk became a liability.

Yes, local NVMe was fast—blazingly fast—but speed without mobility doesn’t survive a scheduler that treats hardware as cattle. If a workload can land anywhere, storage has to be available everywhere. Performance without portability simply doesn’t work in a Kubernetes environment.

This is where the industry finally internalized something storage architects had been circling for years:

The network, not the server, is the new center of gravity.

CSI Didn’t Make Storage Better — It Made Weak Storage Obvious

The introduction of the Container Storage Interface (CSI) was a forcing function the industry didn’t fully appreciate at first.

CSI turned storage into a contract.

No more bespoke drivers. No more vendor-specific hacks. No more “trust us, it works.”

Kubernetes demanded dynamic provisioning, snapshots as APIs, replication as policy, failure handling that didn’t require human intervention, and attach/detach behavior that worked every single time.
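To make the contract concrete, here is a minimal sketch of what dynamic provisioning looks like from the application side, using the official Kubernetes Python client. The StorageClass name "fast-nvme" and the namespace are illustrative placeholders; whatever CSI driver sits behind that class is responsible for the actual provisioning and attach work.

```python
# Minimal sketch: requesting storage through the CSI contract.
# Assumes a reachable cluster and a StorageClass named "fast-nvme"
# (a hypothetical class backed by some CSI driver).
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
core = client.CoreV1Api()

# The claim describes what is needed, never where it lives.
pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "analytics-data"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "storageClassName": "fast-nvme",  # placeholder CSI-backed class
        "resources": {"requests": {"storage": "100Gi"}},
    },
}

core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
```

The point isn’t the dozen lines of Python. It’s that the request says nothing about which array, node, or LUN satisfies it; that binding is the CSI driver’s problem, and it has to work every time, unattended.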

CSI didn’t magically improve storage platforms. It exposed which ones were architected for automation—and which ones were barely holding together under orchestration pressure.

Cloud Services Quietly Stole the “Easy” Storage Use Cases

At the same time Kubernetes was reshaping infrastructure, cloud services were quietly dismantling traditional enterprise storage workloads.

Home directories—once a core NAS use case—were commoditized by Box, OneDrive, and Google Drive. SharePoint storage moved into Microsoft’s cloud because it was cheaper, simpler, and tightly integrated with the applications people actually used. Internal documentation abandoned file shares entirely in favor of Confluence and SaaS-based knowledge platforms.

Enterprise storage didn’t disappear. It stopped being responsible for the boring stuff.

What remained were the workloads that actually mattered: databases, analytics platforms, AI and ML pipelines, media and research data, regulated financial and healthcare workloads, and large-scale data services that couldn’t live entirely in SaaS.

Storage stopped being horizontal plumbing and became application infrastructure.

Flash Economics Collapsed the Tiering Religion

For years, storage architecture revolved around tiering: flash for hot data, disk for warm, object or tape for cold. It made sense when flash was expensive and scarce.

After 2015, that logic started to fall apart.

Flash prices dropped fast enough that the question stopped being “what data deserves flash?” and became “why isn’t all active data on flash?” The operational overhead of managing tiers often outweighed the savings, especially when applications demanded consistent latency across unpredictable access patterns.

Tiering didn’t disappear—but it stopped being the center of the universe. Consistency mattered more than optimization.

AI Changes the Read Pattern Forever

AI workloads don’t politely access a hot working set while leaving the rest alone. They read everything—repeatedly, in parallel, and without warning.

Training pipelines, feature extraction, and inference workflows all assume that any data, anywhere in the infrastructure, might need fast read access immediately. The idea of “cold data” becomes increasingly irrelevant.

This reality pushes storage design toward flash-first architectures, uniform latency profiles, global namespaces, efficient use of the network fabric, and protocols that scale without friction.

If everything might be read at high speed, everything needs to live on infrastructure that can deliver it.

A Front-Row Seat: Reduxio, VAST, and Dell

I didn’t watch this evolution from the sidelines.

From 2015 to 2018 at Reduxio, we were already challenging assumptions: hybrid flash and disk that behaved nothing like traditional tiered systems; aggressive deduplication and compression; near-instant replication; and the ability to roll the data journal backward one second at a time—like rewinding a VCR through transactions. Paired with a modern UI and a clean REST API, it was clear that software intelligence could matter as much as raw hardware.

From late 2019 through early 2022 at VAST Data, I watched that philosophy scale. We launched a flash-first, NFS-centric platform designed to scale to exabytes, placing petabytes of QLC flash into environments such as Pixar, Tesla, Yahoo, hedge funds, and global financial institutions. Performance was consistent. Management was simple. The global namespace mattered more than the protocol.

From 2022 to 2024 at Dell, working across ECS, ObjectScale, and PowerScale, the macro pattern became impossible to ignore. Customers weren’t debating NAS versus object versus block anymore. They were asking one question:

What’s the simplest way to get consistent performance at scale without locking ourselves into a corner?

Protocols Fade, Outcomes Matter

Kubernetes doesn’t care if your data is block, file, or object. Applications increasingly don’t either.

Global namespaces matter. Elastic expansion matters. API-first design matters. Efficient use of the network fabric matters.

Protocol dogma does not.

What wins is whatever delivers predictable low latency, scales linearly, survives chaos, and doesn’t require a small army to operate.

What Enterprises Actually Need Now

After three decades in this industry, the requirements are brutally simple.

Enterprises need storage that is cheap, fast, reliable, and scalable—all at the same time.

Security is table stakes and increasingly lives in the cloud-native control plane and the network layers, not inside proprietary array features that break the moment you integrate with modern workflows.

Performance is king.

Latency isn’t a metric anymore—latency is death. It kills application performance, AI pipelines, and business velocity long before anyone files a ticket.

Lock-in is just as dangerous. Every proprietary fabric, closed management plane, or inflexible architecture eventually explodes the cost model. What works at 100 TB becomes painful at a petabyte and catastrophic at scale.

The winners in this era are platforms that deliver consistent low latency across unpredictable I/O patterns, linear scale without architectural gymnastics, efficient use of standard network fabrics, simple API-driven operations, and the freedom to evolve without ripping out the foundation every few years.

That’s why network-native, flash-first, scale-out storage has become the default. That’s why Ethernet-based NVMe architectures fit this world so cleanly. And that’s why Kubernetes didn’t just kill the local disk—it crowned the network.

Why This Moment Matters

Kubernetes didn’t make storage harder. It made weak assumptions impossible to hide.

It forced storage to become mobile, resilient, programmable, and fabric-efficient. It accelerated the shift away from boxes and toward platforms. And it set the stage for the next era—where the network, the protocol, and the control plane define everything.

Next Up

800G and Climbing — The High-Performance Ethernet Era (2023–2025)

Where AI, GPUs, massive east-west traffic, and fabric efficiency become the defining constraints of modern infrastructure.

To learn more, read more blogs from this series:

About the writer
Robert Terlizzi
Director of Product Marketing