This is the eighth blog post in a 12-part series charting the storage journey — decade by decade, technology by technology — showing how the cost-per-GB cliff, networking advances such as NVMe over TCP that enable high-performance data access, and software innovation got us from washing machines to the frictionless, fabric-native storage we’re building today.
If the early 2000s were about storage learning discipline after the dot-com crash, the next era was about storage getting smacked in the face by expectation. In 2009, when the iPhone 3GS landed and the mobile web suddenly felt fast, modern, and addictive, something shifted in the collective psyche: the era of instant gratification and zero patience had begun.
That cultural shift spilled directly into enterprise IT. Users expected services to load instantly. Developers built apps assuming “real-time” was the only time. CIOs nodded along like this was always the plan. But under the hood, it was storage — the thing that had always been the slowest part of the stack — that suddenly had to behave like it had been preparing for this moment its whole life.
It hadn’t. But it learned quickly.
Flash Kicks Down the Door
Flash had existed for years in the shadows — tucked into controllers, riding PCIe as Fusion-io cards, or bolted onto servers as tiny SLC accelerators. But around 2008–2011, SSDs matured rapidly. Intel’s X25-E proved that enterprise flash could be reliable. Fusion-io’s cards delivered microsecond-class performance. Vendors like Texas Memory Systems, Violin Memory, Nimbus, and Pure Storage (with FlashArray emerging in 2011) started shipping all-flash systems that didn’t just outperform disk—they embarrassed it.
Suddenly, flash wasn’t a “tier.” Flash was performance. And once customers saw shared storage respond with sub-millisecond latency, going back to spinning media felt like downgrading from broadband to dial-up.
It wasn’t just a technology upgrade; it was a psychological one.
Virtualization Grows Up — and Storage Has to Follow
While flash was making its entrance, virtualization was hitting adolescence.
vSphere 4 in 2009 and vSphere 5 in 2011 brought features that fundamentally reshaped storage’s job description: vMotion everywhere, instant clones, snapshots that didn’t immediately crater the array, Storage vMotion, clustered hosts stretching across racks or sites…the works.
Storage arrays that were designed for a handful of predictable hosts suddenly had hundreds of VMs hammering them at once. It was chaos disguised as progress.
And then came VMware’s notorious litmus test: Eager Zeroed Thick.
Before flash, provisioning an EZT VMDK felt like filing paperwork at the DMV — long, painful, and done only when absolutely necessary. Arrays had to pre-zero every block. It sucked.
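How slow was slow? A back-of-the-envelope estimate, using illustrative numbers for a single 7200 RPM spindle (the disk size and throughput here are hypothetical, not from any specific array):

```python
# Rough math on pre-zeroing an Eager Zeroed Thick VMDK (illustrative).
vmdk_gb = 500      # hypothetical VM disk size
hdd_mb_s = 150     # ballpark sequential write rate for one 7200 RPM drive
minutes = vmdk_gb * 1024 / hdd_mb_s / 60
print(f"~{minutes:.0f} minutes to zero {vmdk_gb} GB at {hdd_mb_s} MB/s")
# ~57 minutes on one spindle. Flash arrays with inline zero detection
# could acknowledge the zeros without physically writing most of them.
```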
Flash made EZT instant. And once VMware admins got a taste of “instant,” they wanted everything instant — provisioning, clones, failovers, boot storms, VDI… all of it.
Storage didn’t just have to keep up; it had to grow up.
Ethernet Gets Fast — and More Importantly, It Gets Low Latency
Between 2009 and 2015, Ethernet evolved faster than any other fabric in the data center.
10 GbE moved from exotic to mainstream. 40 and 100 GbE were standardized and began showing up in real deployments. 25 and 50 Gb Ethernet designs solidified mid-decade. But the real story wasn’t bandwidth — it was latency.
Cut-through switching replaced the old store-and-forward delay-fests.
ASIC pipelines became smarter, leaner, and more deterministic.
Microburst handling stopped acting like a toddler on a sugar crash.
SR-IOV and DPDK turned TCP/IP from a “good enough” protocol into a legitimate low-latency workhorse.
Ethernet stopped pretending to be the budget option and started acting like a fabric that wanted a seat at the high-performance table.
And a lot of storage admins looked at Fibre Channel and asked a very uncomfortable question:
“If Ethernet keeps getting this fast… why are we still paying Fibre Channel prices?”
InfiniBand: The Quiet Monster in the Corner
While Ethernet and Fibre Channel were trading blows, InfiniBand was off in its own universe, embarrassing both of them.
InfiniBand didn’t argue about standards, versions, or committees. It just delivered: microsecond latency, massive parallelism, and a networking model so fast it made everything else look stuck in traffic.
In HPC, scientific computing, financial trading, and early GPU clusters, InfiniBand wasn’t just winning — it was the answer. It never became mainstream in traditional enterprise IT (too esoteric, too specialized), but it absolutely set the bar for everyone else.
And here’s the part most people forget:
Almost everything modern fabrics brag about today came from InfiniBand first — RDMA, kernel bypass, user-space networking, even design philosophies that guided NVMe-over-Fabrics.
InfiniBand didn’t lose. It simply moved to a higher league and stayed there.
Protocols Evolve Under Pressure: Block, NAS, Object
Flash, virtualization, and low-latency networking all collided at once, and suddenly the protocol debate wasn’t theoretical — it was existential.
Block storage (Fibre Channel, iSCSI) remained the performance king, delivering predictable latency for databases and VMs.
NAS got a glow-up. NFSv4.x, SMB 2/3, and scale-out NAS (Isilon, Clustered ONTAP) proved that file services could scale and still deliver real throughput.
Object storage, after Amazon S3 launched in 2006, shifted from a backup novelty to the backbone of the early cloud: an infinite namespace, S3 APIs, and durability models that traditional systems couldn’t match.
Enterprises didn’t choose one. They chose all three, each for what it did best.
This was the birth of the hybrid era.
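Part of object storage’s pull was how little ceremony its API demanded: no LUNs, no mount points, just keys in a flat namespace. A minimal sketch using today’s boto3 SDK (the bucket and key names are hypothetical; credentials come from the environment):

```python
import boto3

s3 = boto3.client("s3")

# Write: no LUN to provision, no filesystem to mount. Just PUT
# an object under a key in a flat, effectively infinite namespace.
s3.put_object(Bucket="example-archive",
              Key="2012/logs/app.log.gz",
              Body=b"...compressed log bytes...")

# Read it back from anywhere that has credentials and HTTP access.
data = s3.get_object(Bucket="example-archive",
                     Key="2012/logs/app.log.gz")["Body"].read()
```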
The Disk That Refused to Die
As flash took the spotlight, many people confidently predicted the “death of the hard drive.”
HDD manufacturers responded by lighting that prediction on fire.
From 2009 to 2015:
- Perpendicular magnetic recording shattered density ceilings
- HGST launched helium drives (~2013), breaking efficiency and capacity limits
- 4K sector formats improved reliability
- SAS expanders enabled enormous disk shelves
- 2 TB drives became 4 TB, then 6 TB, then 8 TB
Flash may have owned performance, but HDDs owned economics — and they kept the internet fed with cheap capacity while flash matured.
The imbalance widened massively: capacity per drive skyrocketed while IOPS per TB fell off a cliff. That performance gap didn’t kill HDD — but it made flash essential.
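The math behind that cliff is simple: a 7200 RPM spindle delivers roughly the same ~180 random IOPS whether it holds 2 TB or 8 TB, so every capacity bump dilutes performance:

```python
# IOPS per TB as 7200 RPM drives grew (~180 random IOPS per spindle,
# a rough rule of thumb rather than a measured figure).
iops_per_spindle = 180
for tb in (2, 4, 6, 8):
    print(f"{tb} TB drive: {iops_per_spindle / tb:5.1f} IOPS/TB")
# 2 TB: 90.0 | 4 TB: 45.0 | 6 TB: 30.0 | 8 TB: 22.5
```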
Scale-Up Dies; Scale-Out Becomes Doctrine
The decade also exposed every weakness in monolithic storage arrays.
Controller CPUs buckled under load. Rebuild times became weekend-long dramas. Firmware upgrades required prayer and scheduled downtime.
Scale-out architecture fixed all of it.
By distributing data and metadata across many nodes, scale-out systems made performance and capacity grow together. Add a node → get more CPU, RAM, I/O, and disk.
Isilon, NetApp GX / Clustered ONTAP, Ceph, HDFS, and early object stores like Swift all followed this philosophy.
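The placement idea underneath all of them fits in a few lines: hash a key to pick its owning nodes, and every node you add absorbs a share of the data and the I/O. A generic sketch, not any vendor’s actual algorithm (node names and replica count are made up):

```python
import hashlib

NODES = ["node-a", "node-b", "node-c", "node-d"]  # hypothetical cluster

def owners(key: str, replicas: int = 2) -> list[str]:
    # Hash the key to a starting node, then take the next nodes in
    # ring order as replica holders. Each node added to NODES brings
    # CPU, RAM, and disk, and absorbs a share of the keys.
    start = int(hashlib.sha256(key.encode()).hexdigest(), 16) % len(NODES)
    return [NODES[(start + i) % len(NODES)] for i in range(replicas)]

print(owners("volumes/vm-0042/block-17"))  # two of the four nodes, deterministically
```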
By the early 2010s, scale-out wasn’t innovation — it was survival.
And it became the architectural blueprint for everything we now call “cloud.”
The Control Plane Grows a Brain
Storage appliances and SDS stacks got dramatically smarter.
Inline dedupe and compression became the default.
Thin provisioning moved from “scary” to “obvious.”
Metadata got global.
Snapshots became instant.
Tiering became automated.
And everything became programmable via APIs.
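One of those features is easy to demystify in code. Inline dedupe, as a toy sketch assuming fixed-size blocks and SHA-256 fingerprints — production arrays add variable-length chunking, reference counting, and far more careful metadata:

```python
import hashlib

store: dict[str, bytes] = {}   # fingerprint -> unique block contents
volume: list[str] = []         # logical volume as a list of fingerprints

def write_block(data: bytes) -> None:
    fp = hashlib.sha256(data).hexdigest()
    if fp not in store:        # only never-before-seen content costs capacity
        store[fp] = data
    volume.append(fp)          # duplicates just add another reference

for block in [b"A" * 4096, b"B" * 4096, b"A" * 4096]:
    write_block(block)

print(len(volume), "logical blocks,", len(store), "stored")  # 3 logical, 2 stored
```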
Storage wasn’t a box anymore. It was a service — intelligent, orchestrated, multitenant, and increasingly self-managing.
This intelligence made storage feel modern for the first time since the Winchester days.
Moore’s Law Outruns SCSI — NVMe Is Born
By now, CPUs were flying. Flash latencies were diving. PCIe was exploding in throughput.
SCSI? SCSI was a 1980s protocol trying to keep up with 2010s silicon.
NVMe arrived precisely when the world needed it:
- Up to 64K parallel queues
- Up to 64K commands per queue
- Lightweight, low-overhead operations
- PCIe-native design
- Perfect alignment with multi-core CPUs
NVMe didn’t raise the performance ceiling. It vaporized it.
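The queue math alone tells the story. Comparing spec maximums — real devices expose far fewer queues, but the headroom is the point:

```python
# Outstanding-command headroom: legacy AHCI/SATA vs. the NVMe spec.
ahci_queues, ahci_depth = 1, 32            # AHCI: one queue, 32 commands deep
nvme_queues, nvme_depth = 65_535, 65_536   # NVMe spec maximums

print("AHCI max outstanding:", ahci_queues * ahci_depth)   # 32
print("NVMe max outstanding:", nvme_queues * nvme_depth)   # ~4.3 billion
```

And because each NVMe queue can be pinned to its own CPU core without shared locks, that parallelism actually materializes instead of serializing in the driver.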
NVMe-oF and the Road to NVMe/TCP
By the mid-2010s, NVMe was crushing it locally. Naturally, everyone asked:
“If NVMe is this fast inside the server, why not across the network?”
That question led directly to:
- NVMe-over-Fabrics (NVMe-oF) — standardized in 2016
- First transports: RDMA (RoCE/iWARP/InfiniBand)
- FC-NVMe standardized soon after via T11
- NVMe/TCP emerged around 2018 as the Ethernet-native, no-special-hardware option
These wouldn’t become production realities until after this decade — but they were absolutely born because of this decade.
Flash dominance, Ethernet’s low-latency evolution, scale-out design, and consumer-driven impatience all made NVMe/TCP inevitable.
Why This Decade Mattered
Because everything changed at once. Flash redefined what “fast” meant. Ethernet learned how to behave. InfiniBand set a bar that the rest of the world had to chase. Virtualization pushed arrays to their limits. Scale-out became gospel. Object storage matured. Moore’s Law crushed SCSI. NVMe was born.
This wasn’t just technological evolution — it was the decade when storage stopped being infrastructure and became an expectation.
To learn more, read the other blogs in this series:
- When Storage Was a Washing Machine: 1950s Data at Full Spin
- From Jukeboxes to Jet Age Disks: 1960s Storage Takes Off
- The Filing Cabinet Gets a Makeover: Winchester Drives, 1973
- The Server Room Tool Chest: SCSI, RAID, and the 1980s Storage Boom
- Ethernet Meets the Filing Cabinet: NAS and SAN in the Early ’90s
- Post-Dot-Com Storage Diet: 2001–2008 Consolidation, Continuity & Control