This is the ninth blog post in a 12-part series charting the storage journey, decade by decade and technology by technology, showing how the cost-per-GB cliff, networking advances such as NVMe over TCP that enable high-performance data access, and software innovation got us from washing machines to the frictionless, fabric-native storage we’re building today.
There are moments in this industry where everything tilts at once—where new technology drops into the ecosystem and forces every vendor, every architect, and every storage team to reevaluate their strategy. 2016 to 2019 was exactly that kind of moment.
NVMe-over-Fabrics had just landed, Ethernet was hitting warp speed, RDMA went mainstream (for better and for worse), Fibre Channel fought to stay relevant, InfiniBand held its crown in HPC and AI, and everyone had an opinion about which fabric would own the future.
I watched it all unfold in real time—and I had no idea that the next few years would give me a front-row seat to an even bigger industry shift.
NVMe-oF Lands and the Gameboard Reconfigures
When the NVMe-over-Fabrics spec was finalized in 2016, the industry immediately understood one thing: we had crossed a threshold.
NVMe had already exposed how limited SCSI had become in a flash-first world. NVMe-oF simply extended that potential across the network. Parallel queues, deep command structures, low-latency signaling—suddenly accessible beyond the PCIe bus.
It wasn’t about incremental improvement. It was about asking the entire ecosystem to rethink how fast storage should move.
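To put rough numbers on that parallelism claim, here’s a back-of-the-envelope sketch. The figures are the commonly cited interface limits (AHCI’s single 32-command queue, a typical SAS queue depth, and NVMe’s spec maximums), not measurements from any particular device:

```python
# Back-of-the-envelope: how much more parallelism NVMe exposes than
# legacy single-queue interfaces. Figures are commonly cited limits,
# not measurements of any specific device.

interfaces = {
    "AHCI / SATA":     {"queues": 1,      "depth": 32},
    "SAS (typical)":   {"queues": 1,      "depth": 254},
    "NVMe (spec max)": {"queues": 65_535, "depth": 65_535},
}

for name, spec in interfaces.items():
    outstanding = spec["queues"] * spec["depth"]
    print(f"{name:16s} {spec['queues']:>6,} queue(s) x {spec['depth']:>6,} cmds "
          f"= {outstanding:>13,} outstanding commands")
```

That gap, roughly four billion potential outstanding commands versus a few dozen, is what NVMe-oF suddenly made reachable across a network instead of just across a PCIe bus.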
And that’s when the competing camps dug in.
RDMA: Powerful, Impressive, and Operationally Demanding
RoCE promised incredible performance, but only if the network team tuned everything from Priority Flow Control (PFC) to congestion control with watchmaker precision. iWARP delivered RDMA without those lossless-fabric headaches, but it never built the same ecosystem momentum.
InfiniBand didn’t participate in the debate. It didn’t need to. It simply delivered microsecond latency at scale with the consistency every HPC team dreams about.
RDMA worked brilliantly in disciplined environments. But it demanded a kind of operational rigor many enterprises weren’t staffed or structured to provide.
Fibre Channel Tries to Reinvent Itself
Fibre Channel wasn’t ready to fade quietly. FC-NVMe offered predictable latency and a mature, stable fabric. But Ethernet was evolving too quickly, too cheaply, and with too much ecosystem support.
FC-NVMe found a home—but not a renaissance.
The TOE Card Funeral
There was a time when TCP Offload Engine (TOE) cards were marketed like the industry’s salvation. But by 2018, CPUs and NICs had grown so capable that TOE wasn’t just unnecessary—it was obsolete.
DPDK, SR-IOV, and kernel-bypass technologies turned TCP/IP into a surprisingly efficient high-performance transport. TOE was over.
Ethernet Becomes the Gravity Well
Ethernet’s evolution was relentless. 25 GbE became standard, 40 and 100 GbE took over at the high end, and hyperscalers moved rapidly into 200 and 400 GbE.
Switch silicon dropped into single-digit microsecond latencies. ASIC pipelines grew smarter, deeper, and more efficient. TCP/IP—once considered the bottleneck—became fast enough to challenge specialized fabrics.
Ethernet didn’t win by being perfect. It won by improving faster than anything else.
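A quick calculation shows why raw line rate mattered so much: the time a 4 KiB block spends on the wire at each Ethernet generation. This is serialization delay only, a rough sketch that ignores switch hops, protocol framing, and host-side overhead:

```python
# Serialization delay for a 4 KiB payload at various Ethernet line rates.
# Ignores switch hops, framing, and host overhead -- it is only the time
# the bits spend on the wire.

PAYLOAD_BYTES = 4096

for gbps in (10, 25, 100, 400):
    line_rate_bps = gbps * 1e9
    wire_time_us = PAYLOAD_BYTES * 8 / line_rate_bps * 1e6
    print(f"{gbps:>3} GbE: {wire_time_us:6.2f} microseconds on the wire for 4 KiB")
```

Once wire time fell to a fraction of a microsecond, comfortably below the latency of the flash media itself, the transport stopped being the obvious bottleneck.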
InfiniBand: The Specialist That Stayed the Specialist
InfiniBand remained the only fabric that delivered consistent, low-latency, massively parallel performance for HPC, AI, and advanced research workloads. It never chased the enterprise. It didn’t need to.
NVIDIA Buys Mellanox — and the Industry Finally Gets the Hint
When NVIDIA acquired Mellanox in 2019, the industry finally understood what was coming.
NVIDIA didn’t want a NIC business. They wanted complete vertical control over the next generation of AI and HPC infrastructure:
- GPUs generating the data
- NICs and switches moving the data
- RDMA engines accelerating the data
- Software orchestrating the entire stack
It was the clearest possible signal: networking—not compute—would be the bottleneck in modern AI.
InfiniBand was no longer a niche HPC technology. It became part of NVIDIA’s end-to-end AI supercomputing empire.
NVMe/TCP Quietly Becomes the Enterprise Favorite
While the world debated RDMA fabrics, a quieter, simpler transport emerged around 2018: NVMe/TCP.
No special NICs. No fabric tuning. No PFC. No RDMA complexity.
Just NVMe semantics over TCP/IP—on the Ethernet networks that organizations already operate.
It didn’t win with marketing. It won with practicality.
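To underline how little that practicality asks of you, here’s a minimal sketch of attaching an NVMe/TCP namespace from a Linux host. It assumes nvme-cli is installed and the nvme_tcp kernel module is available; the address and subsystem NQN below are placeholders, not a real target.

```python
# Minimal sketch: discover and connect to an NVMe/TCP subsystem using
# nvme-cli from Python. Assumes a Linux host with nvme-cli installed and
# the nvme_tcp module loaded; the address and NQN are placeholders.
import subprocess

TARGET_IP = "192.0.2.10"                        # placeholder target address
TARGET_PORT = "4420"                            # standard NVMe/TCP port
SUBSYS_NQN = "nqn.2018-01.example:subsystem1"   # placeholder NQN

# Ask the target which subsystems it exposes.
subprocess.run(
    ["nvme", "discover", "-t", "tcp", "-a", TARGET_IP, "-s", TARGET_PORT],
    check=True,
)

# Connect; the namespace then appears as an ordinary /dev/nvmeXnY device.
subprocess.run(
    ["nvme", "connect", "-t", "tcp", "-a", TARGET_IP, "-s", TARGET_PORT,
     "-n", SUBSYS_NQN],
    check=True,
)
```

No RDMA-capable NICs, no lossless fabric configuration, just ordinary TCP/IP plumbing, which is exactly why it spread.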
From Reduxio to VAST to Dell: A Front-Row View of the Industry’s Evolution
Before the VAST and Dell chapters of my career, there was Reduxio — and honestly, it deserves its own footnote in storage history.
From 2015 to 2018, Reduxio was one of the fastest-growing storage startups in the industry. And it wasn’t because we were loud or trendy — it was because the technology was legitimately ahead of its time.
Reduxio delivered a hybrid flash–spinning disk architecture that behaved nothing like the tiered hybrids everyone else was pushing. Our deduplication and compression engines were shockingly effective for the era. Our replication looked instant to the user — true time-indexed data mobility that felt like magic. And our rollback engine wasn’t just “snapshots” — it literally behaved like rewinding a VCR one second at a time, transaction by transaction.
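As an illustration only (this is a toy sketch of the time-indexed idea, not Reduxio’s actual engine), continuous rollback amounts to keeping writes in a timestamp-ordered log and reconstructing the volume’s state as of any chosen second:

```python
# Toy sketch of time-indexed rollback: keep every write in a timestamp-ordered
# log, then rebuild the volume's state as of any point in time.
# Conceptual illustration only -- not Reduxio's actual implementation.
import bisect
from collections import namedtuple

Write = namedtuple("Write", ["timestamp", "block", "data"])

class TimeIndexedVolume:
    def __init__(self):
        self._log = []  # writes kept in timestamp order

    def write(self, timestamp, block, data):
        bisect.insort(self._log, Write(timestamp, block, data))

    def state_at(self, timestamp):
        """Replay the log up to `timestamp`; later writes win per block."""
        state = {}
        for w in self._log:
            if w.timestamp > timestamp:
                break
            state[w.block] = w.data
        return state

vol = TimeIndexedVolume()
vol.write(100, block=0, data="v1")
vol.write(105, block=0, data="v2")
vol.write(110, block=1, data="a1")

print(vol.state_at(104))  # {0: 'v1'}            -- the volume "rewound" to t=104
print(vol.state_at(112))  # {0: 'v2', 1: 'a1'}   -- current state
```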
On top of that, the UI was exceptional — modern, intuitive, clean — and the REST API made automation feel effortless long before “infrastructure as code” became mainstream.
For a moment, we had lightning in a bottle. Reduxio could have become what VAST Data would eventually grow into — but like many fast‑moving startups, the company hit a period of internal growing pains and strategic misalignment that slowed its trajectory. The technology was visionary, the team was exceptional, and the ideas were ahead of their time — the timing and organizational alignment simply weren’t there yet.
And here’s the irony: many of the brightest engineers and architects from Reduxio went on to become foundational contributors at VAST Data. That DNA — the obsession with simplicity, elegance, and performance — made the leap.
So when I later joined VAST in late 2019, it wasn’t shocking to see ideas I’d watched incubate at Reduxio flourish within a platform designed to reach exabyte-scale.
How These Experiences Shaped My View of the Fabric Landscape (2019–2024)
Coming out of the fabric wars, I had a unique vantage point as I moved directly into two roles that showed me exactly where storage was heading next.
From late 2019 through early 2022, I was at VAST Data, helping bring the platform from MVP to full production. It wasn’t “just storage.” It was a new philosophy: high-performance, NFS-first flash that could scale to exabytes.
We were putting petabytes of QLC flash into Pixar, The Electric Car Company, Yahoo, hedge funds, global finance, research labs—anyone who needed bulletproof NFS performance, consistent low latency, and a platform that acted like a true data service, not an appliance.
NFS performance was phenomenal. SMB was maturing quickly. Object was under construction. And the global namespace tied it all together. Management was simple, the architecture was elegant, and the platform showed what flash-centric scale-out storage was capable of.
Then, from 2022 to 2024 at Dell, leading product for ECS, OBS, and PowerScale, I saw the macro trend up close.
Customers no longer cared about “NAS vs. object vs. block.” They cared about:
- Consistency across wildly different I/O patterns
- Elasticity without complexity
- Global namespaces that abstracted protocols
- API-first workflows
- Anything that simplified management
Cloud-native models were changing everything. Protocol debates faded. What mattered was how efficiently you could place, deliver, scale, and orchestrate data.
The winning formula became obvious: the easiest management, the fastest expansion, the most efficient fabric usage, and consistent low latency across many I/O types.
The Landscape After the Dust Settled
By 2019, the fabric hierarchy was clear:
- InfiniBand ruled HPC and AI clusters
- Ethernet ruled the data center and cloud
- Fibre Channel held ground in conservative enterprise SANs
- RoCE excelled in tuned, homogeneous environments
- NVMe/TCP became the future of scalable enterprise block storage
- Object quietly powered global-scale capacity
There was no single winner—there was a map. But the industry’s momentum pointed firmly toward Ethernet for mainstream workloads and InfiniBand for the bleeding edge.
Why This Era Mattered
This era taught the industry its most important lesson:
Operational simplicity and economic gravity will always outpace theoretical purity.
NVMe/TCP didn’t arrive as a disruption. It arrived as the logical outcome of everything the industry learned from 2016–2019.
To learn more, read the other blogs in this series:
- When Storage Was a Washing Machine: 1950s Data at Full Spin
- From Jukeboxes to Jet Age Disks: 1960s Storage Takes Off
- The Filing Cabinet Gets a Makeover: Winchester Drives, 1973
- The Server Room Tool Chest: SCSI, RAID, and the 1980s Storage Boom
- Ethernet Meets the Filing Cabinet: NAS and SAN in the Early ’90s
- Post-Dot-Com Storage Diet: 2001 – 2008 Consolidation, Continuity & Control
- The iPhone Decade: Flash Storage Reshapes Performance Expectations (2009–2015)