When Block Went Mainstream: iSCSI and the Ethernet Takeover

Robert Terlizzi
Director of Product Marketing
November 04, 2025

This is the sixth blog post in a 12-part series charting the storage journey — decade by decade, technology by technology — showing how the cost-per-GB cliff, networking advances such as NVMe over TCP that enable high-performance data access, and software innovation got us from washing machines to the frictionless, fabric-native storage we’re building today.

By the early 2000s, storage lived in two worlds. Fibre Channel ruled the enterprise data center, while Ethernet ruled everything else. SANs were fast but costly; NAS was flexible but file-bound. The missing piece was a way to move block storage over the same simple, affordable networks that already carried everything else. The solution would come from TCP/IP—but it didn’t happen overnight.

The Era of SCSI-Attached JBOD

Before networked storage became the norm, most systems relied on SCSI-attached JBODs (Just a Bunch of Disks). These enclosures connected directly to servers through parallel SCSI cables and host bus adapters (HBAs). For many years, this was the standard method for expanding capacity or sharing drives between clustered systems.

You could daisy-chain multiple JBODs for more disks, but SCSI had limits—short cable runs (often under 12 meters), signal integrity issues, and a hard cap on the number of devices per bus. It worked, but it didn’t scale. Multi-initiator configurations (where two servers accessed the same SCSI bus) were notoriously fragile, requiring careful termination and synchronization to avoid corruption.

Still, JBODs established a fundamental pattern: separating storage from compute, even if only by a cable. That concept—the physical disaggregation of data—set the stage for Fibre Channel and, later, Ethernet-based SANs.

Ethernet Grows Up

By the late 1990s, Gigabit Ethernet had arrived, offering throughput competitive with early Fibre Channel fabrics. CPUs had grown powerful enough to handle TCP/IP stacks without choking performance, and NICs began offloading network processing. Ethernet’s universality—already powering every LAN and WAN—made it the obvious choice for the next generation of shared storage.

Before iSCSI: Virtual Local Disks and Early IP Storage

Before iSCSI became the standard, a handful of vendors tried their own Ethernet-based block storage implementations. Network Appliance (NetApp) introduced VLD (Virtual Local Disk), which allowed servers to mount remote disk volumes across Ethernet as though they were local. It was a clever early prototype of block over IP, but it was proprietary and tied to NetApp hardware.

NetApp and others, including Nishan Systems, Cisco, and IBM, continued experimenting until they aligned behind the IETF’s emerging iSCSI standard, which was finalized in 2003. iSCSI formalized what these efforts hinted at: wrap SCSI commands in TCP/IP packets and let Ethernet carry the load.
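
To make that layering concrete, here is a minimal Python sketch of the idea — not the actual RFC 3720 wire format, whose Basic Header Segment has many more fields. A SCSI command descriptor block is prefixed with a small, made-up header and handed to an ordinary TCP socket on iSCSI's well-known port, 3260. The header fields and target address are illustrative placeholders.

```python
import socket
import struct

def wrap_scsi_command(cdb: bytes, lun: int, task_tag: int) -> bytes:
    """Prefix a SCSI CDB with a minimal, illustrative header (not RFC 3720)."""
    opcode = 0x01  # "SCSI command" in this toy framing
    header = struct.pack(">BBHQI", opcode, 0, len(cdb), lun, task_tag)
    return header + cdb

# A 6-byte SCSI READ(6) CDB reading one block at LBA 0 (illustrative values).
read6_cdb = bytes([0x08, 0x00, 0x00, 0x00, 0x01, 0x00])
pdu = wrap_scsi_command(read6_cdb, lun=0, task_tag=1)

# Any plain TCP socket can carry it -- that is the whole point of iSCSI.
# (Placeholder address; a real initiator also performs a login phase first.)
try:
    with socket.create_connection(("192.0.2.10", 3260), timeout=5) as sock:
        sock.sendall(pdu)
except OSError as exc:
    print(f"No iSCSI target at the placeholder address: {exc}")
```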

The Fibre Channel Counterpunch: FCIP and FCoE

Fibre Channel vendors weren’t about to concede. EMC, Brocade, and others promoted FCIP (Fibre Channel over IP), tunneling Fibre Channel frames over TCP/IP networks to extend SAN reach. It sounded brilliant: combine FC’s reliability with IP’s flexibility.

In reality, it was a performance disaster. Latency spiked, bandwidth collapsed, and interoperability was spotty. Only a few vendors ever shipped it, and even fewer customers successfully deployed it.

The next iteration, FCoE (Fibre Channel over Ethernet), eliminated the IP layer, running Fibre Channel directly on top of Ethernet frames. It was cleaner and faster—but still complex, and never cheap. By the time FCoE found its footing, iSCSI had already established itself as the baseline for practical, affordable block storage over Ethernet.
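
The difference between the two Ethernet approaches is easiest to see as layering. The sketch below is a conceptual contrast, not complete frames — the MAC addresses and payloads are placeholders. FCoE follows the Ethernet header directly (EtherType 0x8906), while iSCSI sits beneath IP and TCP headers.

```python
import struct

ETHERTYPE_FCOE = 0x8906  # Fibre Channel payload rides directly on Ethernet
ETHERTYPE_IPV4 = 0x0800  # iSCSI rides inside IP, then TCP

dst_mac = bytes.fromhex("0e0000000001")  # placeholder MACs
src_mac = bytes.fromhex("0e0000000002")

def ethernet_header(ethertype: int) -> bytes:
    return dst_mac + src_mac + struct.pack(">H", ethertype)

# FCoE: Ethernet header -> FCoE encapsulation + FC frame. No IP, no TCP.
fcoe_frame = ethernet_header(ETHERTYPE_FCOE) + b"<FCoE header + FC frame>"

# iSCSI: Ethernet header -> IP header -> TCP header -> iSCSI PDU.
iscsi_frame = ethernet_header(ETHERTYPE_IPV4) + b"<IP hdr><TCP hdr><iSCSI PDU>"

print(len(fcoe_frame), len(iscsi_frame))
```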

iSCSI: The Standard That Stuck

When the IETF ratified iSCSI in 2003, it filled the gap perfectly. No new fabrics. No proprietary hardware. Just Ethernet, switches, and servers everyone already owned.

iSCSI quickly became the backbone of the virtualization boom. VMware ESX (2001) and Microsoft Hyper-V (2008) both relied on shared block storage for features such as live migration, snapshots, and replication. Fibre Channel was too expensive for broad deployment, but iSCSI could scale with off-the-shelf hardware.

Gigabit and then 10 Gigabit Ethernet further narrowed the performance gap. TCP offload engines (TOE), jumbo frames, and smarter switches helped iSCSI achieve low-latency, high-throughput performance on par with many Fibre Channel deployments.
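
TOE and jumbo frames live in the NIC and switch configuration, but the same tuning philosophy shows up at the socket layer. Below is a hedged sketch of the kind of TCP options an iSCSI initiator or target implementation might set; the buffer sizes are illustrative, not recommendations.

```python
import socket

def tuned_storage_socket() -> socket.socket:
    """Create a TCP socket with latency- and throughput-oriented options."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Send small PDUs (commands, status) immediately instead of batching them.
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    # Larger buffers help keep a high bandwidth-delay-product link full.
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 4 * 1024 * 1024)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4 * 1024 * 1024)
    return sock

sock = tuned_storage_socket()
sock.close()
```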

Unified Fabrics and the Decline of Fibre Channel Dominance

By the late 2000s, the world was tired of maintaining separate networks for compute, storage, and data. Ethernet doubled its speed every few years while Fibre Channel crawled from 2 Gbps to 8 Gbps over nearly a decade. Data centers began converging onto unified fabrics—single Ethernet backbones carrying all workloads with VLAN and QoS isolation.

Converged Network Adapters (CNAs) arrived to handle TCP/IP, iSCSI, and FCoE simultaneously. The message was clear: Ethernet had won, and storage had become a first-class citizen on the same network that carried everything else.
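
The isolation mechanism itself is tiny: a four-byte 802.1Q tag carrying a 3-bit priority field for QoS and a 12-bit VLAN ID for segmentation. The sketch below builds that tag in Python; the VLAN number and priority are arbitrary examples.

```python
import struct

TPID_8021Q = 0x8100  # EtherType value that marks a VLAN-tagged frame

def vlan_tag(vlan_id: int, priority: int, dei: int = 0) -> bytes:
    """Build the 4-byte 802.1Q tag: TPID + (PCP | DEI | VID)."""
    if not 0 <= vlan_id < 4096 or not 0 <= priority < 8:
        raise ValueError("VLAN ID is 12 bits, priority is 3 bits")
    tci = (priority << 13) | (dei << 12) | vlan_id
    return struct.pack(">HH", TPID_8021Q, tci)

# e.g. storage traffic on VLAN 200, at a higher priority than default traffic
storage_tag = vlan_tag(vlan_id=200, priority=5)
print(storage_tag.hex())  # 8100a0c8
```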

Business Impact: The Democratization of Shared Storage

iSCSI changed who could participate in enterprise computing. Startups, small businesses, schools, and labs could finally afford shared block storage without the cost or expertise of Fibre Channel. Enterprises used it for dev/test clusters, remote sites, and even production workloads. Service providers built early cloud offerings on iSCSI-backed arrays, and research institutions connected clusters across campuses using IP-based storage links.

The impact was profound: shared storage was no longer an enterprise luxury—it was a commodity foundation. This democratization accelerated the adoption of virtualization, web applications, and SaaS, giving rise to the cloud era itself.

The Path to NVMe/TCP

iSCSI didn’t just survive; it set the philosophy that underpins NVMe/TCP today: keep it simple, keep it standard, and let Ethernet do the heavy lifting.

NVMe/TCP is, in many ways, iSCSI reborn for the flash age—same transport, same accessibility, but measured in microseconds instead of milliseconds. It’s the logical endpoint of 30 years of evolution: from JBODs tethered by short SCSI cables, to SANs, to unified Ethernet fabrics moving data at the speed of light.
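
One way to see that continuity: both protocols answer on ordinary TCP ports that any host can reach. The sketch below probes the well-known iSCSI port (3260) and the conventional NVMe/TCP port (4420; discovery typically runs on 8009); the target address is a placeholder.

```python
import socket

STORAGE_PORTS = {"iSCSI": 3260, "NVMe/TCP": 4420}

def reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for name, port in STORAGE_PORTS.items():
    status = "open" if reachable("192.0.2.10", port) else "unreachable"
    print(f"{name:8s} port {port}: {status}")
```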

Why It Mattered

The iSCSI revolution didn’t just simplify storage—it unleashed innovation. Without inexpensive, resilient, Ethernet-based block storage, virtualization and the Internet economy wouldn’t have scaled. Data became elastic, services became continuous, and infrastructure became software.

If Fibre Channel had kept its monopoly, the cloud might still be a PowerPoint concept. iSCSI turned block storage from a specialty product into a utility—and that utility became the platform for everything that followed.

Next Up

Post-Dot-Com Storage Diet: 2001–2008 Consolidation & Tiering—how virtualization, flash, and smarter software turned commodity hardware into elastic infrastructure and set the stage for the modern cloud.

To learn more, read the other blogs in this series.

About the writer
Robert Terlizzi
Director of Product Marketing