IT organizations are starting to replace their storage area networks (SANs), which have been a predominant mode of storage for many years. Newer alternatives, such as software-defined storage (SDS), offer a range of attractive qualities. While a SAN can also be software-defined, that approach still comes at the expense of investing in proprietary hardware. Each IT organization should assess the suitability of SAN with regard to cost, performance and capacity needs, and scalability requirements, and then decide whether to keep or replace it. This blog explores the issues and drivers for considering a SAN replacement.
What is a Storage Area Network (SAN)?
A SAN is a network of storage devices that offers a pool of shared block storage to multiple computing devices. SANs first appeared in the mid-1990s as a viable alternative to Direct Attached Storage (DAS) and Network Attached Storage (NAS) for high-speed, mission-critical transactional workloads, such as databases, that require scalable storage with high IOPS and low latency. They are also well-suited to virtualized environments, which accelerated their adoption.
A SAN consolidates storage in a single block-level storage area, allowing users to access and manage data from a central location. To maintain high storage traffic and network performance, the SAN is typically implemented via a separate network infrastructure from the local area network (LAN).
Compute servers connected to the SAN gain access to whatever storage devices sit behind the SAN controllers, such as tape libraries, local storage, and disk arrays. This design also offers the advantage of centralized storage management. SANs can improve storage security as well: with data in a centralized, shared SAN storage architecture, an organization can apply consistent policies for security, data protection, and disaster recovery (DR). A SAN also makes it practical to keep multiple backups of its data, and its block-level access helps improve application availability.
The design of the SAN also lends itself to dynamic fail-over, which helps with availability and business continuity. The SAN’s network fabric of interconnected storage devices and computers further improves availability. If one network path is disrupted, the SAN enables an alternate path. This way, it’s less likely that the failure of a single device will render storage inaccessible.
The Two Main Technologies and Interfaces for SAN
SANs usually use one of two main technologies to move data in and out of storage: Fibre Channel and Internet Small Computer Systems Interface (iSCSI). Fibre Channel is a high-speed data transfer protocol that provides lossless, in-order delivery of raw block data. Fibre Channel typically runs over optical fiber cables, but it can also use copper cabling. (The industry adopted the spelling “fibre” rather than “fiber” to avoid the impression that the protocol can only run on fiber optic equipment.)
Fibre Channel supports data rates of 1, 2, 4, 8, 16, 32, 64, and 128 gigabits per second. Architecturally, the switches in a Fibre Channel network operate in unison, coming together to form a switched fabric that behaves like one big switch.
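To get a rough sense of what those line rates mean in practice, a quick back-of-the-envelope calculation converts a link’s nominal rate into a best-case transfer time. This is only a sketch: it ignores encoding and protocol overhead, which reduce usable throughput on a real Fibre Channel link.

```python
def raw_transfer_seconds(payload_gigabytes, line_rate_gbps):
    """Best-case time to move a payload over a link at its nominal line
    rate (8 bits per byte), ignoring encoding and protocol overhead."""
    return payload_gigabytes * 8 / line_rate_gbps

# Moving a 1,000 GB dataset over a single 32 Gbit/s link takes at least:
print(raw_transfer_seconds(1000, 32), "seconds")  # 250.0 seconds
```

The same payload over a 128 Gbit/s link would take at least 62.5 seconds, which is why ever-faster fabric generations matter for large transactional and backup workloads.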
iSCSI is a transport layer protocol that operates on top of the Transmission Control Protocol/Internet Protocol (TCP/IP). With this design, iSCSI enables block-level SCSI data transport between two components, the iSCSI initiator and the storage target, over TCP/IP networks. SCSI is a block-based command set for connecting computers to networked storage. The iSCSI target can be a SAN controller, which exposes remote volumes that appear as local drives to host systems.
iSCSI is generally less costly than Fibre Channel because it connects servers to storage without requiring expensive Fibre Channel Host Bus Adapters (HBAs), switches, or cabling. Fibre Channel SANs also require admins who have specialized skills. In contrast, administering an iSCSI SAN, which runs on standard, existing Ethernet, is simpler. An IT generalist can easily learn to install and manage an iSCSI SAN.
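The initiator/target pattern described above is easy to see in miniature. What follows is a toy sketch, not real iSCSI: it omits the actual protocol (sessions, login, PDUs, error handling) entirely and simply shows an “initiator” requesting logical blocks from a “target” that serves an in-memory volume over plain TCP. All names here (`run_toy_target`, `toy_initiator_read`) are illustrative inventions.

```python
import socket
import struct
import threading

BLOCK_SIZE = 512  # bytes per logical block, as on classic SCSI disks

def recv_exact(conn, n):
    """Read exactly n bytes from a TCP connection."""
    data = b""
    while len(data) < n:
        chunk = conn.recv(n - len(data))
        if not chunk:
            raise ConnectionError("peer closed early")
        data += chunk
    return data

def run_toy_target(listener, volume):
    """Toy 'target': serves raw blocks of an in-memory volume to one client."""
    conn, _ = listener.accept()
    with conn:
        # Request format: 8-byte LBA + 4-byte block count, network byte order
        lba, count = struct.unpack("!QI", recv_exact(conn, 12))
        offset = lba * BLOCK_SIZE
        conn.sendall(volume[offset:offset + count * BLOCK_SIZE])

def toy_initiator_read(addr, lba, count):
    """Toy 'initiator': reads `count` blocks starting at logical block `lba`."""
    with socket.create_connection(addr) as conn:
        conn.sendall(struct.pack("!QI", lba, count))
        return recv_exact(conn, count * BLOCK_SIZE)

volume = bytes(range(256)) * 16            # a 4 KiB in-memory "disk" (8 blocks)
listener = socket.create_server(("127.0.0.1", 0))
port = listener.getsockname()[1]
target = threading.Thread(target=run_toy_target, args=(listener, volume))
target.start()
data = toy_initiator_read(("127.0.0.1", port), lba=2, count=1)
target.join()
listener.close()
print(data == volume[2 * BLOCK_SIZE:3 * BLOCK_SIZE])  # True
```

A real iSCSI stack does the same conceptual work (the initiator addresses logical blocks, the target maps them onto physical media), but wraps it in the full SCSI command set and iSCSI session semantics. This is also why the remote volume can appear to the host as an ordinary local drive.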
Disadvantages of SAN
Their widespread use notwithstanding, SANs have a number of disadvantages. For one thing, SANs are expensive: they can cost hundreds of thousands of dollars and require proprietary hardware. That dependence on proprietary hardware also means provisioning cycles can be long, especially when supply chains are strained (as they were during the COVID pandemic). The cost of setting up and maintaining the infrastructure can be significant, and it can take some time before you see a return on investment. This makes SANs better suited to larger organizations that can afford the capital and management costs.
They can be complex and thus difficult to manage. A SAN is built in layers, with connections between the underlying storage arrays, the SAN network switches, and the servers that use the SAN. Each layer, device, and connection requires ongoing administration and maintenance, incurring additional costs.
There are multiple locations for faults and patching, and component upgrades and interfaces need frequent, if not constant, attention. Indeed, various SAN components are not known for “playing nicely” together. SAN vendors often use proprietary protocols and management tools, which further complicate the task of SAN management. Additionally, the SAN fabric can become a bottleneck in all-flash storage environments, which are now becoming the norm in most enterprises.
The complexity of SANs can require specific expertise to manage and maintain. Overseeing the SAN inevitably becomes a job for a dedicated person or even an entire team, and when an organization has more than one SAN, the complexity and administrative load grow heavier still. A virtualized SAN architecture can relieve some of the pressure on admins to care for hardware, but it introduces complexity of its own.
Security is also an issue, despite the uniform policy advantages highlighted above. A SAN is almost always a shared environment. As a result, it is vulnerable to lateral attacks, where a malicious actor gains access to one area of the SAN but then moves across it to breach data held elsewhere in the network.
Why should you replace SAN?
Given these disadvantages of SAN, it may be time to think about moving on to another approach to storage. With SDS, IT organizations now have a viable alternative to replace their SANs. For example, it is now possible to build a storage solution using software-defined, NVMe®/TCP block storage that provides the storage pooling advantages of SAN and delivers high performance and low latency without the complexity and high costs of SANs.
Using a solution like Lightbits, which leverages low-latency NVMe storage and standard TCP/IP, IT organizations can deploy high-performing, clustered storage that is cost-effective and highly scalable, without the SAN’s traditional overhead headaches.
Replacing a SAN is not a minor project, so it pays to think through the pros and cons of undertaking such a task. What’s clear, however, is that new approaches to storage can do everything that a SAN can do, but without the complexity and at a lower cost. It may be time to look forward to the post-SAN era.