Chief Strategy Officer at Lightbits Labs, a software-defined storage company bringing hyperscale agility & efficiency to all.
Every modern enterprise is on a trajectory toward the cloud, where cloud-native applications take advantage of compute elasticity and management efficiencies that can’t be achieved with monolithic applications built atop legacy architectures. Modern cloud practices eliminate antiquated application workflow constraints and dependencies on proprietary or specialized compute, storage and networking hardware, dramatically reducing total cost of ownership while accelerating key IT operations on a seamlessly scalable platform.
Software-defined infrastructure is integral to the cloud-native value proposition. By leveraging commodity servers and open-source orchestration environments, organizations can achieve huge gains in agility and cost-efficiency. The software-defined approach allows for the fastest, most flexible and most economical deployment and configuration of compute resources in cloud infrastructure.
The evolution to cloud native naturally entails an evolution to container-based environments, where Kubernetes has effectively emerged as the industry standard for flexibly managing containers at scale. Modern applications like MySQL and Apache Kafka are commonly deployed in pods, enabling finely tuned microservices that can be scaled up or down automatically based on workload, with high availability.
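To make that concrete, here's a minimal sketch of the pattern, assuming a hypothetical containerized service (the names, image and thresholds below are illustrative): a Deployment runs the pods, and a HorizontalPodAutoscaler adds or removes them as load changes.

```yaml
# Hypothetical Deployment: three replicas of a containerized service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka-consumer
spec:
  replicas: 3
  selector:
    matchLabels:
      app: kafka-consumer
  template:
    metadata:
      labels:
        app: kafka-consumer
    spec:
      containers:
      - name: consumer
        image: example/consumer:1.0   # placeholder image
        resources:
          requests:
            cpu: 500m
---
# Autoscaler: add or remove pods as average CPU crosses 70%.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: kafka-consumer
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: kafka-consumer
  minReplicas: 3
  maxReplicas: 12
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```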
Persistent Storage For Stateful Applications
Pods are ephemeral by nature, and so is the data they create; if a worker node fails, Kubernetes can restart the pod elsewhere to continue operations. The storage layer was likewise considered ephemeral back when Kubernetes was deployed mainly for stateless applications. Those applications didn't need any data to initialize themselves and could shut down cleanly or abruptly without issue. The workload was simply reassigned to a new pod; it didn't matter that the underlying storage was only temporary.
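Kubernetes makes this ephemerality explicit with `emptyDir` volumes, whose contents are deleted when the pod leaves a node. A minimal sketch (pod name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: scratch-worker            # hypothetical stateless pod
spec:
  containers:
  - name: worker
    image: example/worker:1.0     # placeholder image
    volumeMounts:
    - name: scratch
      mountPath: /tmp/scratch
  volumes:
  - name: scratch
    emptyDir: {}                  # deleted whenever the pod leaves the node
```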
Flash forward to today, and many applications administered via microservices and containers are no longer considered stateless. Without persistent storage, a host failure would be problematic. A database needs to continue operations on an alternate pod without disruption, picking up where it left off. Persistent storage is essential.
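In Kubernetes terms, that means the pod mounts a PersistentVolumeClaim instead of node-local scratch space, so a rescheduled replacement reattaches to the same volume. A minimal sketch, with hypothetical names and sizes (a production database would typically run as a StatefulSet):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data                # hypothetical claim name
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: mysql-0
spec:
  containers:
  - name: mysql
    image: mysql:8.0
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: changeme             # placeholder; use a Secret in practice
    volumeMounts:
    - name: data
      mountPath: /var/lib/mysql
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: mysql-data       # survives pod restarts and rescheduling
```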
Here’s where the core cloud philosophy of “software-defined everything” begins to be tested and requires careful consideration for those committed to achieving its full promise.
When you move to containers, you break an application into many pieces. Amdahl's law implies that the one piece you can't speed up caps the performance of the whole system; you don't want all the other pieces waiting on just one. To maintain consistent application performance, you need consistent storage response times, and flash is the recommended storage medium for most modern applications deployed in Kubernetes.
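In its classic form, Amdahl's law says that if a fraction p of the work is accelerated by a factor s, the overall speedup is

$$
\text{Speedup} = \frac{1}{(1 - p) + \frac{p}{s}}
$$

Even as s grows without bound, the speedup is capped at 1/(1 − p): whatever sits in the un-accelerated fraction, such as a slow storage layer, bounds the whole system.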
But where should that flash be deployed? Kubernetes allows for persistent storage on locally attached flash drives but offers no protection against drive failures. This approach also fundamentally breaks the core philosophy of portability, whereby apps are not tied to specific hardware. Locally attached storage introduces a dependency that runs counter to the underlying value proposition of dynamic resource provisioning via software-defined cloud infrastructure.
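The hardware tie is explicit in the Kubernetes API itself: a local PersistentVolume must declare node affinity, pinning every consumer of that volume to one server. A sketch with a hypothetical node and mount path:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-nvme-0
spec:
  capacity:
    storage: 1Ti
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/disks/nvme0        # flash drive mounted on one specific server
  nodeAffinity:                   # mandatory for local volumes
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - worker-07             # every pod using this PV must run here
```

If worker-07 goes down, the data is stranded with it.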
This isn’t just a philosophical difference — it has serious real-world implications for compute scalability and agility. The storage should always follow the application wherever it’s serviced and deliver the fastest possible response time.
Solving The Storage Dilemma
Several storage solutions are available for deployment within Kubernetes itself, but their storage performance generally suffers owing to an overreliance on replication. By running storage inside the Kubernetes framework alongside the applications, these solutions also introduce a "noisy neighbor" problem: the storage services consume CPU and memory on the same worker nodes as the application pods beside them, degrading application performance. And with resources reserved for storage, worker nodes are never 100% available to applications, which adds resource planning and complexity.
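The cost shows up as explicit reservations. A hyperconverged storage service typically runs as a DaemonSet with CPU and memory carved out of every worker node; the figures below are purely illustrative:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: storage-node              # hypothetical in-cluster storage service
spec:
  selector:
    matchLabels:
      app: storage-node
  template:
    metadata:
      labels:
        app: storage-node
    spec:
      containers:
      - name: storage
        image: example/storage-node:1.0   # placeholder image
        resources:
          requests:               # carved out of every worker node,
            cpu: "4"              # whether or not the applications need it
            memory: 8Gi
          limits:
            cpu: "4"
            memory: 8Gi
```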
For these reasons and others, a disaggregated cloud-native storage solution attached via a container storage interface (CSI) plugin is recommended. To deploy flash storage at scale in these environments, you could otherwise find yourself locked into expensive, proprietary flash storage arrays; here again, you'll run afoul of the fundamental principles of cloud native, whereby everything must be software-defined for maximum efficiency.
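With a CSI driver, provisioning stays software-defined while the storage itself lives outside the cluster. A sketch of the wiring, using a hypothetical driver name and parameters:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: disaggregated-nvme
provisioner: csi.example.com      # hypothetical CSI driver name
parameters:
  replica-count: "2"              # driver-specific parameter, illustrative
allowVolumeExpansion: true
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pg-data                   # hypothetical database claim
spec:
  storageClassName: disaggregated-nvme
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 200Gi
```

The claim looks identical to any other PVC; the application neither knows nor cares which server the flash lives on.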
The key is to deliver the performance of local flash drives within Kubernetes using a disaggregated, dedicated storage framework that is software-defined and fault-tolerant and that supports important features like thin provisioning, snapshots and clones, among others.
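When the CSI driver supports them, snapshots and clones map onto standard Kubernetes objects; a sketch building on the hypothetical class above:

```yaml
# Point-in-time snapshot of an existing claim.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: pg-data-snap
spec:
  volumeSnapshotClassName: disaggregated-nvme-snap   # hypothetical class
  source:
    persistentVolumeClaimName: pg-data
---
# Clone: a new, writable claim created directly from another claim.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pg-data-clone
spec:
  storageClassName: disaggregated-nvme
  dataSource:
    kind: PersistentVolumeClaim
    name: pg-data
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 200Gi
```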
This may sound daunting, but it's readily achievable today, and ultimately it's where cloud native is taking the storage layer. Within a couple of years, this approach will be commonplace as legacy architectures become untenable and uneconomical to maintain.
Seamlessly Scalable Flash Storage
Advancements in the standard NVMe/TCP protocol are helping to bring local flash performance to Kubernetes container environments throughout the cloud, using the simple, efficient TCP/IP protocol over Ethernet. iSCSI became the default protocol for sharing block storage in such environments, replacing Fibre Channel SANs, thanks to its ease and ubiquity. Like iSCSI, NVMe/TCP is easy to deploy and can use the very same networking, but at lower latency and higher IOPS. This standard protocol effectively harnesses the performance of locally attached flash drives and extends that performance profile throughout the cloud storage layer.
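From a host's point of view, attaching an NVMe/TCP target takes nothing more than the standard nvme-cli over an ordinary Ethernet network; the address and NQN below are placeholders:

```bash
# Discover the NVMe/TCP subsystems advertised at the target address.
nvme discover -t tcp -a 192.0.2.10 -s 4420

# Connect; the remote flash then appears as a local /dev/nvmeXnY device.
nvme connect -t tcp -a 192.0.2.10 -s 4420 \
  -n nqn.2016-01.com.example:subsys-01
```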
Anywhere microservices are deployed, they can be serviced with zero compromises in application portability. The question is what level of performance is needed.
Kubernetes is at home in TCP/IP network environments, as nearly all web applications are TCP/IP-based. iSCSI and NVMe/TCP eliminate the need for specialized protocols, adapters or switch configurations; both leverage standard Ethernet adapters, switching components and practices to provide a lower-cost solution with the network interface card you prefer. NVMe/TCP supersedes iSCSI in the Ethernet connectivity hierarchy, but the two protocols can coexist on the same networks, so the choice comes down to performance needs, with NVMe/TCP delivering near-local NVMe performance.
With this approach, the core benefits of cloud-native applications remain intact: software-defined infrastructure, commodity hardware, Kubernetes-enabled orchestration and unfettered application portability. That sets the stage for ultra-efficient application management and scalability, and all the cost benefits that come with them, into the future.