Migrating Traditional Virtualized Apps To Kubernetes? You Can Consolidate Your Storage Too

Original article featured on Forbes

Chief Strategy Officer at Lightbits Labs, a software-defined storage company bringing hyperscale agility & efficiency to all.

If given the option, modern enterprises would gladly shed any and all dependencies on legacy applications running on proprietary compute, storage and networking hardware. The advantages of cloud-native applications running on commodity, software-defined infrastructure are now impossible to ignore, liberating enterprises to achieve unprecedented levels of management flexibility and workflow efficiency.

Cloud-native applications can’t be rivaled for agility and resiliency, and most importantly, this is the model that’s most efficiently scalable and sustainable for the future. We expect this approach to become ubiquitous as legacy architectures become impractical to maintain over time.

Enterprises are leveraging open-source container orchestration — enabled today with Kubernetes — to help fully realize the cloud-native value proposition. With Kubernetes, microservices can be flexibly composed and administered, serviced by high-performance flash storage that’s both persistent and disaggregated, allocated to pods with ease in software-defined, TCP/IP network environments. My previous post examined cloud-native storage for modern apps on Kubernetes.

Of course, most modern enterprises aren’t starting from scratch with bare metal as they adopt cloud-native applications and Kubernetes. There can be myriad legacy applications to contend with, and you can’t just snap your fingers and port them over to a Kubernetes environment.

At a large enterprise, these legacy applications can number in the thousands and are often custom built on a VMware-based management architecture. But because VMs (virtual machines) encompass the entire image (the application itself and all of its supporting elements), it can be difficult to break VMs apart for redeployment in container environments. Going forward, enterprises will likely seek opportunities to leverage container-based microservices servicing cloud-native applications, but they can't simply abandon their legacy VM-based apps.

This quandary imposes complications at the storage layer. You don’t want to maintain and manage two separate, independent storage infrastructures to support both legacy VM and modern Kubernetes environments — especially if you’re ultimately going to be running them together in an otherwise unified infrastructure.

Best Of Both Worlds

Fortunately, there are two well-established solutions for facilitating the migration of VM-based applications to Kubernetes containerized environments. Both approaches acknowledge that you’re not going to convert all of your VMs to run natively in containers, and both are designed to make it as easy and seamless as possible to manage everything in one environment.

The first approach is to run containers alongside VMs, enabling you to continue running the VMs that you can't or don't want to convert. VMware's own Tanzu exemplifies this approach: users can continue to manage their VMs under VMware and begin running any new applications natively on an application service like Tanzu for unified automation and orchestration of containerized workloads.

IBM's Red Hat OpenShift takes the opposite approach, and it merits consideration. As a native container platform, it was originally designed to run everything as containers on bare metal. But with the advent of KubeVirt, OpenShift also supports running VMs inside containers, where they can be deployed, consumed, and managed by Kubernetes. With this approach, instead of running microservices, containers are assigned to run individual VMs. Each VM runs inside a pod, and if the host machine goes down, Kubernetes can reschedule that pod onto different hardware.
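To make the KubeVirt model concrete, a minimal VirtualMachine manifest might look like the sketch below. The names (legacy-app-vm, legacy-app-pvc) are illustrative, and it assumes the legacy VM's disk image has already been imported into a PersistentVolumeClaim on shared storage:

```yaml
# Hedged sketch of a KubeVirt VirtualMachine; names are hypothetical.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: legacy-app-vm
spec:
  running: true                 # start the VM when the object is created
  template:
    spec:
      domain:
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio     # paravirtualized disk bus
        resources:
          requests:
            memory: 4Gi
      volumes:
        - name: rootdisk
          persistentVolumeClaim:
            claimName: legacy-app-pvc   # imported VM image on shared storage
```

Because the VM is expressed as a Kubernetes object, it inherits the platform's scheduling and restart behavior described above; if the node fails, the pod wrapping the VM can be rescheduled elsewhere, provided the backing storage is accessible from other nodes.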

Both approaches provide a pathway to using one unified application environment for running cloud-native apps on Kubernetes while continuing to run legacy VMs. Whether your entire legacy infrastructure is based on VMware, or you’re using OpenShift with KubeVirt to manage a handful of critical legacy VMs, storage solutions that support both approaches can eliminate the need for multiple storage infrastructures and the redundant capital expenditure and operating expense.

There are other factors to consider as well. While the storage layer should be fully optimized for use in container environments, it's advantageous to also adhere to the storage and networking constructs that VMs already rely on. VMs tend to consume volumes, akin to what you'd create on a SAN (storage area network). Preserving this access method throughout the VM and storage layers makes it much easier to automate and administer, ultimately ensuring that VMs behave just as they did in the legacy environment.
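Kubernetes can express exactly this SAN-style volume model: a PersistentVolumeClaim with volumeMode set to Block hands the consumer a raw block device rather than a filesystem, much like a LUN carved from a SAN. A minimal sketch (the claim name and storage class are assumptions standing in for whatever your storage provider exposes):

```yaml
# Hedged sketch; "legacy-db-volume" and "san-like-block" are hypothetical names.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: legacy-db-volume
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block           # raw block device, akin to a SAN LUN
  storageClassName: san-like-block
  resources:
    requests:
      storage: 100Gi
```

A VM (via KubeVirt) or a container can then attach this claim, and the volume behaves the way legacy VM workloads expect.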

Naturally, you might not want to maintain a Fibre Channel SAN, because you want to converge on a single fabric throughout the cloud data center. Ethernet remains the fabric of choice, as it's economical and already underpins Kubernetes, VMware environments, modern cloud storage architectures, and most major web applications.

This common Ethernet fabric also allows for the use of storage technologies built on the NVMe/TCP protocol, which can deliver performance comparable to locally attached flash drives but extended throughout the entire storage layer. It acts like a SAN replacement, but at higher bandwidth and lower latency. While NVMe/TCP is still relatively new in the tech sector, its performance profile can be exploited across a wide range of applications — databases, first and foremost.
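From a host's perspective, attaching NVMe/TCP storage looks much like logging into an iSCSI SAN, but over the NVMe fabric stack. A hedged sketch using the standard Linux nvme-cli tooling follows; the IP address and NQN are placeholders, and running this requires root privileges and a live NVMe/TCP target on the network:

```shell
# Load the NVMe/TCP initiator module (kernel 5.x and later).
modprobe nvme-tcp

# Discover subsystems exposed by the target (address is a placeholder).
nvme discover -t tcp -a 192.0.2.10 -s 8009

# Connect to a discovered subsystem; the NQN below is hypothetical.
nvme connect -t tcp -a 192.0.2.10 -s 4420 \
  -n nqn.2024-01.example.com:storage-subsystem

# The remote volume now appears as a local block device (e.g., /dev/nvme1n1).
nvme list
```

Because the resulting device is an ordinary Linux block device, it can be consumed unchanged by a VM, a container, or a Kubernetes CSI driver, which is what makes the single-fabric consolidation described above practical.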


Your evolution to cloud-native applications and Kubernetes doesn’t have to entail a separate storage architecture to accommodate legacy VM-based applications. Solutions like VMware’s Tanzu and Red Hat OpenShift with KubeVirt can help to consolidate and manage both platforms in an ultra-flexible, unified framework that preserves the value of your investment in legacy apps. This allows a clear path to go fully cloud-native with software-defined infrastructure and end-to-end, flash-caliber storage performance.


Kam Eshghi is Chief Strategy Officer at Lightbits Labs.