Hardware Resource Optimization with Software-Defined Storage

Carol Platz, Vice President of Marketing at Lightbits Labs
March 09, 2026

Remember VMware’s Project Monterey from 2020? In short, VMware intended to offload high-demand activities to smart network interface cards (SmartNICs) and other accelerators, making the virtualization stack more efficient and feature-rich while enabling resource disaggregation and composability. The description may have seemed vague, but the approach promised several real-world benefits for users, including enhanced security, greater efficiency, and improved resource management.

What a difference a few years can make! Broadcom’s acquisition of VMware led to the discontinuation of Project Monterey. The original vision was compelling, but Broadcom shifted VMware’s focus and the project was terminated. The core concept, however, of leveraging augmented hardware for software-defined storage, remains highly relevant.

Software + Hardware-Defined Storage

The evolution of storage has seen a shift from purpose-built hardware systems to software-defined solutions running on standard hardware, driven largely by the increasing power of general-purpose CPUs and networking components. However, with the advent of high-speed NVMe storage and storage-class memory, general-purpose CPUs have become the bottleneck in certain applications. Modern storage systems handle massive amounts of data, and every component must keep pace without creating bottlenecks.

This is where “augmented hardware” comes into play, enabling software-defined storage (SDS) to achieve even greater efficiency and performance. Just as general-purpose CPUs face limitations when handling varied tasks and context switching, storage-specific functions such as encryption, compaction, and data protection can stress the CPU and degrade overall system efficiency. The idea is to offload these tasks to dedicated hardware.

Software + Hardware Optimized for Cloud Service Providers

Lightbits Labs exemplifies how software-defined storage can be optimized with hardware for significant benefits, particularly for cloud service providers. Lightbits is an NVMe-based scale-out software-defined solution that aggregates NVMe devices and utilizes NVMe/TCP as its front-end protocol. This combines the low latency and high performance of NVMe-oF with data services over standard TCP/IP Ethernet networks.
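Because the front end is standard NVMe/TCP, clients attach with ordinary Linux tooling rather than proprietary drivers. The sketch below shows the shape of that workflow using nvme-cli; the target address and NQN are hypothetical placeholders, and the connect command is echoed as a dry run rather than executed.

```shell
# Illustrative only: the target address and NQN below are hypothetical placeholders.
TARGET_ADDR=192.168.1.10
TARGET_NQN=nqn.2016-01.com.example:subsystem1

# With nvme-cli installed, a client would first discover the subsystems the
# target exports over TCP (8009 is the standard NVMe/TCP discovery port):
#   nvme discover -t tcp -a "$TARGET_ADDR" -s 8009
#
# ...then connect over plain TCP/IP Ethernet, with no RDMA-capable NIC required:
CMD="nvme connect -t tcp -a $TARGET_ADDR -s 4420 -n $TARGET_NQN"
echo "$CMD"   # dry run: print the command instead of executing it
```

Once connected, the remote volume appears as a local block device (e.g. /dev/nvme1n1), which is what lets virtual machines and containers consume disaggregated NVMe storage transparently.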

Pairing Lightbits with cutting-edge hardware technologies translates directly into advantages for cloud service providers:

  • High-End NVMe SSDs for Write Buffering and Metadata Handling: Cloud environments demand extremely low latency for critical operations. High-end NVMe SSDs offer fast, non-volatile write buffering and efficient metadata handling, which are crucial for maintaining consistent performance under heavy workloads.
  • Ethernet 800 Series NICs for NVMe/TCP Optimization: Cloud service providers rely on high-speed networking to deliver optimal performance. These NICs are optimized for low-latency NVMe/TCP, ensuring that the massive data flow from NVMe devices is not bottlenecked by the network, resulting in improved throughput and responsiveness for virtual machines and containers.
  • QLC 3D NAND SSDs for Improved Cost-Effectiveness: For cloud providers, the $/GB metric is critical. QLC 3D NAND SSDs offer a better cost per gigabyte, enabling providers to offer competitive storage pricing while maintaining high performance.
  • Offloading Tasks to SmartNIC: Lightbits offloads a series of tasks to the Intel SmartNIC, utilizing ADQ technology for specific optimizations. This frees up the main CPU for running applications, which is essential in multi-tenant cloud environments where CPU cycles are at a premium. For a cloud provider, this means higher virtual machine density per server and improved overall efficiency.
  • Utilizing the Latest Memory Options: By leveraging the latest memory options, Lightbits can deliver enhanced performance and capacity at a lower cost compared to other solutions. This allows cloud providers to offer a broader range of storage tiers and services to their customers.
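As an illustration of the $/GB point above, a back-of-the-envelope comparison might look like the following; the per-gigabyte prices are hypothetical round numbers chosen for the example, not vendor quotes.

```shell
# Hypothetical list prices per GB -- illustrative placeholders, not vendor quotes.
TLC_PRICE=0.080   # $/GB for a TLC NAND SSD
QLC_PRICE=0.055   # $/GB for a QLC 3D NAND SSD

# Percentage saved per raw gigabyte by choosing QLC over TLC:
SAVINGS=$(awk -v t="$TLC_PRICE" -v q="$QLC_PRICE" \
  'BEGIN { printf "%.0f", (t - q) / t * 100 }')
echo "QLC saves ${SAVINGS}% per raw GB"
```

At these assumed prices the raw-capacity saving is about a third, which is why the $/GB metric dominates tiering decisions for cloud providers at scale.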

For cloud service providers, these optimizations translate into several key benefits:

  • Better Performance: Faster application response times and higher I/O operations per second for their clients.
  • More Capacity: The ability to store and manage larger volumes of data more efficiently.
  • Higher Overall Efficiency: Reduced operational costs due to optimized hardware utilization.
  • Reduced Data Center Footprint: By doing more with less hardware, providers can save on power, cooling, and rack space, directly impacting their Total Cost of Ownership (TCO).
  • Flexibility and Choice: Because Lightbits uses off-the-shelf components rather than custom ASICs, cloud providers can choose between a purely software-defined deployment that is fast, efficient, and cost-effective, or one with hardware acceleration for even greater speed, efficiency, and TCO benefits. This provides significant flexibility in designing solutions that meet diverse customer needs and business objectives.

Lightbits stands as a strong example of how software can effectively utilize hardware components to deliver superior performance, a better TCO, and a quicker return on investment for cloud service providers and their customers.
