Ceph Storage and the NVMe Era

Carol Platz
Technology Evangelist and Marketing VP
November 11, 2025

Ceph, the scalable storage solution developed by Sage Weil in 2005, has undergone many iterations. Its most significant challenge is that, despite those improvements, it lags behind modern solutions in speed and tail latency on today's hardware.

Sure, Ceph storage is highly scalable. It's a fantastic, one-size-fits-all solution (it serves block, object, and file storage from a single system). The problem is that it was created when hard drives ruled the day, and today's fast NVMe flash drives expose Ceph's inherent architectural limitations.

BlueStore, the back-end object store for Ceph OSDs, delivers real performance improvements, especially for random writes. But it cannot overcome the latency multipliers inherent in Ceph's architecture, which leave Ceph ill-suited to NVMe media. For NVMe users, BlueStore's promise of lower overall latency, especially tail latency, and higher performance simply isn't realized. That's because NVMe isn't the bottleneck – Ceph itself is.

A Red Hat project last year that configured Ceph for high-performance storage showed promise for extending its life. Note, however, that Red Hat used top-of-the-line (and expensive) CPUs in the object storage device (OSD) servers, coupled with NVMe drives for the data pools and Optane NVMe devices for the BlueStore OSDs. In the real world, that is an expensive way to get the most value out of a Ceph installation. And even with Ceph tuned by experts on very high-performance hardware, the testing showed an average write latency of about 3 milliseconds.
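For context, average and tail write latency on a block volume (whether a Ceph RBD image or a local NVMe namespace) is commonly measured with a tool such as fio. The sketch below is illustrative only; the device path, queue depth, and runtime are placeholder assumptions, not the parameters Red Hat used in its testing.

```python
# Illustrative sketch: measure average and tail (99.99th percentile) write
# latency on a block device with fio, run as root. The target path, queue
# depth, and runtime are placeholders, not the Red Hat test parameters.
import json
import subprocess

TARGET = "/dev/nvme0n1"  # hypothetical test device -- its data will be overwritten!

result = subprocess.run(
    [
        "fio",
        "--name=randwrite-lat",
        f"--filename={TARGET}",
        "--rw=randwrite",
        "--bs=4k",
        "--ioengine=libaio",
        "--direct=1",
        "--iodepth=16",
        "--runtime=60",
        "--time_based",
        "--output-format=json",
    ],
    check=True,
    capture_output=True,
    text=True,
)

write_stats = json.loads(result.stdout)["jobs"][0]["write"]
mean_ms = write_stats["clat_ns"]["mean"] / 1e6
p9999_ms = write_stats["clat_ns"]["percentile"]["99.990000"] / 1e6
print(f"average write latency: {mean_ms:.2f} ms")
print(f"99.99th percentile (tail) write latency: {p9999_ms:.2f} ms")
```

The gap between the average and the 99.99th percentile is exactly the tail-latency problem discussed above: averages can look acceptable while the tail is many times worse.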

That’s where our Lightbits block storage comes in.

When Ceph Isn’t Enough

Enterprises working with the public cloud, running their own private cloud, or simply moving internal IT to new application architectures (e.g., scale-out databases) want low latency and consistent response times. BlueStore was supposed to improve average and tail latency with Ceph, and in some respects it does, but it cannot take advantage of NVMe. Modern architectures typically deploy local flash, usually NVMe, on bare metal to get the best possible performance, and here Ceph is the bottleneck – it simply cannot deliver the performance of this new media.

Enterprises also want shared storage, and Ceph is often used for this purpose. The drawbacks, however, are that Ceph's flash utilization is relatively poor, typically 15 to 25 percent, and that when a drive or host fails, the rebuild can be painfully slow, generating heavy network traffic for a long time.

The New Kid on the Block

In contrast, our software-defined storage solution for NVMe over TCP, LightOS, delivers local NVMe performance while acting as a shared resource. It is resilient and offers features not found on local NVMe drives, such as thin provisioning and optional compression.

Lightbits works with any commodity hardware: users can add as many SSDs as their servers will hold and keep using standard application servers. We offer plugins for OpenStack, Kubernetes, and more, so LightOS fits those environments as well as bare metal. The block driver is built into the upstream Linux kernel, and we use NVMe/TCP, which delivers NVMe performance without remote direct memory access (RDMA). That means you get great performance without having to learn new network protocols or configure special NIC and switch settings.
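Because the NVMe/TCP initiator is part of the standard Linux kernel, attaching a remote volume uses the same nvme-cli tooling as local NVMe. The sketch below shows the general flow; the target address and subsystem NQN are placeholders, not values from a real LightOS cluster.

```python
# Illustrative sketch: discover and connect to an NVMe/TCP target with
# nvme-cli from Python (run as root). The address and NQN are placeholders,
# not values from an actual LightOS deployment.
import subprocess

TARGET_ADDR = "192.0.2.10"                          # hypothetical storage server
DISCOVERY_PORT = "8009"                             # standard NVMe-oF discovery port
SUBSYS_NQN = "nqn.2016-01.com.example:demo-volume"  # hypothetical subsystem NQN

# List the subsystems exported by the target over TCP.
subprocess.run(
    ["nvme", "discover", "-t", "tcp", "-a", TARGET_ADDR, "-s", DISCOVERY_PORT],
    check=True,
)

# Connect to one subsystem; the volume then shows up as a local
# /dev/nvmeXnY block device, with no RDMA-capable NIC required.
subprocess.run(
    ["nvme", "connect", "-t", "tcp", "-a", TARGET_ADDR, "-s", "4420", "-n", SUBSYS_NQN],
    check=True,
)
```

Since this is plain TCP, it runs over existing Ethernet networks and ordinary NICs.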

With Lightbits, when a drive fails, the rebuild occurs within the chassis rather than over the network, speeding it up and causing virtually no disruption. Anyone who knows TCP can start using Lightbits and achieve incredible performance, particularly for applications that require very low latency and high I/O throughput. LightOS works great with cloud-native application environments because we have plugins for OpenStack (Cinder) and Kubernetes (CSI), and it can be used on bare metal – all while offering incredible scalability.
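For Kubernetes users, consuming such a volume comes down to requesting a PersistentVolumeClaim against a StorageClass backed by the CSI driver. Here is a minimal sketch using the official kubernetes Python client; the StorageClass name and namespace are assumptions for illustration, not names from a real deployment.

```python
# Minimal sketch: request a block volume through a CSI-backed StorageClass
# using the official kubernetes Python client. The StorageClass name
# "lightos-nvme-tcp" and the "default" namespace are illustrative assumptions.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="demo-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="lightos-nvme-tcp",  # hypothetical StorageClass
        resources=client.V1ResourceRequirements(requests={"storage": "100Gi"}),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
```

Pods then mount the claim like any other persistent volume, while the CSI driver handles the NVMe/TCP attach behind the scenes.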

In Summary

While Ceph is a great choice for applications that are fine with spinning-drive performance, its architectural shortcomings make it suboptimal for high-performance, scale-out databases and other key web-scale software infrastructure. Some applications are better served by LightOS, which can double network traffic while boosting read performance. Simply put, nothing is faster than NVMe. In head-to-head comparisons against Ceph, Lightbits delivered 3x more IOPS on reads, 6x more IOPS on mixed workloads, 17x lower read latency, 22x lower mixed-workload latency, and more – all on commodity hardware and at a much lower cost.

My advice? Use Ceph where it shines: it is cheap and deep, so use it for spinning disks, objects, and files. Use Lightbits when low latency and consistent performance are the priorities. The two solutions can coexist to support OpenStack, Kubernetes, and bare-metal deployments. We're happy to discuss how to make that happen.

Additional Resources

Ceph Storage [A Complete Explanation]
Disaggregated Storage
Kubernetes Persistent Storage
Edge Cloud Storage
NVMe over TCP
SCSI vs. iSCSI
Persistent Storage

About the writer
Carol Platz
Technology Evangelist and Marketing VP