The world of Artificial Intelligence never stands still. The demand for performance is relentless, and we believe storage infrastructure shouldn’t be a bottleneck to innovation; it should be an accelerator.
That’s why, after our successful submission to the MLPerf® Storage v1.0 benchmark last year, we’re excited to announce that we are once again stepping into the ring. We have officially submitted our results for the MLPerf® Storage v2.0 benchmark, continuing our commitment to push the boundaries of performance and efficiency.
The Real Test: Proving Progress
MLPerf benchmarks offer the transparent, peer-reviewed metrics the AI industry can rely on. For storage, the key test is simple: can the system deliver data fast enough to keep expensive accelerators fully engaged and productive?
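To make that test concrete, here is a minimal, purely illustrative sketch (not the official MLPerf Storage harness) of the idea behind its accelerator-utilization metric: the storage system must serve batches quickly enough that simulated accelerators spend most of their time computing rather than stalled on I/O. The timing values, names, and the rough pass threshold below are assumptions for illustration only.

```python
# Illustrative sketch only; not the official MLPerf Storage harness.
# It captures the intuition behind accelerator utilization (AU): the
# fraction of step time spent computing rather than waiting on data.

from dataclasses import dataclass

@dataclass
class StepTiming:
    io_wait_s: float   # time the simulated accelerator stalled waiting for data
    compute_s: float   # time spent in simulated training compute

def accelerator_utilization(steps: list[StepTiming]) -> float:
    """Fraction of total step time spent computing rather than waiting on I/O."""
    compute = sum(s.compute_s for s in steps)
    total = sum(s.compute_s + s.io_wait_s for s in steps)
    return compute / total if total else 0.0

# Hypothetical run: if storage keeps I/O waits small relative to compute,
# utilization stays high (the benchmark requires it to stay above a
# workload-specific threshold, on the order of 90% for several workloads).
steps = [StepTiming(io_wait_s=0.01, compute_s=0.20) for _ in range(100)]
print(f"Accelerator utilization: {accelerator_utilization(steps):.1%}")  # ~95.2%
```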
Because the v2.0 benchmark tests against the same models as the previous version, it provides a stable, direct, apples-to-apples way to measure real progress, and that is a challenge we welcome at Lightbits. We're particularly pleased to share that across many of the training models, our performance shows substantial gains over last year's submission, with some improvements especially pronounced.
Our Approach: Intelligent Software on Standard Hardware
Our submission focuses on demonstrating the power of a modern, software-defined architecture and on unlocking the full potential of standard, commodity hardware. The core idea is that any organization should be able to achieve exceptionally low latency and high throughput by deploying our software on the servers they choose, without resorting to specialized, expensive appliances.
While the official results are under embargo until the publication date, we can share our excitement about the outcome: we’ve successfully surpassed our own v1.0 results.
This demonstrates that an intelligent software layer, not exotic hardware, is the key to delivering the performance that demanding AI training requires, and it validates building on widely available, cost-effective commodity servers.
What’s Next?
We eagerly await the official MLCommons® announcement and the opportunity to share a deep dive into our results with you. Keep an eye on the official MLCommons® Storage benchmark page and our blog for the full results in the coming weeks. We look forward to showing you how we’re raising the bar.