MLCommons Releases MLPerf Training v5.1 Benchmark Results
Today, MLCommons announced new results for its MLPerf Training v5.1 benchmark suite, highlighting the rapid evolution and increasing richness of the AI ecosystem as well as significant performance improvements from new generations of systems.
MLCommons Releases MLPerf Inference v5.1 Benchmark Results
Today, MLCommons announced new results for its MLPerf Inference v5.1 benchmark suite, tracking the momentum of the AI community and its new capabilities, models, and hardware and software systems.
MLPerf Releases Storage v2.0 Benchmark Results
San Francisco, CA — MLCommons has announced results for its MLPerf Storage v2.0 benchmark suite, designed to measure the performance of storage systems for machine learning workloads in an architecture-neutral, representative, and reproducible manner. According to MLCommons, the results show that storage systems performance continues to improve rapidly, with tested systems serving roughly twice the […]
MLCommons Releases MLPerf Inference v5.0 Benchmark Results
Today, MLCommons announced new results for its MLPerf Inference v5.0 benchmark suite, which delivers machine learning (ML) system performance benchmarking. The organization said the results highlight that the AI community is focusing on generative AI ….
@HPCpodcast: MLCommons’ David Kanter on AI Benchmarks and What They’re Telling Us
Special guest David Kanter of MLCommons joins us to discuss the critical importance of AI performance metrics. In addition to the well-known MLPerf benchmark for AI training, MLCommons provides ….
DDN Achieves Unprecedented Performance in MLPerf™ Benchmarking, Empowering Transformative AI Business Outcomes
DDN®, provider of the data intelligence platform, proudly announces a groundbreaking achievement in the MLPerf™ Storage Benchmark, setting new standards for performance and efficiency. DDN’s A3I™ (Accelerated Any-scale AI) systems demonstrated unmatched capabilities in multi-node configurations, solidifying their role as essential drivers of high-demand machine learning (ML) workloads and transformative business outcomes. “Our MLPerf results emphatically showcase DDN’s […]
New MLPerf Storage v1.0 Benchmark Results Show Storage Systems Play a Critical Role in AI Model Training Performance
MLCommons® announced results for its industry-standard MLPerf® Storage v1.0 benchmark suite, which is designed to measure the performance of storage systems for machine learning (ML) workloads in an architecture-neutral, representative, and reproducible manner.
New MLPerf Inference v4.1 Benchmark Results Highlight Rapid Hardware and Software Innovations in Generative AI Systems
Today, MLCommons® announced new results for its industry-standard MLPerf® Inference v4.1 benchmark suite, which delivers machine learning (ML) system performance benchmarking in an architecture-neutral, representative, and reproducible manner. This release includes first-time results for a new benchmark based on a mixture of experts (MoE) model architecture. It also presents new findings on power consumption related to inference execution.