Intel Xeon 6 Delivers up to 17x AI Performance Gains over 4 Years of MLPerf Results

September 12, 2024


Intel Xeon provides a great solution for AI inference, including classical machine learning and vector search embedding.

MLCommons recently published results of its industry-standard AI performance benchmark suite, MLPerf Inference v4.1. Intel submitted results across six MLPerf benchmarks for 5th Gen Intel® Xeon® Scalable processors and, for the first time, Intel® Xeon® 6 processors with Performance-cores (P-cores). Intel Xeon 6 processors with P-cores achieved a geomean AI performance improvement of about 1.9x over 5th Gen Xeon processors.

For MLPerf Inference v4.1, Intel submitted results for 5th Gen Intel Xeon processors and Xeon 6 processors with P-cores on ResNet50, RetinaNet, 3D U-Net, BERT, DLRM v2 and GPT-J. Across these six benchmarks, Xeon 6 delivers an average of about 1.9x better AI inference performance than 5th Gen Intel Xeon. Intel remains the only server processor vendor to submit CPU results to MLPerf.

Over the past four years, Intel has made significant gains in AI performance with CPUs since it first submitted MLPerf results. Compared with 3rd Gen Intel® Xeon® Scalable processors in 2021, Xeon 6 performs up to 17x better on natural language processing (BERT) and up to 15x better on computer vision (ResNet50) workloads. Intel continues to invest in AI for its CPU roadmap. As an example, it continues to innovate with Intel® Advanced Matrix Extensions (AMX) through new data types and increased efficiency.
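As a practical aside, whether a given Xeon host exposes AMX can be checked from the CPU feature flags the Linux kernel reports in /proc/cpuinfo. The sketch below is an assumption-laden illustration, not part of Intel's submission: the flag names amx_tile, amx_bf16, and amx_int8 are those published by recent Linux kernels for AMX-capable processors.

```python
# Minimal sketch: detect Intel AMX feature flags on a Linux host by
# scanning /proc/cpuinfo. Flag names (amx_tile, amx_bf16, amx_int8)
# are those exposed by recent Linux kernels; on non-Linux systems or
# CPUs without AMX this simply returns an empty set.
def amx_flags(cpuinfo_path="/proc/cpuinfo"):
    try:
        with open(cpuinfo_path) as f:
            text = f.read()
    except OSError:
        return set()
    flags = set()
    for line in text.splitlines():
        # The kernel prints one "flags : ..." line per logical CPU;
        # the flag set is identical across cores, so the first suffices.
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
            break
    return {f for f in flags if f.startswith("amx")}

if __name__ == "__main__":
    print(amx_flags())
```

On an AMX-capable Xeon this would typically report the tile, BF16, and INT8 flags; frameworks such as oneDNN perform an equivalent runtime check before dispatching AMX-accelerated kernels.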

The latest MLCommons benchmarks highlight how Xeon processors deliver strong CPU AI server solutions to original equipment manufacturers (OEMs). As the need for AI compute grows and many customers run AI workloads alongside their enterprise workloads, OEMs are prioritizing MLPerf submissions to ensure they deliver highly performant Xeon systems optimized for AI workloads to customers.

Intel supported five OEM partners – Cisco, Dell Technologies, HPE, Quanta and Supermicro – with their MLPerf submissions in this round. Each partner submitted MLPerf results with 5th Gen Xeon Scalable processors, demonstrating their systems' support for a variety of AI workloads and deployments.

Talk to ASBIS experts to learn more.

Disclaimer: The information contained in each press release and material posted on this website was accurate at the time of its publication. Although press releases and other materials remain on the company's website, the company assumes no obligation to update the information to reflect subsequent events. Accordingly, readers of press releases and other materials should not rely on the accuracy or currency of the published information after the time of publication.