Chip giant Intel has once again demonstrated its commitment to bringing Artificial Intelligence (AI) solutions everywhere. This follows MLCommons' publication of results for the industry-standard MLPerf benchmark for training AI models. The results Intel submitted for its accelerators and scalable processors showed significant performance leaps.
The most recent MLPerf results for the 4th Gen Intel Xeon and Intel Gaudi2 highlight Intel's dedication to increasingly cost-effective, high-performing AI solutions. The Intel Gaudi2 AI accelerator exhibited a 2x performance leap on the v3.1 GPT-3 training benchmark, realised through implementation of the FP8 data type.
Sandra Rivera, Intel's executive vice president and general manager of the Data Center and AI Group, commented on Intel's AI achievements thus far: “We continue to innovate with our AI portfolio and raise the bar with our MLPerf performance results in consecutive MLCommons AI benchmarks. Intel Gaudi and 4th Gen Xeon processors deliver a significant price-performance benefit for customers and are ready to deploy today. Our breadth of AI hardware and software configuration offers customers comprehensive solutions and choice tailored for their AI workloads.”
The importance of these recent results lies in building on the firm foothold Intel already has in strong AI performance. The Xeon processor remains the only Central Processing Unit (CPU) for which MLPerf results are reported. Furthermore, the Intel Gaudi2 is one of only three accelerator solutions on which results were based, and one of only two that are commercially available.
The results for the Gaudi2 are significant because the system is the only viable AI computing alternative to NVIDIA's H100, and it delivers considerable price-performance. The MLPerf results for Gaudi2 showcased the AI accelerator's increasing training performance with the use of the FP8 data type, supported in both the E5M2 and E4M3 formats, with the option of delayed scaling when necessary.
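To make the distinction between the two FP8 formats concrete: E5M2 spends more bits on the exponent (wider dynamic range), while E4M3 spends more on the mantissa (finer precision). The sketch below is an illustrative decoder for the two bit layouts as defined in the OCP FP8 specification; it is not Intel code, and special values (NaN, infinity) are deliberately simplified.

```python
def decode_fp8(byte: int, fmt: str = "e4m3") -> float:
    """Decode an 8-bit FP8 value to a Python float.

    Illustrative sketch only -- real accelerators such as Gaudi2
    handle these formats in hardware, and NaN/inf encodings are
    ignored here for brevity.

    E4M3: 1 sign bit, 4 exponent bits (bias 7),  3 mantissa bits.
    E5M2: 1 sign bit, 5 exponent bits (bias 15), 2 mantissa bits.
    """
    if fmt == "e4m3":
        exp_bits, man_bits, bias = 4, 3, 7
    elif fmt == "e5m2":
        exp_bits, man_bits, bias = 5, 2, 15
    else:
        raise ValueError(f"unknown FP8 format: {fmt}")

    sign = -1.0 if (byte >> 7) & 1 else 1.0
    exp = (byte >> man_bits) & ((1 << exp_bits) - 1)
    man = byte & ((1 << man_bits) - 1)

    if exp == 0:
        # Subnormal: no implicit leading 1, minimum exponent.
        return sign * (man / (1 << man_bits)) * 2.0 ** (1 - bias)
    # Normal: implicit leading 1.
    return sign * (1 + man / (1 << man_bits)) * 2.0 ** (exp - bias)


# 0x38 encodes 1.0 in E4M3; 0x7E is E4M3's largest normal value, 448.
print(decode_fp8(0x38))          # 1.0
print(decode_fp8(0x7E))          # 448.0
# 0x3C encodes 1.0 in E5M2; its larger exponent range tops out at 57344.
print(decode_fp8(0x3C, "e5m2"))  # 1.0
print(decode_fp8(0x7B, "e5m2"))  # 57344.0
```

The trade-off the two formats embody is why training recipes often keep E4M3 for weights and activations but E5M2 for gradients, whose magnitudes vary far more widely.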
While Intel remains the only CPU vendor to submit MLPerf results, the MLPerf findings for the 4th Gen Xeon have brought its robust performance into focus. Intel submitted results for ResNet-50, RetinaNet, BERT, and DLRM-dcnv2. These further demonstrate that many enterprise organisations can economically and sustainably train small to mid-sized deep learning models on their existing IT infrastructure with general-purpose CPUs, especially for use cases in which training is an intermittent workload.
Intel expects further advances in AI performance in forthcoming MLPerf benchmarks, to be achieved through software updates and optimisations. Intel's AI products aim to provide customers with an even greater choice of AI solutions that meet dynamic requirements for performance, efficiency, and usability.