As technology progresses at a breakneck pace, so too do the demands of modern applications and workloads. With artificial intelligence (AI) and machine learning (ML) becoming increasingly intertwined with our daily computational tasks, it's paramount that our reviews evolve in tandem. Recognizing this, we have incorporated AI and inferencing benchmarks into our CPU test suite for 2024.

Traditionally, CPU benchmarks have focused on various tasks, from arithmetic calculations to multimedia processing. However, with AI algorithms now driving features within some applications, from voice recognition to real-time data analysis, it's crucial to understand how modern processors handle these specific workloads. This is where our newly incorporated benchmarks come into play.

With chip makers such as AMD (Ryzen AI) and Intel (the Meteor Lake mobile platform) now building AI-driven hardware into their silicon, 2024 looks set to bring many applications using AI-based technologies to market.

We are using DDR5 memory at the respective JEDEC settings on the Core i9-14900KS, as well as on the other Intel 14th Gen Core series processors, including the Core i9-14900K, Core i7-14700K, and Core i5-14600K, and on Intel's 13th Gen chips. The same methodology is also used for the AMD Ryzen 7000 series and Intel's 12th Gen (Alder Lake) processors. Below are the settings we have used for each platform:

  • DDR5-5600B CL46 - Intel 14th & 13th Gen
  • DDR5-5200 CL44 - Ryzen 7000
  • DDR5-4800B CL40 - Intel 12th Gen

(6-1) ONNX Runtime 1.14: CaffeNet 12-int8 (CPU Only)

(6-1b) ONNX Runtime 1.14: CaffeNet 12-int8 (CPU Only)

(6-1c) ONNX Runtime 1.14: Super-Res-10 (CPU Only)

(6-1d) ONNX Runtime 1.14: Super-Res-10 (CPU Only)
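For readers who want to reproduce a CPU-only latency measurement of this kind, the sketch below shows the general shape of such a harness. It is a minimal illustration, not our exact test suite: the timing helper works on any no-argument callable, and the commented onnxruntime snippet (model filename and input shape included) is a hypothetical example of how that callable might be built.

```python
import time

def mean_latency_ms(run_fn, runs=100):
    """Mean latency in milliseconds over `runs` calls to run_fn."""
    run_fn()  # warm-up call, so one-time allocation costs are excluded
    start = time.perf_counter()
    for _ in range(runs):
        run_fn()
    return (time.perf_counter() - start) / runs * 1e3

# With onnxruntime installed and a model on disk, the callable would be
# built roughly like this (filename and shape are illustrative only):
#   import numpy as np, onnxruntime as ort
#   sess = ort.InferenceSession("caffenet-12-int8.onnx",
#                               providers=["CPUExecutionProvider"])
#   feeds = {sess.get_inputs()[0].name:
#            np.random.rand(1, 3, 224, 224).astype(np.float32)}
#   run_fn = lambda: sess.run(None, feeds)

# Stand-in workload so the harness can be demonstrated without a model:
print(f"{mean_latency_ms(lambda: sum(range(50_000)), runs=20):.3f} ms")
```

Restricting the session to `CPUExecutionProvider` is what keeps a run comparable to the "CPU Only" configurations charted above, since onnxruntime will otherwise pick the fastest provider available.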

(6-2) DeepSpeech 0.6: Acceleration CPU

(6-3) TensorFlow 2.12: VGG-16, Batch Size 16 (CPU)

(6-3b) TensorFlow 2.12: VGG-16, Batch Size 64 (CPU)

(6-3d) TensorFlow 2.12: GoogLeNet, Batch Size 16 (CPU)

(6-3e) TensorFlow 2.12: GoogLeNet, Batch Size 64 (CPU)

(6-3f) TensorFlow 2.12: GoogLeNet, Batch Size 256 (CPU)
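The TensorFlow tests above report throughput rather than single-shot latency, swept across batch sizes because larger batches amortize per-call overhead. A framework-agnostic sketch of that calculation is below; it is illustrative only, with a trivial stand-in workload in place of an actual VGG-16 or GoogLeNet model (in practice `run_batch` would wrap something like `model.predict(batch)`).

```python
import time

def images_per_second(run_batch, batch_size, runs=10):
    """Throughput for a no-argument callable that processes one batch."""
    run_batch()  # warm-up: a framework's first call may trace/compile
    start = time.perf_counter()
    for _ in range(runs):
        run_batch()
    elapsed = time.perf_counter() - start
    return batch_size * runs / elapsed

# Stand-in workload scaled with batch size; a real run would sweep the
# same sizes as the charts above to show how throughput changes.
for bs in (16, 64, 256):
    rate = images_per_second(lambda: sum(range(10_000 * bs)), bs)
    print(f"batch {bs:>3}: {rate:,.0f} items/s")
```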

(6-4) UL Procyon Windows AI Inference: MobileNet V3 (float32)

(6-4b) UL Procyon Windows AI Inference: ResNet 50 (float32)

(6-4c) UL Procyon Windows AI Inference: Inception V4 (float32)

Regarding AI and inferencing workloads, there is virtually no benefit in opting for the Core i9-14900KS over the Core i9-14900K. While Intel takes the win in our TensorFlow-based benchmarks, AMD's Ryzen 9 7950X3D and 7950X both seem better suited to the types of AI workloads we've tested.
