The Overhyped Impact of New NPUs on AI: Separating Reality from Hype

The AI PC: A Reality Check

The AI PC revolution has been heavily promoted by chipmakers like Intel, AMD, and Qualcomm, all seeking to capitalize on the growing demand for AI technology. Their marketing suggests that AI performance depends heavily on the NPUs (Neural Processing Units) integrated into modern chips such as Intel's Core Ultra, AMD's Ryzen 8000 series, and Qualcomm's Snapdragon X Elite. In practice, however, the CPU and GPU still do much of the heavy lifting in AI computation, even though NPUs have been marketed as the key drivers of on-chip AI.

Benchmarking AI performance is a complex task because AI workloads vary widely across frameworks, models, and quantization schemes. UL's Procyon AI Inference Benchmark, however, allows a direct comparison of CPU, GPU, and NPU performance on the same workloads. The results show that while the NPU does make a difference, the GPU still outperforms both the NPU and the CPU in raw AI computation.
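This split between backends is why inference runtimes typically try accelerators in a preference order and fall back to the CPU when one is missing. As a minimal sketch of that fallback logic (the `pick_backend` function and its preference order are hypothetical, loosely modeled on how runtimes such as ONNX Runtime order their execution providers):

```python
def pick_backend(available, preference=("NPU", "GPU", "CPU")):
    """Return the first preferred backend present in `available`.

    Walks the preference list in order, mirroring how inference
    runtimes try accelerators first and fall back to the CPU.
    """
    for backend in preference:
        if backend in available:
            return backend
    raise RuntimeError("no usable compute backend found")

# On a machine without NPU support, the GPU is chosen:
print(pick_backend({"GPU", "CPU"}))  # -> GPU
```

The point of the fallback chain is the article's point in miniature: an AI application keeps working without an NPU, it just lands on a different (and often faster, in the GPU's case) backend.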

The NPU's real selling point is efficiency: longer battery life for AI features that need to run continuously over time. AI applications can run on CPUs and GPUs without a dedicated AI logic block, but the NPU's lower power draw makes it practical to run such workloads on battery, particularly always-on tasks like filtering out noise and background activity during video conferencing.
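The battery-life argument comes down to simple arithmetic: runtime is battery capacity divided by sustained power draw. A quick worked example, using purely hypothetical, illustrative wattage figures (real NPU and GPU power draw varies by chip and workload):

```python
def battery_hours(battery_wh, load_watts):
    """Hours a battery of `battery_wh` watt-hours lasts at a
    steady draw of `load_watts` watts."""
    return battery_wh / load_watts

# Hypothetical numbers: a 56 Wh laptop battery sustaining a
# background AI effect on the GPU (8 W) vs. on the NPU (2 W).
print(battery_hours(56, 8))  # GPU: 7.0 hours
print(battery_hours(56, 2))  # NPU: 28.0 hours
```

Even with made-up figures, the shape of the result explains why a low-power AI block matters for features that run all day, and why it matters far less for a desktop plugged into the wall.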

In the current landscape, consumers do not need an NPU to run AI on their PCs, especially if they already own a gaming laptop or desktop. NPUs from AMD, Intel, and Qualcomm are nice to have, but they are not must-haves. Chipmakers are nonetheless pushing NPU performance forward, with Intel promising a significant boost in its upcoming Lunar Lake chips. NPUs may well play a much larger role in AI computation in the future, but for now, the NPU is just one piece of the overall AI solution.