AI Realities: The Role of NPUs in PC AI and Benchmarking Insights


In recent months, Intel, AMD, Qualcomm, and PC manufacturers have been championing the era of the AI-enabled PC. With “AI” becoming the buzzword that “metaverse” once was, executives and investors are eager to leverage it to boost sales and stock prices.

To be sure, on-device AI draws on the Neural Processing Units (NPUs) built into chips such as Intel’s Core Ultra and AMD’s Ryzen 8000 series, with Qualcomm’s Snapdragon X Elite following suit. However, the current narrative positioning the NPU as the primary engine for on-chip AI is an oversimplification. In reality, the traditional computational blocks, the CPU and especially the GPU, carry more of the AI workload today.

Benchmarking AI performance remains a challenge due to the diverse nature of AI tasks, including image generation, large language models (LLMs), and application-specific enhancements. To gain a reality check on the significance of NPUs in AI calculations, we turn to UL’s Procyon AI inferencing benchmark, run on the Intel Core Ultra chip, specifically the Meteor Lake variant.

Key Benchmarking Insights:

  1. NPU Contribution: The NPU within the Core Ultra makes a notable contribution to AI performance, outscoring the CPU by 82%. However, the GPU outscores the NPU by 55%, underscoring the GPU’s substantial role in AI computations.
  2. Procyon Test Results:
    • Procyon (OpenVINO) NPU: 356
    • Procyon (OpenVINO) GPU: 552
    • Procyon (OpenVINO) CPU: 196
  3. Role of CPU and GPU: The GPU outperforms the CPU by 182% (nearly 2.8x its score), suggesting that a powerful graphics card or GPU is a more impactful investment for AI enthusiasts.
  4. AI on CPU and GPU: The benchmark highlights that AI applications can run on both the CPU and GPU without the need for a dedicated AI logic block. The effectiveness of different blocks becomes evident through testing.
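The percentage comparisons above follow directly from the three Procyon scores (higher is better). A minimal sketch of the arithmetic, using the scores reported in this article:

```python
# Procyon (OpenVINO) scores from the article; higher is better.
scores = {"CPU": 196, "NPU": 356, "GPU": 552}

def pct_faster(a: str, b: str) -> float:
    """Return how much faster block `a` is than block `b`, in percent."""
    return (scores[a] / scores[b] - 1) * 100

print(f"NPU vs CPU: {pct_faster('NPU', 'CPU'):.0f}% faster")  # 82%
print(f"GPU vs NPU: {pct_faster('GPU', 'NPU'):.0f}% faster")  # 55%
print(f"GPU vs CPU: {pct_faster('GPU', 'CPU'):.0f}% faster")  # 182%
```

Note that a score ratio of 2.82 (GPU vs CPU) means the GPU is 182% faster, not 182% of the CPU’s score.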

Intel argues that the NPU’s real advantage is power efficiency, translating to longer battery life in real-world scenarios such as lengthy Microsoft Teams calls, where AI-enhanced features like noise filtering and background segmentation run continuously.

As AI evolves, the role of NPUs may become more prominent. Intel’s promise of a threefold increase in NPU performance with the upcoming Lunar Lake chip indicates a potential shift. However, for consumers, the current landscape suggests that NPUs, while beneficial, are not indispensable. The overall AI experience on a PC involves a synergy between CPU, GPU, and NPU, with the latter being just one component of the holistic solution.

In conclusion, while AI PCs and NPUs are significant developments in 2024, consumers may not prioritize the presence of on-chip AI over cloud-based alternatives. As the year progresses, advancements in AI performance may reshape the landscape, but for now, a balanced understanding of the role each component plays is essential.