Geekbench Has a New Benchmark to Evaluate Devices for AI Workloads

New benchmark for AI capabilities: Geekbench has introduced Geekbench AI, a cross-platform tool designed to evaluate device performance specifically for AI workloads across various hardware components and software frameworks.

  • The benchmark assesses the performance of CPUs, GPUs, and NPUs (Neural Processing Units) in handling machine learning applications.
  • It provides a comprehensive evaluation based on both accuracy and speed, offering insights into how well devices can execute AI tasks.
  • Geekbench AI supports multiple frameworks, including ONNX, Core ML, TensorFlow Lite, and OpenVINO, so it fits into a wide range of AI development environments (the cross-backend idea is sketched below).
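
To make the cross-backend idea concrete, here is a minimal, hypothetical sketch, not Geekbench AI's actual code, of how one might time the same ONNX model on different ONNX Runtime execution providers (CPU vs. GPU). The model file name and input shape are placeholders you would supply yourself.

```python
# Illustrative sketch only: runs one ONNX model on whichever ONNX Runtime
# backends are available and reports average latency per inference.
import time

import numpy as np
import onnxruntime as ort

MODEL_PATH = "model.onnx"          # placeholder: any ONNX model you have locally
INPUT_SHAPE = (1, 3, 224, 224)     # placeholder: adjust to the model's input


def time_backend(providers, runs=20):
    """Return mean latency (seconds) for the model on the given providers."""
    session = ort.InferenceSession(MODEL_PATH, providers=providers)
    input_name = session.get_inputs()[0].name
    data = np.random.rand(*INPUT_SHAPE).astype(np.float32)
    session.run(None, {input_name: data})          # warm-up run
    start = time.perf_counter()
    for _ in range(runs):
        session.run(None, {input_name: data})
    return (time.perf_counter() - start) / runs


if __name__ == "__main__":
    for provider in ("CPUExecutionProvider", "CUDAExecutionProvider"):
        if provider not in ort.get_available_providers():
            print(f"{provider}: not available on this machine, skipping")
            continue
        latency = time_benchmark = time_backend([provider])
        print(f"{provider}: {latency * 1000:.1f} ms/inference")
```

A benchmark like Geekbench AI goes much further (standardized models, NPU backends, framework-specific runtimes), but the core mechanic of running identical workloads across different hardware and software stacks is the same.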

Performance metrics and scoring: The tool offers a nuanced approach to measuring AI performance by providing three distinct scores and an accuracy assessment.

  • Users receive scores for full precision, half precision, and quantized workloads, reflecting the different numeric precisions (typically 32-bit float, 16-bit float, and 8-bit integer) used to trade accuracy for speed in AI processing.
  • The accuracy measurement compares a workload’s outputs to expected results, a crucial metric for real-world AI applications (see the sketch after this list).
  • This multi-faceted scoring system allows for a more comprehensive understanding of a device’s AI capabilities beyond raw processing power.
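
As a rough illustration of pairing a speed measurement with an accuracy check, and emphatically not Geekbench AI's actual scoring formula, the sketch below runs a toy workload at full and half precision and reports latency alongside the deviation from the full-precision reference. A quantized (e.g., int8) variant would be compared against the same reference in the same way.

```python
# Illustrative sketch only: a tiny matrix-multiply "model" stands in for a real
# network. Each precision level is timed and scored against the float32 output.
import time

import numpy as np

rng = np.random.default_rng(0)
weights = rng.standard_normal((512, 512)).astype(np.float32)
inputs = rng.standard_normal((64, 512)).astype(np.float32)


def run(precision):
    """Run the toy workload at the given dtype, returning (outputs, latency)."""
    w = weights.astype(precision)
    x = inputs.astype(precision)
    start = time.perf_counter()
    out = x @ w
    latency = time.perf_counter() - start
    return out.astype(np.float32), latency


reference, _ = run(np.float32)   # full-precision output used as "expected results"

for name, precision in [("full (float32)", np.float32),
                        ("half (float16)", np.float16)]:
    out, latency = run(precision)
    # Accuracy proxy: relative deviation from the full-precision reference.
    error = np.mean(np.abs(out - reference)) / np.mean(np.abs(reference))
    print(f"{name}: {latency * 1000:.2f} ms, relative error {error:.2e}")
```

Note that on a CPU with NumPy, half precision is not necessarily faster; the speed benefit of reduced precision mostly shows up on GPUs and NPUs, which is exactly why a benchmark needs to report speed and accuracy together rather than either number alone.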

Wide platform support: Geekbench AI is designed to be a versatile benchmarking solution across the technology ecosystem.

  • The tool is available for major desktop operating systems including Windows, macOS, and Linux.
  • Mobile platforms are also supported, with versions for Android and iOS devices.
  • This broad availability enables consistent performance comparisons across different device types and operating systems.

Evolution from previous versions: Geekbench AI represents an evolution of the company’s efforts in AI benchmarking.

  • The tool was previously known as Geekbench ML when it was in preview in 2021.
  • The rebranding and official launch suggest refinements and improvements based on feedback and testing from the preview period.

Implications for the tech industry: The introduction of Geekbench AI could have significant impacts on how AI performance is measured and compared across the industry.

  • This standardized benchmark may influence how manufacturers design and market their AI-capable hardware, potentially driving innovation in AI processing capabilities.
  • For consumers and businesses, the tool could provide valuable insights when making purchasing decisions for AI-intensive applications.
  • The benchmark’s focus on both speed and accuracy aligns with the growing importance of AI in real-world applications, where both factors are critical for successful deployment.
