AI Vision Models Fail Basic Tests, Highlighting Significant Capability Gaps

State-of-the-art AI models struggle with basic visual reasoning tasks that are trivial for humans, highlighting significant gaps in their capabilities:

Key findings: Researchers tested four leading AI vision models on simple visual analysis tasks and found that they often fall well short of human-level performance:

  • The models struggled with tasks such as counting rows and columns in a blank grid, identifying circled letters in a word, and counting nested shapes.
  • Small changes to the tasks, like increasing the number of overlapping circles, led to significant drops in accuracy, suggesting the models are biased toward familiar patterns from their training data (a sketch of generating such a test image follows this list).
  • In some cases, the models provided nonsensical answers, like guessing “9” or “©” as a circled letter in a word.
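
To make the overlapping-circles test concrete, here is a minimal sketch of generating such an image with Python's Pillow library. This is not the researchers' actual test harness; the circle count, size, and spacing are hypothetical values chosen for illustration:

```python
# Minimal sketch (not the researchers' harness) of an overlapping-circles
# test image. Parameters below are hypothetical, chosen for illustration.
from PIL import Image, ImageDraw

def draw_overlapping_circles(n_circles: int, radius: int = 60,
                             overlap: int = 20,
                             path: str = "circles.png") -> int:
    """Draw n_circles in a row, each overlapping its neighbor, and
    return the ground-truth count for checking a model's answer."""
    step = 2 * radius - overlap  # centers closer than a diameter -> overlap
    width = 20 + (n_circles - 1) * step + 2 * radius
    img = Image.new("RGB", (width, 2 * radius + 20), "white")
    draw = ImageDraw.Draw(img)
    for i in range(n_circles):
        x = 10 + i * step
        draw.ellipse([x, 10, x + 2 * radius, 10 + 2 * radius],
                     outline="black", width=4)
    img.save(path)
    return n_circles

if __name__ == "__main__":
    truth = draw_overlapping_circles(6)
    print(f"Saved circles.png; ground truth = {truth}")
```

The saved image can then be sent to any vision-model API with a prompt such as “How many circles are in this image?” and the model's answer compared against the returned ground truth.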

Implications for AI development: The results highlight the limitations of current AI models when it comes to low-level abstract visual reasoning:

  • The models’ inability to generalize beyond the content they were trained on may be a key factor in their poor performance on these simple tasks.
  • Fine-tuning a model on images from one of the tests led to only modest improvements, indicating that the models struggle to generalize even with targeted additional training.
  • The researchers suggest that the “late fusion” approach of bolting vision encoders onto pre-trained language models may contribute to these capability gaps, and propose that an “early fusion” approach integrating visual encoding alongside language training could lead to better results (a conceptual sketch contrasting the two follows this list).
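
For readers unfamiliar with the terms, the sketch below illustrates the late-fusion pattern in PyTorch. It is a toy assumption for illustration, not code from the paper; the LateFusionVLM class, its module sizes, and the stand-in components are all hypothetical:

```python
import torch
import torch.nn as nn

class LateFusionVLM(nn.Module):
    """Toy illustration of late fusion: a vision encoder's features are
    projected into a text model's embedding space and prepended as an
    extra token. All components here are hypothetical stand-ins."""
    def __init__(self, vision_dim: int = 768, text_dim: int = 512):
        super().__init__()
        # Stand-in for a pre-trained image encoder (a ViT in practice).
        self.vision_encoder = nn.Linear(3 * 32 * 32, vision_dim)
        # The projection "glue" is often the main newly trained piece.
        self.project = nn.Linear(vision_dim, text_dim)
        # Stand-in for a pre-trained, text-only language model.
        layer = nn.TransformerEncoderLayer(d_model=text_dim, nhead=8,
                                           batch_first=True)
        self.language_model = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, image: torch.Tensor, text_embeds: torch.Tensor):
        v = self.vision_encoder(image.flatten(1)).unsqueeze(1)  # 1 visual token
        tokens = torch.cat([self.project(v), text_embeds], dim=1)
        return self.language_model(tokens)

model = LateFusionVLM()
out = model(torch.randn(2, 3, 32, 32), torch.randn(2, 10, 512))
print(out.shape)  # torch.Size([2, 11, 512]): 1 visual + 10 text tokens
# Early fusion would instead interleave image patches and text tokens in one
# shared transformer from the start of pre-training, so visual features are
# learned jointly with language rather than grafted on afterwards.
```

The design contrast: in late fusion only the projection (and sometimes the encoder) sees joint image-text training, while early fusion trains the whole stack on both modalities from the outset.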

Broader context: The findings are reminiscent of similar capability gaps seen in state-of-the-art language models:

  • Like vision models, language models can perform well on high-level tasks like summarizing lengthy texts, but often fail at basic math and spelling questions.
  • These gaps underscore the need for users to be highly skeptical of the results provided by generative AI models, as their accuracy can vary greatly depending on the specific task.

Looking ahead: The current limitations of AI models in visual reasoning raise questions about their practical applications and the challenges in addressing these shortcomings:

  • With accuracy rates well below 99% on simple tasks, the practical utility of these models may be limited to creative applications where inaccuracy can be tolerated.
  • Unlike humans, who can be easily course-corrected to prevent future mistakes, the “root cause” of errors in AI models is often difficult to identify and address, making it challenging to ensure future errors won’t occur.
  • The researchers’ findings suggest that significant advancements in AI training approaches may be needed to close the capability gaps highlighted by these basic visual reasoning tests.
