AI Vision Models Fail Basic Tests, Highlighting Significant Capability Gaps

State-of-the-art AI models struggle with basic visual reasoning tasks that are trivial for humans, highlighting significant gaps in their capabilities:

Key findings: Researchers tested four leading AI vision models on simple visual reasoning tasks and found that they often fall well short of human-level performance:

  • The models struggled with tasks such as counting rows and columns in a blank grid, identifying circled letters in a word, and counting nested shapes.
  • Small changes to the tasks, like increasing the number of overlapping circles, led to significant drops in accuracy, suggesting the models are biased towards familiar patterns they were trained on.
  • In some cases, the models provided nonsensical answers, like guessing “9” or “©” as a circled letter in a word.
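What makes these failures striking is how mechanically simple the tasks are. A toy generator for a "count the rows and columns" style test (illustrative only, not the researchers' benchmark code) shows that the ground truth is fully determined by two parameters:

```python
def make_grid(rows, cols):
    """Render a blank table as ASCII art; the correct answer to
    'how many rows and columns?' is just (rows, cols) by construction."""
    horiz = "+" + "---+" * cols      # horizontal border line
    body = "|" + "   |" * cols       # one row of empty cells
    lines = []
    for _ in range(rows):
        lines.append(horiz)
        lines.append(body)
    lines.append(horiz)              # closing border
    return "\n".join(lines)

# A 3x4 grid: a human answers "3 rows, 4 columns" at a glance.
print(make_grid(3, 4))
```

Varying `rows` and `cols` yields an endless supply of test cases with known answers, which is exactly what makes the models' inconsistent performance on them measurable.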

Implications for AI development: The results highlight the limitations of current AI models when it comes to low-level abstract visual reasoning:

  • The models’ inability to generalize beyond the content they were trained on may be a key factor in their poor performance on these simple tasks.
  • Fine-tuning a model using specific images from one of the tests only led to modest improvements, indicating that the models struggle to generalize even with additional training.
  • The researchers suggest that the “late fusion” approach of adding vision encoders onto pre-trained language models may contribute to these capability gaps, and propose that an “early fusion” approach integrating visual encoding alongside language training could lead to better results.
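The late/early fusion distinction can be sketched in a few lines. This is a minimal illustration with toy dimensions and made-up variable names, not the researchers' architecture: in late fusion, vision features are projected into a language model's existing embedding space after the fact, while in early fusion both modalities are embedded into a shared space that is trained jointly from the start.

```python
import numpy as np

rng = np.random.default_rng(0)
D_VISION, D_MODEL = 6, 8  # toy feature sizes (illustrative)

def late_fusion(text_emb, vision_feat, projection):
    """Project vision-encoder features into a pre-trained language model's
    embedding space and prepend them as pseudo-tokens. Only `projection`
    (and perhaps the vision encoder) is trained, so visual information must
    squeeze into a representation shaped by text alone."""
    vision_emb = vision_feat @ projection            # (n_patches, D_MODEL)
    return np.concatenate([vision_emb, text_emb])    # one fused sequence

def early_fusion(text_onehot, vision_patches, W_text, W_vision):
    """Embed both modalities into a shared space from the start; in a real
    model, W_text, W_vision, and the layers above them would all be trained
    together on paired image-text data."""
    text_emb = text_onehot @ W_text                  # (n_tokens, D_MODEL)
    vision_emb = vision_patches @ W_vision           # (n_patches, D_MODEL)
    return np.concatenate([vision_emb, text_emb])
```

In both cases the model downstream sees one fused token sequence; the difference the researchers point to is whether the visual embedding was learned jointly with the language representation or retrofitted onto it.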

Broader context: The findings are reminiscent of similar capability gaps seen in state-of-the-art language models:

  • Like vision models, language models can perform well on high-level tasks like summarizing lengthy texts, but often fail at basic math and spelling questions.
  • These gaps underscore the need for users to be highly skeptical of the results provided by generative AI models, as their accuracy can vary greatly depending on the specific task.

Looking ahead: The current limitations of AI models in visual reasoning raise questions about their practical applications and the challenges in addressing these shortcomings:

  • With accuracy rates well below 99% on simple tasks, the practical utility of these models may be limited to creative applications where inaccuracy can be tolerated.
  • Unlike humans, who can be easily course-corrected, the “root cause” of an AI model’s errors is often difficult to identify and address, making it hard to guarantee the same mistake won’t recur.
  • The researchers’ findings suggest that significant advancements in AI training approaches may be needed to close the capability gaps highlighted by these basic visual reasoning tests.
