AI Vision Models Fail Basic Tests, Highlighting Significant Capability Gaps

State-of-the-art AI models struggle with basic visual reasoning tasks that are trivial for humans, highlighting significant gaps in their capabilities.

Key findings: Researchers tested four leading AI vision models on simple visual analysis tasks and found that they often fall well short of human-level performance:

  • The models struggled with tasks such as counting the rows and columns in a blank grid, identifying a circled letter in a word, and counting nested shapes (a sketch of one such test appears after this list).
  • Small changes to the tasks, like increasing the number of overlapping circles, led to significant drops in accuracy, suggesting the models are biased towards familiar patterns they were trained on.
  • In some cases, the models provided nonsensical answers, like guessing “9” or “©” as a circled letter in a word.
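
To make the test format concrete, here is a minimal sketch of how a stimulus of this kind could be generated and posed to a model. This is an illustrative reconstruction, not the researchers' actual code: the drawing parameters, file name, and example prompt are all assumptions.

```python
# Illustrative reconstruction of a nested-circles counting stimulus.
# Drawing parameters and the example prompt are assumptions, not the
# study's actual materials.
from PIL import Image, ImageDraw

def nested_circles_image(n_circles: int, size: int = 512) -> Image.Image:
    """Draw n concentric circles on a white canvas."""
    img = Image.new("RGB", (size, size), "white")
    draw = ImageDraw.Draw(img)
    center = size // 2
    step = size // (2 * (n_circles + 1))  # radius increment between rings
    for i in range(1, n_circles + 1):
        r = i * step
        draw.ellipse(
            (center - r, center - r, center + r, center + r),
            outline="black",
            width=3,
        )
    return img

if __name__ == "__main__":
    nested_circles_image(5).save("nested_circles.png")
    # The image would then be sent to each vision model with a prompt like:
    # "How many circles are in this image? Answer with a single number."
```

Accuracy on such a benchmark is simply the fraction of images for which the model's answer matches the known count, which is what makes failures on stimuli this simple so striking.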

Implications for AI development: The results highlight the limitations of current AI models when it comes to low-level abstract visual reasoning:

  • The models’ inability to generalize beyond the content they were trained on may be a key factor in their poor performance on these simple tasks.
  • Fine-tuning a model using specific images from one of the tests only led to modest improvements, indicating that the models struggle to generalize even with additional training.
  • The researchers suggest that the “late fusion” approach of grafting vision encoders onto already-trained language models may contribute to these capability gaps, and propose that an “early fusion” approach, integrating visual encoding alongside language training from the start, could lead to better results (the contrast is sketched below).
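
To clarify that distinction, the sketch below contrasts the two strategies in PyTorch. Everything here is a placeholder: the tiny Transformer stands in for a full language model, the linear layer for a real vision encoder, and no dimension or module reflects any specific system from the study.

```python
# Schematic contrast between "late fusion" and "early fusion" multimodal
# training. All modules and sizes are illustrative placeholders.
import torch
import torch.nn as nn


class LateFusionVLM(nn.Module):
    """Vision encoder grafted onto a pre-trained, frozen language model:
    visual information only enters through a learned projection."""

    def __init__(self, d_vision: int = 1024, d_model: int = 768):
        super().__init__()
        self.vision_proj = nn.Linear(d_vision, d_model)  # stand-in for a ViT + adapter
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.language_model = nn.TransformerEncoder(layer, num_layers=2)
        for p in self.language_model.parameters():  # LM was trained on text alone
            p.requires_grad = False

    def forward(self, image_feats: torch.Tensor, text_embeds: torch.Tensor):
        vis_tokens = self.vision_proj(image_feats)
        return self.language_model(torch.cat([vis_tokens, text_embeds], dim=1))


class EarlyFusionVLM(nn.Module):
    """Visual and text tokens are trained jointly from the start, so the
    backbone can learn cross-modal structure directly."""

    def __init__(self, d_vision: int = 1024, d_model: int = 768):
        super().__init__()
        self.vision_proj = nn.Linear(d_vision, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)  # fully trainable

    def forward(self, image_feats: torch.Tensor, text_embeds: torch.Tensor):
        tokens = torch.cat([self.vision_proj(image_feats), text_embeds], dim=1)
        return self.backbone(tokens)
```

The difference the researchers point to is where visual structure gets learned: in late fusion the language backbone's weights are fixed before it ever sees an image, while early fusion lets both modalities shape the same weights during training.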

Broader context: The findings are reminiscent of similar capability gaps seen in state-of-the-art language models:

  • Like vision models, language models can perform well on high-level tasks like summarizing lengthy texts, but often fail at basic math and spelling questions.
  • These gaps underscore the need for users to be highly skeptical of the results provided by generative AI models, as their accuracy can vary greatly depending on the specific task.

Looking ahead: The current limitations of AI models in visual reasoning raise questions about their practical applications and the challenges in addressing these shortcomings:

  • With accuracy rates well below 99% even on simple tasks, the practical utility of these models may be limited to creative applications where some inaccuracy can be tolerated.
  • Unlike human mistakes, which can often be corrected with simple feedback, errors in AI models rarely have an identifiable “root cause” that can be addressed, making it difficult to ensure the same error won’t recur.
  • The researchers’ findings suggest that significant advancements in AI training approaches may be needed to close the capability gaps highlighted by these basic visual reasoning tests.
Can you do better than leading AI models on these basic vision tests?
