AI Vision Models Fail Basic Tests, Highlighting Significant Capability Gaps

State-of-the-art AI models struggle with basic visual reasoning tasks that are trivial for humans, highlighting significant gaps in their capabilities:

Key findings: Researchers tested four leading AI vision models on simple visual analysis tasks and found that they often fall well short of human-level performance:

  • The models struggled with tasks such as counting the rows and columns of a blank grid, identifying a circled letter in a word, and counting nested shapes (a sketch of one such stimulus follows this list).
  • Small changes to the tasks, like increasing the number of overlapping circles, led to significant drops in accuracy, suggesting the models are biased towards familiar patterns they were trained on.
  • In some cases, the models provided nonsensical answers, like guessing “9” or “©” as a circled letter in a word.
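
To make the tasks concrete, here is a minimal sketch of how a stimulus of this kind might be generated with the Pillow imaging library. The function name, dimensions, and shape counts are illustrative assumptions; the study's actual test images are not reproduced here.

```python
# Illustrative sketch only: draws n concentric squares, a counting task
# humans solve at a glance but the tested models frequently got wrong.
# All sizes and counts below are assumptions, not the study's stimuli.
from PIL import Image, ImageDraw

def nested_squares(n: int, size: int = 400, step: int = 30) -> Image.Image:
    """Render n nested squares on a white canvas."""
    img = Image.new("RGB", (size, size), "white")
    draw = ImageDraw.Draw(img)
    for i in range(n):
        offset = 10 + i * step  # shrink each successive square inward
        draw.rectangle(
            [offset, offset, size - offset, size - offset],
            outline="black",
            width=3,
        )
    return img

# Save a five-square test image; the correct answer is simply n.
nested_squares(5).save("nested_squares.png")
```

Varying `n` or the spacing between shapes mirrors the kind of small perturbation that, per the findings above, caused model accuracy to drop sharply.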

Implications for AI development: The results highlight the limitations of current AI models when it comes to low-level abstract visual reasoning:

  • The models’ inability to generalize beyond the content they were trained on may be a key factor in their poor performance on these simple tasks.
  • Fine-tuning a model on images drawn from one of the tests led to only modest improvements, indicating that the models struggle to generalize even with task-specific training.
  • The researchers suggest that the “late fusion” approach of grafting a vision encoder onto an already pre-trained language model may contribute to these capability gaps. They propose that an “early fusion” approach, which integrates visual encoding alongside language training from the start, could yield better results; a sketch contrasting the two follows this list.
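
The distinction between the two approaches is easiest to see in code. The following PyTorch sketch is a loose illustration under stated assumptions: the class names, dimensions, and stand-in linear layers are invented for clarity and do not reflect the researchers' implementation.

```python
# Illustrative sketch of late vs. early fusion. The nn.Linear layers are
# stand-ins for real encoders/backbones; all names and sizes are assumptions.
import torch
import torch.nn as nn

class LateFusionVLM(nn.Module):
    """Late fusion: a separately pre-trained vision encoder is grafted
    onto a text-only language model through a learned projection."""
    def __init__(self, d_vis: int = 64, d_lm: int = 128):
        super().__init__()
        self.vision_encoder = nn.Linear(d_vis, d_vis)  # stand-in for a frozen ViT
        self.projector = nn.Linear(d_vis, d_lm)        # the only newly trained bridge
        self.language_model = nn.Linear(d_lm, d_lm)    # stand-in for a frozen LLM

    def forward(self, image_feats: torch.Tensor, text_embeds: torch.Tensor):
        vis = self.projector(self.vision_encoder(image_feats))
        # Image tokens meet text tokens only here; the language model
        # never saw visual input during its own pre-training.
        return self.language_model(torch.cat([vis, text_embeds], dim=1))

class EarlyFusionVLM(nn.Module):
    """Early fusion: image patches are tokenized like text, and one shared
    backbone is trained on both modalities from the start."""
    def __init__(self, d_model: int = 128):
        super().__init__()
        self.patch_embed = nn.Linear(d_model, d_model)  # images become tokens
        self.backbone = nn.Linear(d_model, d_model)     # jointly trained end to end

    def forward(self, image_tokens: torch.Tensor, text_tokens: torch.Tensor):
        fused = torch.cat([self.patch_embed(image_tokens), text_tokens], dim=1)
        return self.backbone(fused)

# Shapes are (batch, num_tokens, dim); both models emit fused token features.
late = LateFusionVLM()(torch.randn(1, 4, 64), torch.randn(1, 8, 128))
early = EarlyFusionVLM()(torch.randn(1, 4, 128), torch.randn(1, 8, 128))
```

The hypothesis, as the researchers frame it, is that a backbone trained on both modalities from the outset (the second pattern) may develop the low-level visual grounding that a bolted-on encoder (the first pattern) never forces the language model to learn.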

Broader context: The findings are reminiscent of similar capability gaps seen in state-of-the-art language models:

  • Like vision models, language models can perform well on high-level tasks like summarizing lengthy texts, but often fail at basic math and spelling questions.
  • These gaps underscore the need for users to be highly skeptical of the results provided by generative AI models, as their accuracy can vary greatly depending on the specific task.

Looking ahead: The current limitations of AI models in visual reasoning raise questions about their practical applications and the challenges in addressing these shortcomings:

  • With accuracy well below 99% even on these simple tasks, the practical utility of the models may be limited to creative applications where inaccuracy can be tolerated.
  • Unlike a human, who can be quickly course-corrected to avoid repeating a mistake, an AI model’s errors often have a “root cause” that is difficult to identify and address, making it hard to ensure the same errors won’t recur.
  • The researchers’ findings suggest that significant advancements in AI training approaches may be needed to close the capability gaps highlighted by these basic visual reasoning tests.
Can you do better than leading AI models on these basic vision tests?
