State-of-the-art AI models struggle with basic visual reasoning tasks that are trivial for humans, highlighting significant gaps in their capabilities:

Key findings: Researchers tested four top-level AI vision models on simple visual analysis tasks and found that they often fall well short of human-level performance:

  • The models struggled with tasks such as counting the rows and columns in a blank grid, identifying a circled letter in a word, and counting nested shapes (a minimal sketch of such a probe follows this list).
  • Small changes to the tasks, like increasing the number of overlapping circles, led to significant drops in accuracy, suggesting the models are biased towards familiar patterns they were trained on.
  • In some cases, the models provided nonsensical answers, like guessing “9” or “©” as a circled letter in a word.
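To make the failure mode concrete, here is a minimal sketch of how one of these probes could be set up: draw a blank grid with a known number of rows and columns, then ask a vision-language model to count them. This is not the researchers’ actual test harness; the model call is left as a hypothetical placeholder, and the grid dimensions are arbitrary.

```python
# Minimal sketch (not the paper's actual test harness): draw a blank grid with a
# known number of rows and columns, then ask a vision-language model to count them.
# The model call itself is left as a hypothetical placeholder.
from PIL import Image, ImageDraw

def make_grid(rows: int, cols: int, cell: int = 60) -> Image.Image:
    """Draw an empty rows x cols grid in black lines on a white background."""
    img = Image.new("RGB", (cols * cell + 1, rows * cell + 1), "white")
    draw = ImageDraw.Draw(img)
    for r in range(rows + 1):    # horizontal lines
        draw.line([(0, r * cell), (cols * cell, r * cell)], fill="black")
    for c in range(cols + 1):    # vertical lines
        draw.line([(c * cell, 0), (c * cell, rows * cell)], fill="black")
    return img

if __name__ == "__main__":
    rows, cols = 4, 7            # ground truth for this trial
    grid = make_grid(rows, cols)
    grid.save("grid_probe.png")  # image to send to the model under test
    prompt = "How many rows and how many columns does this grid contain?"
    # answer = query_vlm(grid, prompt)   # hypothetical call to the vision model's API
    print(f"Saved grid_probe.png; expected answer: {rows} rows x {cols} columns")
```

Scoring such a probe is then just string or number matching against the known row and column counts, which is what makes these tasks so easy to grade and the models’ low accuracy so striking.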

Implications for AI development: The results highlight the limitations of current AI models when it comes to low-level abstract visual reasoning:

  • The models’ inability to generalize beyond the content they were trained on may be a key factor in their poor performance on these simple tasks.
  • Fine-tuning a model using specific images from one of the tests only led to modest improvements, indicating that the models struggle to generalize even with additional training.
  • The researchers suggest that the “late fusion” approach of adding vision encoders onto pre-trained language models may contribute to these capability gaps, and propose that an “early fusion” approach that integrates visual encoding alongside language training from the start could produce better results; the two designs are contrasted in the sketch below.
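The contrast can be illustrated with a conceptual sketch. The classes, dimensions, and the choice to freeze the language model in the late-fusion case are illustrative assumptions, not details from the paper or any specific model: in late fusion, a pre-trained vision encoder’s outputs are projected into an already-trained language model, while in early fusion, image patches and text tokens feed one shared backbone trained on both modalities together.

```python
# Conceptual contrast only; dimensions and module choices are illustrative,
# not any specific model's architecture.
import torch
import torch.nn as nn

class LateFusionVLM(nn.Module):
    """Late fusion: a projection layer grafts vision features onto a frozen, pre-trained LLM."""
    def __init__(self, vision_encoder: nn.Module, llm: nn.Module,
                 vision_dim: int = 1024, llm_dim: int = 4096):
        super().__init__()
        self.vision_encoder = vision_encoder              # e.g. a pre-trained ViT
        self.projector = nn.Linear(vision_dim, llm_dim)   # typically the main newly trained part
        self.llm = llm                                    # pre-trained language model
        for p in self.llm.parameters():                   # language weights stay frozen
            p.requires_grad = False

    def forward(self, images: torch.Tensor, text_embeds: torch.Tensor) -> torch.Tensor:
        vision_tokens = self.projector(self.vision_encoder(images))
        return self.llm(torch.cat([vision_tokens, text_embeds], dim=1))

class EarlyFusionVLM(nn.Module):
    """Early fusion: image patches and text tokens share one backbone trained jointly."""
    def __init__(self, patch_dim: int = 768, model_dim: int = 512, vocab_size: int = 32000):
        super().__init__()
        self.patch_embed = nn.Linear(patch_dim, model_dim)       # image patches in
        self.token_embed = nn.Embedding(vocab_size, model_dim)   # text tokens in
        layer = nn.TransformerEncoderLayer(d_model=model_dim, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)  # shared stack

    def forward(self, patches: torch.Tensor, token_ids: torch.Tensor) -> torch.Tensor:
        seq = torch.cat([self.patch_embed(patches), self.token_embed(token_ids)], dim=1)
        return self.backbone(seq)                         # one model sees both modalities

if __name__ == "__main__":
    # Toy stand-ins so the late-fusion sketch runs end to end on random data.
    late = LateFusionVLM(vision_encoder=nn.Linear(1024, 1024), llm=nn.Identity())
    out = late(torch.randn(1, 16, 1024), torch.randn(1, 8, 4096))
    print(out.shape)  # torch.Size([1, 24, 4096]): image tokens prepended to text embeddings
```

The design point the researchers raise is that in the late-fusion layout the language model never learns visual structure during its own pre-training, whereas an early-fusion backbone sees both modalities from the beginning.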

Broader context: The findings are reminiscent of similar capability gaps seen in state-of-the-art language models:

  • Like vision models, language models can perform well on high-level tasks like summarizing lengthy texts, but often fail at basic math and spelling questions.
  • These gaps underscore the need for users to be highly skeptical of the results provided by generative AI models, as their accuracy can vary greatly depending on the specific task.

Looking ahead: The current limitations of AI models in visual reasoning raise questions about their practical applications and the challenges in addressing these shortcomings:

  • With accuracy rates well below 99% on simple tasks, the practical utility of these models may be limited to creative applications where inaccuracy can be tolerated.
  • Unlike human mistakes, which can often be corrected with simple feedback, errors in AI models have “root causes” that are difficult to identify and address, making it hard to guarantee the same errors won’t recur.
  • The researchers’ findings suggest that significant advancements in AI training approaches may be needed to close the capability gaps highlighted by these basic visual reasoning tests.