AI critics who downplay or ignore job displacement fears are stuck in 2023, argues top tech journalist

Vox tech journalist Kelsey Piper argues that academic critics are missing the point about AI’s real-world impact by focusing too heavily on whether AI can “truly reason.” While philosophers and linguists debate AI’s cognitive limitations, the technology is already beginning to displace workers across industries, making theoretical questions about reasoning increasingly irrelevant to practical concerns about employment and economic disruption.

What you should know: Recent academic papers claiming AI lacks genuine reasoning abilities are gaining widespread attention, but they may be distracting from more pressing concerns about AI’s actual capabilities.

  • A viral Apple paper argued AI models face “fundamental scaling limitations” in reasoning, but Piper notes the models failed primarily because of output-formatting requirements, not an inability to solve the problems.
  • When asked to write programs that output correct answers, the AI models “do so effortlessly,” suggesting the limitation is in expression format rather than problem-solving ability.
  • In Piper’s analysis, the paper’s findings are “not surprising” and demonstrate little about AI’s practical capabilities.

The employment reality: AI is already beginning to impact job markets in measurable ways, regardless of philosophical debates about its reasoning capabilities.

  • Entry-level hiring in professions like law, where tasks are “AI-automatable,” appears to be contracting.
  • The job market for recent college graduates “looks ugly” as AI tools become more capable.
  • Piper regularly tests AI tools on her own newsletter writing and notes they’re “getting better all the time,” leading her to expect her job will be automated “in the next few years.”

Why academic criticism matters: Piper argues that by clinging to outdated assessments of AI capabilities, academics are making themselves irrelevant at the moment their expertise is most needed.

  • “Many [academics] dislike AI, so they don’t follow it closely,” Cambridge professor Harry Law observes. “They don’t follow it closely so they still think that the criticisms of 2023 hold water. They don’t.”
  • Critics are “often trapped in 2023, giving accounts of what AI can and cannot do that haven’t been correct for two years.”
  • This disconnect prevents academics from making “important contributions” to understanding AI’s real implications for society.

The bigger picture: Cambridge’s Harry Law emphasizes that AI’s practical impact doesn’t depend on philosophical questions about consciousness or reasoning.

  • “Whether or not they are simulating thinking has no bearing on whether or not the machines are capable of rearranging the world for better or worse,” Law argues.
  • For employment effects and potential risks, “what matters isn’t whether AIs can be induced to make silly mistakes, but what they can do when set up for success.”
  • The focus should shift from theoretical limitations to practical capabilities and their societal implications.
Source: Kelsey Piper, “AI doesn’t have to reason to take your job” (Vox)