AI Pioneer Warns of Secretive LLMs, Advocates for User-Owned Alternative

Illia Polosukhin, a key contributor to the development of transformers, is concerned about the secretive and profit-driven nature of large language models (LLMs) and aims to create an open source, user-owned AI model to ensure transparency and accountability.

Key concerns with current LLMs: Polosukhin believes that the lack of transparency in LLMs, even from companies founded on openness, poses risks as the technology improves:

  • The training data and model weights are often undisclosed, making it difficult to assess potential biases and how the models arrive at their outputs.
  • As models become more sophisticated, they may be better at manipulating people and generating revenue for the companies that control them.

Limitations of regulation: Polosukhin has little faith in the ability of regulators to effectively oversee and limit the development of LLMs:

  • The complexity of the models makes it challenging for regulators to assess safety margins and parameters, often requiring them to rely on the companies themselves for guidance.
  • Larger companies are adept at influencing regulatory bodies, potentially leading to a situation where “the watchers are the watchees.”

The case for user-owned AI: As an alternative, Polosukhin proposes an open source, decentralized model with a neutral platform that aligns incentives and allows for community ownership:

  • Developers are already using Polosukhin’s Near Foundation platform to create applications that could work on this open source model, with an incubation program in place to support startups in the effort.
  • A promising application is a system for distributing micropayments to creators whose content feeds AI models, addressing intellectual property concerns.
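Neither the article nor the Near Foundation specifies how such a micropayment system would be implemented. As a rough illustration only, the Python sketch below (with hypothetical names such as distribute_micropayments and made-up attribution weights) shows the basic idea: splitting a payment pool among creators in proportion to how much their content is credited with contributing to a model's output.

    from decimal import Decimal

    def distribute_micropayments(pool, attribution):
        """Split a payment pool among creators pro rata to attribution weights.

        pool        -- total amount to distribute (Decimal)
        attribution -- dict mapping creator id -> attribution weight
        (Both the function and the weighting scheme are hypothetical, for illustration.)
        """
        total_weight = sum(attribution.values())
        if total_weight == 0:
            return {}
        return {
            creator: (pool * Decimal(weight) / Decimal(total_weight)).quantize(Decimal("0.0001"))
            for creator, weight in attribution.items()
        }

    # Hypothetical example: a $1.00 inference fee split among three creators
    payouts = distribute_micropayments(
        Decimal("1.00"),
        {"creator_a": 3, "creator_b": 1, "creator_c": 1},
    )
    print(payouts)  # creator_a gets 0.6000, creator_b and creator_c get 0.2000 each

In practice, the hard part is producing the attribution weights themselves (deciding which content influenced which outputs), not the arithmetic of the split.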

Challenges and concerns: Implementing a user-owned AI model faces several obstacles:

  • Funding the development of a sophisticated foundation model from scratch remains a significant challenge, with no clear source of investment identified.
  • The potential for bad actors to abuse openly accessible powerful models is a persistent concern, although Polosukhin argues that open systems are not inherently worse than the current situation.

The urgency of action: Both Polosukhin and his collaborator, Jakob Uszkoreit, believe that if user-owned AI does not emerge before the development of artificial general intelligence, the consequences could be disastrous:

  • If a single corporation or a small group of companies control a “money-printing machine” in the form of self-improving AI, it could create a zero-sum game that destabilizes the economy and concentrates power in the hands of a few.

Reflection on the transformers breakthrough: Despite the risks that come with advancing AI, Polosukhin does not regret his role in the development of transformers. He believes the breakthrough would have happened regardless of his involvement, and that user-owned AI can help level the playing field and mitigate those risks.

He Helped Invent Generative AI. Now He Wants to Save It
