Illia Polosukhin, a key contributor to the development of transformers, is concerned about the secretive and profit-driven nature of large language models (LLMs) and aims to create an open source, user-owned AI model to ensure transparency and accountability.

Key concerns with current LLMs: Polosukhin believes that the lack of transparency in LLMs, even from companies founded on openness, poses risks as the technology improves:

  • The data used to train models and the model weights are often unknown, making it difficult to assess potential biases and decision-making processes.
  • As models become more sophisticated, they may be better at manipulating people and generating revenue for the companies that control them.

Limitations of regulation: Polosukhin has little faith in the ability of regulators to effectively oversee and limit the development of LLMs:

  • The complexity of the models makes it challenging for regulators to assess safety margins and parameters, often requiring them to rely on the companies themselves for guidance.
  • Larger companies are adept at influencing regulatory bodies, potentially leading to a situation where “the watchers are the watchees.”

The case for user-owned AI: As an alternative, Polosukhin proposes an open source, decentralized model with a neutral platform that aligns incentives and allows for community ownership:

  • Developers are already using Polosukhin’s Near Foundation platform to create applications that could work on this open source model, with an incubation program in place to support startups in the effort.
  • A promising application is a system for distributing micropayments to creators whose content feeds AI models, addressing intellectual property concerns.

Challenges and concerns: Implementing a user-owned AI model faces several obstacles:

  • Funding the development of a sophisticated foundation model from scratch remains a significant challenge, with no clear source of investment identified.
  • The potential for bad actors to abuse openly accessible powerful models is a persistent concern, although Polosukhin argues that open systems are not inherently worse than the current situation.

The urgency of action: Both Polosukhin and his collaborator, Jacob Uszkoreit, believe that if user-owned AI does not emerge before the development of artificial general intelligence, the consequences could be disastrous:

  • If a single corporation or a small group of companies controls a “money-printing machine” in the form of self-improving AI, it could create a zero-sum game that destabilizes the economy and concentrates power in the hands of a few.

Reflection on the transformers breakthrough: Despite the risks that accompany AI's advancement, Polosukhin does not regret his role in developing transformers. He believes the breakthrough would have occurred regardless of his involvement, and that user-owned AI can help level the playing field and mitigate those risks.

He Helped Invent Generative AI. Now He Wants to Save It