How LLMs on the ‘Edge’ Could Solve the AI Data Center Problem

The rise of edge AI: LLMs on the edge, models that run natively on personal devices rather than in the cloud, are emerging as a potential way to ease the strain that AI's growing power demands place on data centers.

Current state of edge AI adoption: While edge AI shows promise, its widespread implementation is still in the early stages and faces several challenges.

  • Smartphone manufacturers are working to shrink LLMs from the 175 billion parameters of GPT-3 to around 2 billion for edge devices (a sketch of loading such a slimmed-down model follows this list).
  • Early deployments of edge AI are likely to focus on scenarios where errors and "hallucinations" have lower stakes, such as recommendation engines and AI-powered internet searches.
  • The adoption of AI-enabled devices may be slowed initially by higher prices, but costs are expected to decrease as adoption increases.
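
To make the 2-billion-parameter target above concrete, here is a minimal sketch of loading a small model in 4-bit precision so it fits in a device-class memory budget. It assumes the Hugging Face transformers and bitsandbytes packages; the model identifier is illustrative, and the article does not prescribe any particular toolchain.

```python
# Minimal sketch: load a ~2B-parameter model in 4-bit precision so it fits
# the memory budget of a phone- or laptop-class device.
# Assumes Hugging Face `transformers` + `bitsandbytes`; the checkpoint name
# is illustrative, not an endorsement.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "google/gemma-2b"  # any ~2B-parameter checkpoint works here

quant_config = BitsAndBytesConfig(load_in_4bit=True)  # roughly 4x smaller than fp16

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",  # places layers on whatever hardware is available
)

inputs = tokenizer("Edge AI lets devices", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```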

Impact on data centers: The shift towards edge AI is unlikely to immediately alleviate the strain on data centers, but it could lead to changes in the long term.

  • In the near future, models running on the edge will still require training in data centers, maintaining the current high demand for AI processing power.
  • However, smaller, more focused LLMs are on the rise: Gartner predicts that by 2027 more than 50% of enterprise GenAI models will be industry- or function-specific.
  • As edge LLMs gain momentum, they promise to reduce the amount of AI processing needed in centralized data centers, potentially easing the strain on these facilities.

Security and privacy benefits: LLMs on the edge offer significant advantages in terms of data protection and user privacy.

  • Edge AI can help mitigate privacy concerns associated with cloud-based LLMs by processing sensitive information locally on devices (see the local-inference sketch after this list).
  • This approach reduces the risk of exposing personally identifiable information, healthcare data, and corporate secrets to potential cybersecurity breaches.
  • The move towards smaller LLMs that can be contained within enterprise data centers or run on local devices addresses many of the ongoing security and privacy concerns posed by broad usage of cloud-based LLMs.
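
As an illustration of the local-processing point above, the following sketch runs a quantized model entirely on-device using the llama-cpp-python bindings, so prompts containing sensitive data never leave the machine. The model path is a hypothetical local file; this is one possible setup, not the article's prescribed approach.

```python
# Sketch of fully local inference: the prompt (which may contain PII or
# corporate data) is processed on-device, with no network call made.
# Assumes the `llama-cpp-python` bindings and a quantized GGUF model file
# already downloaded to disk; the file path is illustrative.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/small-llm-q4.gguf",  # hypothetical local checkpoint
    n_ctx=2048,  # context window; kept small to limit RAM use
)

result = llm(
    "Summarize this patient note: ...",
    max_tokens=128,
)
print(result["choices"][0]["text"])
```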

Timeline for widespread adoption: The implementation of LLMs on the edge is expected to be a gradual process, with certain industries and use cases leading the way.

  • Forrester’s research indicates that 67% of infrastructure hardware decision-makers have already adopted edge intelligence or are in the process of doing so.
  • Widespread consumer and business use of edge LLMs is expected to take 2-3 years, as hardware prices fall and adoption ramps up.
  • Early adopters are likely to be industries with field operations, such as utilities, mining, and transportation maintenance, where the business value justifies the cost of LLM-capable devices.
  • More advanced use cases, like immersive retail and autonomous vehicles, may take five years or more to implement edge LLMs effectively.

Technological advancements enabling edge AI: Several developments are making LLMs on the edge more feasible and efficient.

  • Developers are pruning models to reduce their parameter counts, making them more manageable for edge devices (see the pruning sketch after this list).
  • Efforts are being made to shift GenAI models from GPUs to CPUs, reducing the processing footprint.
  • Emerging standards for compiling models to run across diverse hardware are also contributing to the advancement of edge AI capabilities.
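
To ground the pruning bullet above, here is a minimal sketch of magnitude pruning on a single layer using PyTorch's built-in utilities. Production LLM pruning operates over every layer and is typically followed by fine-tuning to recover accuracy; this example only shows the core mechanic.

```python
# Minimal sketch of magnitude pruning with PyTorch's built-in utilities:
# zero out the smallest-magnitude weights in a layer, one technique for
# shrinking models toward edge-friendly sizes.
import torch
import torch.nn.utils.prune as prune

layer = torch.nn.Linear(4096, 4096)  # stand-in for one transformer projection

# Remove the 50% of weights with the smallest absolute value.
prune.l1_unstructured(layer, name="weight", amount=0.5)

# Make the pruning permanent (folds the mask into the weight tensor).
prune.remove(layer, "weight")

sparsity = (layer.weight == 0).float().mean().item()
print(f"Layer sparsity: {sparsity:.1%}")  # ~50.0%
```

Combined with quantization, sparsity like this is part of what makes shifting inference from GPUs to CPUs tractable, per the second bullet above.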

Broader implications: The transition to edge AI represents a significant shift in the AI landscape, with potential far-reaching effects on technology infrastructure and user experience.

  • As edge AI matures, it could lead to a more distributed AI ecosystem, reducing reliance on centralized cloud services and potentially democratizing access to AI capabilities.
  • The development of specialized LLMs for specific industries and business processes is likely to precede widespread personal device adoption, paving the way for more tailored and efficient AI solutions.
  • While edge AI promises numerous benefits, it also presents new challenges in terms of device management, software updates, and ensuring consistent performance across a wide range of hardware configurations (a capability-check sketch follows this list).
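
One practical pattern for the hardware-variability challenge above is to gate model choice on device capabilities at startup. The sketch below is purely illustrative: the variant names, memory figures, and the psutil-based check are assumptions, not any vendor's actual logic.

```python
# Hypothetical sketch of capability-gated model selection: an edge app picks
# the largest model variant that fits the device's available memory, falling
# back to a cloud endpoint when nothing fits. All names are illustrative.
import psutil

MODEL_VARIANTS = [
    # (name, approximate RAM needed in GiB)
    ("assistant-2b-q4", 2.0),
    ("assistant-1b-q4", 1.2),
    ("assistant-350m-q8", 0.6),
]

def pick_model() -> str | None:
    free_gib = psutil.virtual_memory().available / 2**30
    for name, needed in MODEL_VARIANTS:
        if free_gib >= needed * 1.5:  # headroom for KV cache and the OS
            return name
    return None  # nothing fits; caller falls back to a cloud API

if __name__ == "__main__":
    choice = pick_model()
    print(choice or "falling back to cloud inference")
```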