Retrofuturism in action: Engineer runs Meta’s Llama 2 AI on a 2005 PowerBook G4

Running generative AI typically requires modern, powerful hardware, but a software engineer has demonstrated that these models can run on far more modest systems. Andrew Rossignol recently managed to run Meta’s Llama 2 large language model on a PowerBook G4 from 2005 – a device containing only a 1.5GHz PowerPC G4 processor and 1GB of RAM. The achievement highlights the potential for AI to become more accessible across a wider range of computing devices, including those considered obsolete by today’s standards.

How he did it: Rossignol successfully ported the open-source llama2.c project to run on the two-decade-old laptop hardware.

  • He significantly improved performance by leveraging AltiVec, the PowerPC SIMD vector extension, to accelerate the model’s inference (a hedged sketch of this kind of kernel follows this list).
  • The full technical implementation details are available in Rossignol’s blog post, where he documents the entire process.
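For context, llama2.c spends almost all of its inference time in matrix-vector products, so an AltiVec speedup comes from vectorizing the inner dot products. Rossignol’s actual kernel is not reproduced here; the following is a minimal illustrative sketch (the function name dot_altivec and its structure are assumptions, not code from his port) showing how AltiVec’s 128-bit registers process four single-precision floats per fused multiply-add:

```c
/*
 * Minimal sketch of an AltiVec-accelerated dot product, the kind of hot loop
 * that dominates llama2.c's matmul during inference. Hypothetical helper,
 * not taken from Rossignol's port. Build with: gcc -O2 -maltivec
 */
#include <altivec.h>
#include <stddef.h>

float dot_altivec(const float *a, const float *b, size_t n) {
    /* Four partial sums accumulated in one 128-bit vector register. */
    vector float acc = (vector float){0.0f, 0.0f, 0.0f, 0.0f};
    size_t i = 0;

    /* Process four floats per iteration with a fused multiply-add.
       vec_ld assumes 16-byte-aligned inputs. */
    for (; i + 4 <= n; i += 4) {
        vector float va = vec_ld(0, (float *)(a + i));
        vector float vb = vec_ld(0, (float *)(b + i));
        acc = vec_madd(va, vb, acc);
    }

    /* Horizontal reduction of the four lanes, then a scalar tail loop. */
    float lanes[4] __attribute__((aligned(16)));
    vec_st(acc, 0, lanes);
    float sum = lanes[0] + lanes[1] + lanes[2] + lanes[3];
    for (; i < n; i++)
        sum += a[i] * b[i];
    return sum;
}
```

The payoff on a G4 comes from each vec_madd performing four single-precision multiply-accumulates per instruction; since vec_ld requires 16-byte-aligned operands, a real port would also need to align or otherwise handle the model’s weight buffers.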

The broader context: This PowerBook experiment joins other examples of AI models running on older consumer electronics.

  • Similar demonstrations have been achieved on discontinued gaming consoles like the PlayStation 3 and Xbox 360.
  • These projects challenge assumptions about the minimum hardware requirements needed to run generative AI systems.

Why this matters: The ability to run AI on older hardware could democratize access to these technologies beyond those with the latest equipment.

  • As optimization techniques improve, AI capabilities may become available to users with limited computing resources or in regions where the newest hardware is cost-prohibitive.
  • These experiments help push the boundaries of what’s possible with AI deployment on resource-constrained devices.
