In a GitHub post, Andrej Karpathy explains how he and a group of collaborators reproduced the full 1558M-parameter version of GPT-2 using llm.c, training it on a single 8XH100 node for 24 hours at a cost of $672. The run demonstrates the dramatic improvements in compute, software, and data that have made reproducing large language models far more feasible in the five years since GPT-2 was originally introduced.
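
The headline cost is simply the node's hourly rate times the wall-clock time; working backwards from the numbers above (simple arithmetic from the post's own figures, not a quoted price list):

    $672 / 24 hours = $28 per 8XH100 node-hour, i.e. roughly $3.50 per H100 per hour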

Key Takeaways:

  • The trained model performs comparably to the original GPT-2 on qualitative prompt completions, generating coherent and relevant continuations. On the HellaSwag eval, it matches the original GPT-2's performance around 25K steps into training.
  • llm.c enables efficient, minimalist training of large language models directly in C/CUDA, without relying on Python or complex deep learning libraries. The full codebase is only around 5,000 lines; a flavor of this style is sketched after this list.
  • Detailed instructions are provided for reproducing the GPT-2 training run, including downloading the training data, compiling the code, and launching training with the hyperparameters used for the run.

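To give a sense of what "training directly in C" means in practice, below is a minimal, self-contained sketch of an AdamW parameter update written against nothing but the C standard library. It is illustrative only and is not code from the llm.c repository; the function name, placeholder values, and hyperparameters are assumptions chosen for the example.

    /* Illustrative sketch (not from llm.c): one AdamW step over a flat
     * parameter array, the kind of routine a framework-free trainer has to
     * spell out by hand. Compile with: cc adamw_sketch.c -lm */
    #include <math.h>
    #include <stdio.h>

    void adamw_update(float *params, const float *grads, float *m, float *v,
                      size_t n, int step, float lr, float beta1, float beta2,
                      float eps, float weight_decay) {
        float bias1 = 1.0f - powf(beta1, (float)step);  /* bias corrections */
        float bias2 = 1.0f - powf(beta2, (float)step);
        for (size_t i = 0; i < n; i++) {
            m[i] = beta1 * m[i] + (1.0f - beta1) * grads[i];            /* 1st moment */
            v[i] = beta2 * v[i] + (1.0f - beta2) * grads[i] * grads[i]; /* 2nd moment */
            float m_hat = m[i] / bias1;
            float v_hat = v[i] / bias2;
            /* decoupled weight decay, then the Adam step */
            params[i] -= lr * (m_hat / (sqrtf(v_hat) + eps) + weight_decay * params[i]);
        }
    }

    int main(void) {
        float params[4] = {0.1f, -0.2f, 0.3f, -0.4f};
        float grads[4]  = {0.01f, 0.02f, -0.03f, 0.04f};
        float m[4] = {0}, v[4] = {0};
        adamw_update(params, grads, m, v, 4, /*step=*/1,
                     /*lr=*/3e-4f, /*beta1=*/0.9f, /*beta2=*/0.95f,
                     /*eps=*/1e-8f, /*weight_decay=*/0.1f);
        for (int i = 0; i < 4; i++) printf("param[%d] = %f\n", i, params[i]);
        return 0;
    }

In llm.c itself the analogous update runs on the GPU; the point of the sketch is only that nothing beyond the C standard library is conceptually required.
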
Caveats and Ongoing Work:

  • The model training is not yet fully stabilized, with some loss spikes and bad activation ranges cropping up later in training. More work is needed on initialization, activation ranges, and overall training stability; one common mitigation is sketched after this list.
  • The model has not yet been comprehensively evaluated on tasks like math, code, and multilingual data. The current evals focus mainly on English language coherence.
  • Key focus areas for future llm.c development include further tuning of training hyperparameters, improving stability and scalability, enabling lower-precision training (e.g., fp8), implementing fast inference, and extending support to more modern architectures.

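A common guard against the loss spikes noted above is to clip (or skip) an update whenever the global gradient norm jumps past a threshold. The sketch below shows the clipping variant in plain C; it is a generic illustration of the technique, not a statement of how llm.c currently handles spikes, and the threshold value is an arbitrary placeholder.

    /* Illustrative sketch (not from llm.c): global gradient-norm clipping.
     * Compile with: cc clip_sketch.c -lm */
    #include <math.h>
    #include <stdio.h>

    /* Scale all gradients down if their global L2 norm exceeds max_norm.
     * Returns the pre-clip norm so the caller can log it and watch for spikes. */
    float clip_global_grad_norm(float *grads, size_t n, float max_norm) {
        double sumsq = 0.0;
        for (size_t i = 0; i < n; i++) sumsq += (double)grads[i] * grads[i];
        float norm = (float)sqrt(sumsq);
        if (norm > max_norm) {
            float scale = max_norm / (norm + 1e-6f);  /* eps avoids divide-by-zero */
            for (size_t i = 0; i < n; i++) grads[i] *= scale;
        }
        return norm;
    }

    int main(void) {
        float grads[2] = {3.0f, 4.0f};  /* L2 norm = 5.0 */
        float norm = clip_global_grad_norm(grads, 2, 1.0f);
        printf("pre-clip norm %.2f, clipped to %.2f %.2f\n",
               norm, grads[0], grads[1]);
        return 0;
    }

A stricter variant skips the optimizer step entirely when the pre-clip norm sits far above its recent running average, trading a little throughput for fewer divergent updates.
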
Contributors and Compute:

  • In addition to Karpathy, substantial contributions to llm.c development have come from @ngc92, @ademeure, @gordicaleksa, and @rosslwheeler.
  • Lambda Labs sponsored the GPUs used for development. NVIDIA and Ubicloud provided GitHub Actions GPU runners for CI.

Wrapping Up:

The successful reproduction of GPT-2 in llm.c marks a milestone in the democratization of large language model development. With a clean C/CUDA codebase that enables efficient training even on modest GPU setups, llm.c is poised to make building LLMs accessible to a much wider audience. However, challenges remain in stabilizing training at scale and extending support to cover the full range of model architectures and domains. The llm.c core dev team is actively tackling these problems with the ultimate goal of enabling anyone to easily train state-of-the-art language models and conversational agents.

Source: Let's reproduce GPT-2 (1.6B): one 8XH100 node, 24 hours, $672, in llm.c (karpathy/llm.c, Discussion #677)
