
In a GitHub discussion post, Andrej Karpathy explains how he and a team of contributors successfully reproduced the full 1,558M-parameter version of GPT-2 using llm.c, training it on a single 8×H100 node for 24 hours at a cost of $672 (about $3.50 per GPU-hour). The run demonstrates how dramatic improvements in compute, software, and data have made reproducing large language models far more feasible in the five years since GPT-2 was originally introduced.

Key Takeaways:

  • The trained model performs qualitatively similarly to the original GPT-2 on prompts, generating coherent and relevant continuations. On the HellaSwag eval, it matches GPT-2 performance around 25K steps into training.
  • llm.c enables efficient, minimalist training of large language models directly in C/CUDA, without relying on Python or complex deep learning libraries. The full codebase is only around 5,000 lines.
  • Detailed instructions are provided for reproducing the GPT-2 training run, including downloading the training data, compiling the code, and launching training with the hyperparameters used (an abridged sketch follows this list).
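
Below is an abridged sketch of the reproduction workflow described in the discussion, not the authoritative recipe: the data-download step is summarized as a comment, and the flag values shown (data paths, batch sizes, learning rate, step count) are illustrative placeholders rather than the exact hyperparameters of the run, which are listed in the linked post.

```bash
# 1) Download the FineWeb-Edu training token shards using the downloader
#    scripts under dev/data/ in the repo (see the discussion for the exact
#    command and disk-space requirements).

# 2) Build the CUDA/cuDNN training binary (requires CUDA, cuDNN,
#    cudnn-frontend, and NCCL/OpenMPI for multi-GPU training):
make train_gpt2cu USE_CUDNN=1

# 3) Launch training across the node's 8 GPUs. Roughly: -i/-j point at the
#    train/val token shards, -o is the log/checkpoint directory, -e "d48"
#    selects the depth-48 (1558M) GPT-2 config, -b/-t are micro-batch size
#    and sequence length, -l is the learning rate, -x the total step count,
#    and -h 1 enables the periodic HellaSwag eval. Values are illustrative;
#    take the exact command from the discussion.
mpirun -np 8 ./train_gpt2cu \
    -i "dev/data/edu_fineweb100B/edu_fineweb_train_*.bin" \
    -j "dev/data/edu_fineweb100B/edu_fineweb_val_*.bin" \
    -o log_gpt2_1558M \
    -e "d48" \
    -b 16 -t 1024 \
    -l 0.0006 \
    -x 32000 \
    -h 1
```

The whole workflow amounts to a data-preparation step, one compile, and one flag-driven binary, which is what keeps the reproduction recipe this short.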

Caveats and Ongoing Work:

  • The model training is not yet fully stabilized, with some loss spikes and bad activation ranges cropping up later in training. More work is needed on initialization, activation ranges, and overall training stability.
  • The model has not yet been comprehensively evaluated on tasks like math, code, and multilingual data. The current evals focus mainly on English language coherence.
  • Key focus areas for future llm.c development include further tuning of training hyperparameters, improving stability and scalability, enabling lower-precision training (e.g., fp8), implementing fast inference, and extending support to more modern architectures.

Contributors and Compute:

  • In addition to the author, substantial contributions to llm.c development have come from @ngc92, @ademeure, @gordicaleksa, and @rosslwheeler.
  • Lambda Labs sponsored the GPUs used for development. NVIDIA and Ubicloud provided GitHub Actions GPU runners for CI.

Wrapping Up:

The successful reproduction of GPT-2 in llm.c marks a milestone in the democratization of large language model development. With a clean C/CUDA codebase that enables efficient training even on modest GPU setups, llm.c is poised to make building LLMs accessible to a much wider audience. However, challenges remain in stabilizing training at scale and extending support to cover the full range of model architectures and domains. The llm.c core dev team is actively tackling these problems with the ultimate goal of enabling anyone to easily train state-of-the-art language models and conversational agents.

Source: Let's reproduce GPT-2 (1.6B): one 8XH100 node, 24 hours, $672, in llm.c · karpathy/llm.c · Discussion #677
