Researcher Reproduces GPT-2 Using C/CUDA, Making LLM Training More Accessible

In a GitHub post, Andrej Karpathy describes how he and a group of contributors reproduced the full 1.6B-parameter (1558M) version of GPT-2 using llm.c, training it on a single 8XH100 node for 24 hours at a cost of $672, which works out to roughly $3.50 per GPU-hour over 192 GPU-hours. The run demonstrates how dramatic improvements in compute, software, and data have made reproducing large language models far more feasible in the five years since GPT-2 was originally released in 2019.

Key Takeaways:

  • The trained model performs qualitatively similarly to the original GPT-2 on prompts, generating coherent and relevant continuations. On the HellaSwag eval, it matches GPT-2 performance around 25K steps into training.
  • llm.c enables efficient, minimalist training of large language models directly in C/CUDA, without relying on Python or complex deep learning libraries. The full codebase is only around 5,000 lines (a toy sketch of this framework-free style appears after this list).
  • Detailed instructions are provided for reproducing the GPT-2 training run, including downloading the training data, compiling the code, and launching training with the hyperparameters used in the run.
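
To give a feel for what framework-free training looks like, the sketch below trains a toy linear model in plain C with hand-written gradients and an AdamW-style update. It is written for this article as an illustration only; it is not code from the llm.c repository, which implements the full GPT-2 forward and backward passes in C/CUDA, but the overall shape (plain arrays, an explicit loss, an explicit optimizer step) is the same.

```c
/*
 * Toy illustration of training in plain C, in the spirit of llm.c:
 * no Python, no framework -- just arrays, a hand-derived gradient,
 * and an AdamW-style update. NOT llm.c code; the "model" here is a
 * linear fit (y = w*x + b) standing in for the real GPT-2 network.
 * Build with: cc -O2 toy_train.c -lm
 */
#include <math.h>
#include <stdio.h>

#define N 64  /* number of training examples */

int main(void) {
    /* synthetic data: targets follow y = 3x + 2 */
    float xs[N], ys[N];
    for (int i = 0; i < N; i++) {
        xs[i] = (float)i / N;
        ys[i] = 3.0f * xs[i] + 2.0f;
    }

    /* parameters and AdamW moment estimates */
    float w = 0.0f, b = 0.0f;
    float m_w = 0.0f, v_w = 0.0f, m_b = 0.0f, v_b = 0.0f;
    const float lr = 0.1f, beta1 = 0.9f, beta2 = 0.999f;
    const float eps = 1e-8f, weight_decay = 0.01f;

    for (int step = 1; step <= 500; step++) {
        /* forward + backward: mean squared error and its gradients */
        float loss = 0.0f, gw = 0.0f, gb = 0.0f;
        for (int i = 0; i < N; i++) {
            float err = w * xs[i] + b - ys[i];
            loss += err * err / N;
            gw += 2.0f * err * xs[i] / N;
            gb += 2.0f * err / N;
        }

        /* AdamW: bias-corrected moments plus decoupled weight decay */
        m_w = beta1 * m_w + (1.0f - beta1) * gw;
        v_w = beta2 * v_w + (1.0f - beta2) * gw * gw;
        m_b = beta1 * m_b + (1.0f - beta1) * gb;
        v_b = beta2 * v_b + (1.0f - beta2) * gb * gb;
        float mhat_w = m_w / (1.0f - powf(beta1, (float)step));
        float vhat_w = v_w / (1.0f - powf(beta2, (float)step));
        float mhat_b = m_b / (1.0f - powf(beta1, (float)step));
        float vhat_b = v_b / (1.0f - powf(beta2, (float)step));
        w -= lr * (mhat_w / (sqrtf(vhat_w) + eps) + weight_decay * w);
        b -= lr * (mhat_b / (sqrtf(vhat_b) + eps) + weight_decay * b);

        if (step % 100 == 0)
            printf("step %3d | loss %.6f | w %.3f b %.3f\n", step, loss, w, b);
    }
    return 0;
}
```

The real training run replaces this toy with the GPT-2 architecture, mixed-precision CUDA kernels, and multi-GPU data loading, but keeps the same minimalist, framework-free structure.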

Caveats and Ongoing Work:

  • The model's training is not yet fully stabilized, with occasional loss spikes and bad activation ranges cropping up later in training. More work is needed on initialization, activation ranges, and overall training stability (one common mitigation, gradient clipping by global norm, is sketched after this list).
  • The model has not yet been comprehensively evaluated on tasks like math, code, and multilingual data. The current evals focus mainly on English language coherence.
  • Key focus areas for future llm.c development include further tuning the training hyperparameters, improving stability and scalability, enabling lower-precision training (e.g., fp8), implementing fast inference, and extending support to more modern architectures.
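
As a generic illustration of the kind of stability machinery such work involves, the sketch below clips gradients by their global L2 norm, a widely used guard against loss spikes in large-model training. It is written for this article and is not taken from llm.c; the exact stabilization techniques used there are still being worked out, as noted above.

```c
/*
 * Generic gradient clipping by global L2 norm -- a common guard
 * against loss spikes in large-model training. Illustrative only;
 * not code from the llm.c repository.
 */
#include <math.h>
#include <stdio.h>

/* Scale gradients in place so their global L2 norm is at most max_norm;
   returns the pre-clip norm, which is useful to log for spotting spikes. */
static float clip_grad_norm(float *grads, size_t n, float max_norm) {
    float sum_sq = 0.0f;
    for (size_t i = 0; i < n; i++) sum_sq += grads[i] * grads[i];
    float norm = sqrtf(sum_sq);
    if (norm > max_norm) {
        float scale = max_norm / (norm + 1e-6f);
        for (size_t i = 0; i < n; i++) grads[i] *= scale;
    }
    return norm;
}

int main(void) {
    float grads[4] = {3.0f, -4.0f, 0.0f, 0.0f};  /* global norm is 5 */
    float pre = clip_grad_norm(grads, 4, 1.0f);
    printf("pre-clip norm %.2f, grads[0] after clip %.2f\n", pre, grads[0]);
    return 0;
}
```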

Contributors and Compute:

  • In addition to Karpathy, substantial contributions to llm.c development have come from @ngc92, @ademeure, @gordicaleksa, and @rosslwheeler.
  • Lambda Labs sponsored the GPUs used for development. NVIDIA and Ubicloud provided GitHub Actions GPU runners for CI.

Wrapping Up:

The successful reproduction of GPT-2 in llm.c marks a milestone in the democratization of large language model development. With a clean C/CUDA codebase that enables efficient training even on modest GPU setups, llm.c is poised to make building LLMs accessible to a much wider audience. However, challenges remain in stabilizing training at scale and extending support to cover the full range of model architectures and domains. The llm.c core dev team is actively tackling these problems with the ultimate goal of enabling anyone to easily train state-of-the-art language models and conversational agents.

Source: "Let's reproduce GPT-2 (1.6B): one 8XH100 node, 24 hours, $672, in llm.c" (karpathy/llm.c, Discussion #677)
