
llama2.c

Auto-generated from karpathy/llama2.c by Mutable.ai Auto Wiki

GitHub Repository
Developer: karpathy
Written in: C
Stars: 14k
Watchers: 160
Created: 2023-07-23
Last updated: 2024-01-08
License: MIT
Repository: karpathy/llama2.c

Auto Wiki
Generated at: 2024-01-08
Generated from: Commit d98620
Version: 0.0.4

This repository implements a Transformer-based language model for natural language generation. The core model architecture and training logic are defined in Python, while efficient C implementations handle inference for deploying the models.

The key functionality includes:

  • Defining the Transformer model architecture in Python, including components like the multi-head self-attention blocks and position-wise feed-forward layers (model.py).

  • Implementing the training loop and optimization logic to pretrain language models (train.py), using techniques like mixed precision training, gradient accumulation, learning rate scheduling, and checkpointing (see the sketch after this list).

  • An efficient C implementation for inference (run.c) that runs on CPUs and embedded devices, using core data structures to cache activations; it provides the building blocks to run trained models.

  • Tools for tokenization using SentencePiece (tokenizer.py), to encode text into integers for input/output from models.

  • Sampling logic (run.c) to generate text conditioned on a prompt, using techniques like temperature scaling and nucleus (top-p) sampling.
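The training-side techniques listed above can be summarized with a minimal sketch of such a loop. This is a generic illustration of gradient accumulation, mixed precision, learning rate scheduling, and checkpointing, not train.py's actual code; the model, data loader, and hyperparameter names are placeholders.

```python
import math
import torch

# Illustrative placeholders: `model`, `get_batch`, and the hyperparameters below
# are not the names used in train.py; `model(x, y)` is assumed to return
# (logits, loss) for the purposes of this sketch.
def train_sketch(model, get_batch, steps=1000, accum_steps=4, base_lr=5e-4):
    optimizer = torch.optim.AdamW(model.parameters(), lr=base_lr)
    scaler = torch.cuda.amp.GradScaler()                # mixed-precision loss scaling
    for step in range(steps):
        # simple cosine learning-rate schedule
        lr = base_lr * 0.5 * (1.0 + math.cos(math.pi * step / steps))
        for group in optimizer.param_groups:
            group["lr"] = lr
        for _ in range(accum_steps):                    # gradient accumulation
            x, y = get_batch()
            with torch.autocast(device_type="cuda", dtype=torch.float16):
                _, loss = model(x, y)
            scaler.scale(loss / accum_steps).backward()
        scaler.step(optimizer)
        scaler.update()
        optimizer.zero_grad(set_to_none=True)
        if step % 100 == 0:
            torch.save(model.state_dict(), "ckpt.pt")   # periodic checkpointing
```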

The key design choice is to separate model definition and training in Python from the inference implementation in C. This allows rapid iteration on the model architecture while keeping deployment performant. The C inference code handles the math-heavy Transformer operations efficiently.

Details on specific model architectures, training configurations, and inference benchmarks can be found in the documentation (doc). The tokenization and sampling tools provide the necessary components for building text generation systems.

Model Architecture and Training

References: model.py, train.py

The model.py file defines the core Transformer model architecture for language modeling. It contains functionality for embedding the input tokens, passing them through each Transformer layer, and producing next-token predictions.

The file also contains utility functions for precomputing rotary positional embeddings (RoPE) and applying them to the queries and keys in the attention layers.
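As a rough sketch of the idea (the names, shapes, and channel-pairing convention below are illustrative and differ in detail from model.py's own helpers):

```python
import torch

def precompute_freqs(head_dim: int, seq_len: int, theta: float = 10000.0):
    # one rotation frequency per pair of channels, one angle per position
    freqs = 1.0 / (theta ** (torch.arange(0, head_dim, 2).float() / head_dim))
    t = torch.arange(seq_len).float()
    angles = torch.outer(t, freqs)                  # (seq_len, head_dim / 2)
    return torch.cos(angles), torch.sin(angles)

def apply_rotary(x, cos, sin):
    # x: (..., seq_len, head_dim); rotate each consecutive channel pair by its angle
    x1, x2 = x[..., 0::2], x[..., 1::2]
    rotated = torch.stack((x1 * cos - x2 * sin, x1 * sin + x2 * cos), dim=-1)
    return rotated.flatten(-2)
```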

Normalization is handled by an RMSNorm module defined in the file. Optimizer configuration and autoregressive text generation are exposed as methods on the top-level Transformer class.
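RMS normalization rescales each vector by the reciprocal of its root-mean-square and applies a learned gain; a minimal sketch (not model.py verbatim):

```python
import torch
import torch.nn as nn

class RMSNormSketch(nn.Module):
    """Illustrative RMS normalization: divide by the RMS, then scale by a learned gain."""
    def __init__(self, dim: int, eps: float = 1e-5):
        super().__init__()
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(dim))

    def forward(self, x):
        inv_rms = torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + self.eps)
        return x * inv_rms * self.weight
```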

The attention module projects the input to queries, keys, and values with linear layers, applies the rotary embeddings, and computes scaled dot-product attention across heads.

The feed-forward module is a position-wise SwiGLU network: it gates one linear projection by the SiLU activation of another, then projects back down to the model dimension.
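A minimal sketch of such a gated feed-forward block (the w1/w2/w3 naming follows the LLaMA convention but is illustrative here):

```python
import torch.nn as nn
import torch.nn.functional as F

class SwiGLUFeedForward(nn.Module):
    """Illustrative LLaMA-style gated feed-forward block (a sketch, not model.py verbatim)."""
    def __init__(self, dim: int, hidden_dim: int):
        super().__init__()
        self.w1 = nn.Linear(dim, hidden_dim, bias=False)   # gate projection
        self.w3 = nn.Linear(dim, hidden_dim, bias=False)   # value projection
        self.w2 = nn.Linear(hidden_dim, dim, bias=False)   # output projection

    def forward(self, x):
        return self.w2(F.silu(self.w1(x)) * self.w3(x))
```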

Each Transformer block wraps the attention and feed-forward sublayers with pre-normalization layers and residual connections.

The top-level Transformer module initializes the token embedding, normalization, and output layers. It runs the input through each block, applies the final normalization before the output projection, and handles loss computation and optimizer configuration.

Inference

References: run.c

The C implementation in run.c focuses on efficiently running trained Transformer models for natural language processing tasks. It provides the core data structures needed to take a pre-trained model checkpoint and generate predictions.

The file defines structures for representing the key components of inference: the model hyperparameters (Config), the weights (TransformerWeights), and the activation buffers reused at each step (RunState).
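For illustration, the hyperparameter header at the start of a legacy-format .bin checkpoint can be unpacked in a few lines of Python. The seven-int32 layout and field order below are assumptions about the export format, and the file name is a placeholder:

```python
import struct

def read_config(path="stories260K.bin"):
    """Read the 7-int32 hyperparameter header assumed to start a legacy llama2.c checkpoint."""
    with open(path, "rb") as f:
        dim, hidden_dim, n_layers, n_heads, n_kv_heads, vocab_size, seq_len = \
            struct.unpack("7i", f.read(7 * 4))
    return dict(dim=dim, hidden_dim=hidden_dim, n_layers=n_layers,
                n_heads=n_heads, n_kv_heads=n_kv_heads,
                vocab_size=vocab_size, seq_len=seq_len)
```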

The code focuses on exposing clean C interfaces while computing representations with optimized linear algebra. This allows other applications to easily leverage a pre-trained Llama model with high performance.

Pre-trained Models

References: doc/stories260K.md

This section documents pre-trained Llama models that users can leverage in their applications. The main model documented is 'stories260K', a 260K-parameter language model trained for text generation.

The 'stories260K' model was trained with a Python training script that specifies hyperparameters such as the batch size, sequence length, and learning rate. It uses a custom tokenizer with a 512-token vocabulary to preprocess the input text. Training took around 10 minutes on an Nvidia A100 GPU and reached a validation loss of 1.2968.

Sampling from the trained 'stories260K' model can be done either with the C program run.c or by sampling directly from the model checkpoint. Deterministic sampling at a temperature of 0.0 using run.c generates basic stories that are coherent but limited for a given prompt. Stochastic sampling at a temperature of 1.0 with a top-p of 0.9 produces more varied output that remains coherent.
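Conceptually, the temperature and top-p settings combine as in the following generic sketch (plain Python/NumPy for illustration, not run.c's implementation):

```python
import numpy as np

def sample_next(logits, temperature=1.0, top_p=0.9, rng=np.random.default_rng()):
    """Pick the next token id from raw logits using temperature and nucleus (top-p) sampling."""
    if temperature == 0.0:
        return int(np.argmax(logits))           # deterministic: always the most likely token
    scaled = logits / temperature
    probs = np.exp(scaled - np.max(scaled))     # softmax with temperature
    probs /= probs.sum()
    order = np.argsort(-probs)                  # most to least probable
    cutoff = np.searchsorted(np.cumsum(probs[order]), top_p) + 1
    keep = order[:cutoff]                       # smallest set of tokens covering top_p mass
    p = probs[keep] / probs[keep].sum()
    return int(rng.choice(keep, p=p))
```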

Tokenization

References: doc/train_llama_tokenizer.md

This section details how to train SentencePiece tokenizers for Llama models. The file …/train_llama_tokenizer.md discusses Meta's approach to training their tokenizer, showing the configuration used by printing the trainer's protobuf object.

The configuration defines important training hyperparameters such as the input file, model prefix, vocabulary size, and number of threads. Meta uses an identity normalizer, so the input text is not modified before training.
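A rough sketch of what such a training call looks like with the sentencepiece Python bindings; the input file, vocabulary size, and thread count below are placeholders rather than Meta's exact values:

```python
import sentencepiece as spm

# Illustrative training call; file names and sizes are placeholders.
spm.SentencePieceTrainer.train(
    input="corpus.txt",                  # newline-delimited sentences
    model_prefix="llama_tok",            # writes llama_tok.model / llama_tok.vocab
    model_type="bpe",
    vocab_size=32000,
    num_threads=16,
    normalization_rule_name="identity",  # leave the input text unmodified
)
```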

One limitation is that SentencePiece expects newline-delimited sentences rather than a single block of text, which affects how training data must be preprocessed.