
fastai

Auto-generated from fastai/fastai by Mutable.ai Auto Wiki

GitHub Repository
Developer: fastai
Written in: Jupyter Notebook
Stars: 25k
Watchers: 620
Created: 2017-09-09
Last updated: 2024-01-06
License: Apache License 2.0
Homepage: docs.fast.ai
Repository: fastai/fastai

Auto Wiki
Generated at: 2024-01-06
Generated from: Commit 7723c4
Version: 0.0.4

The fastai repository provides a high-level deep learning library built on top of PyTorch. It aims to make deep learning more accessible and productive through its design, abstractions, and features.

Some of the key aspects of fastai include:

  • Provides domain-specific libraries for computer vision, natural language processing, tabular data, and collaborative filtering that handle common tasks like loading data and defining models (see Computer Vision, Natural Language Processing, and Tabular Data)

  • Implements a flexible callback system that allows injecting arbitrary code during model training, enabling features like learning rate scheduling, mixed precision, and regularization (see Callbacks)

  • Contains a training loop abstraction that handles optimization, losses, metrics, and other training mechanics in a consistent way (see Model Training)

  • Provides utilities for loading, splitting, labeling, encoding, normalizing, and transforming various types of data (see Data Loading and Preprocessing)

  • Implements distributed training functionality to train models across multiple GPUs/machines (see Distributed Training)

  • Supports mixed precision training using float16 to accelerate training on GPUs (see Mixed Precision)

  • Contains tools for model interpretation, analysis, and debugging, like visualization and identifying top losses (see Interpretability)

The key design choice is composing domain-specific libraries on top of a flexible core, providing high-level abstractions while still allowing detailed customization via the callback system. The libraries build on PyTorch and leverage its capabilities.

Computer Vision

References: fastai/vision, dev_nbs/course, nbs/examples

The core functionality provided in the fastai library for computer vision allows training common CNN models on image data with a simplified and optimized training process. The …/vision package handles this end-to-end, from loading and preprocessing image datasets, through defining models, to running a high-level training loop.

At the lowest level, image data is prepared for training by loading files from various sources and applying common transformations using classes like ImageBlock in …/data.py.

Common CNN architectures such as XResNet are implemented in models/xresnet.py. These classes compose layers to define the overall model structure.

Training is further optimized using techniques like mixed precision, while callbacks apply the augmentations defined in augment.py to images during training.

Example notebooks in …/examples demonstrate end-to-end usage of this functionality for tasks like classification and segmentation.
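
For example, the canonical quickstart (adapted from the fastai documentation; the dataset, labelling function, and hyperparameters are illustrative) trains a pet classifier in a few lines:

```python
from fastai.vision.all import *

# download the Oxford-IIIT Pets dataset and build dataloaders from filenames
path = untar_data(URLs.PETS)/'images'

def is_cat(x): return x[0].isupper()  # cat breeds have capitalized filenames

dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2, seed=42,
    label_func=is_cat, item_tfms=Resize(224))

# fine-tune a pretrained ResNet for one epoch
learn = vision_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(1)
```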

Data Loading

References: fastai/vision/data.py, dev_nbs/course

The main classes for loading and preprocessing image data are defined in …/data.py. The ImageDataLoaders class provides factory methods for loading image data into PyTorch DataLoaders from various sources like folders, lists, and DataFrames. It handles details like preprocessing, normalization, and splitting data into training, validation, and test sets.

Type-specific subclasses represent different kinds of vision data, such as images and masks, and handle converting data to tensors and applying normalization.

Images can also be artificially degraded using functionality in …/crappify.py (used in the super-resolution course notebooks). This loads images, resizes them using PIL while preserving the minimum dimension, overlays random text at a random position and brightness, and saves the processed image to disk with a random JPEG quality setting.
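
A minimal sketch of the factory-method API, assuming an ImageNet-style folder layout (`path/train/<class>/…` and `path/valid/<class>/…`; the path itself is illustrative):

```python
from fastai.vision.all import *

dls = ImageDataLoaders.from_folder(
    path, train='train', valid='valid',
    item_tfms=Resize(224),                              # per-item CPU transform
    batch_tfms=Normalize.from_stats(*imagenet_stats))   # per-batch GPU transform
dls.show_batch(max_n=9)
```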

Models

References: fastai/vision/models, fastai/vision/models/__init__.py, fastai/vision/models/all.py

The …/models directory contains implementations of common computer vision models through well-defined classes.

The …/unet.py file defines the DynamicUnet class and related functions for building U-Net models on top of an arbitrary encoder, using convolution and normalization layers as building blocks.

…/all.py provides a single entry point for these models by importing functionality from other files. …/__init__.py further exposes this functionality under one namespace without specifying submodules.
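
As a sketch (the class count is illustrative), an XResNet can be instantiated directly, while unet_learner builds a DynamicUnet around a pretrained encoder:

```python
from fastai.vision.all import *
from fastai.vision.models.xresnet import xresnet50

model = xresnet50(n_out=10)   # XResNet-50 with a 10-class head

# for segmentation, unet_learner wraps an encoder into a DynamicUnet:
# learn = unet_learner(dls, resnet34)
```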

Data Augmentation

References: fastai/vision/augment.py

The …/augment.py file contains implementations of common image augmentation techniques that can be randomly applied during training. The RandTransform base class provides the core functionality for randomly applying a transform according to a probability p, and the concrete transforms inherit from it.

Each transform is applied randomly according to its probability. Multiple transforms are combined efficiently; for example, affine transforms are composed into a single matrix operation so that a batch of images is only interpolated once.

Lighting transforms adjust properties like brightness and contrast, operating in a transformed color space so that repeated adjustments stay well-behaved.

Cropping and padding logic handles these operations for images and other data types such as masks and points. It supports several padding modes and has options to control how crops are located.
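
A typical usage sketch: aug_transforms() returns a standard set of these RandTransforms that run on whole batches (the arguments shown are illustrative, and `path` is assumed to point at an image folder):

```python
from fastai.vision.all import *

tfms = aug_transforms(max_rotate=10., max_zoom=1.1, max_lighting=0.2)
dls = ImageDataLoaders.from_folder(path, item_tfms=Resize(224), batch_tfms=tfms)
```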

Natural Language Processing

References: fastai/text, dev_nbs

The …/text directory provides utilities for common natural language processing tasks like text classification and language modeling. It contains functionality for preprocessing text data, creating data loaders and defining neural network models for NLP problems.

The …/core.py module implements text preprocessing utilities such as the Tokenizer class and the default tokenization rules.

The …/data.py module defines TextDataLoaders and related classes for creating data loaders for language modeling and text classification.

Neural network models for NLP are defined in …/models. Functions in …/core.py, such as get_language_model and get_text_classifier, contain the model construction functionality.

…/learner.py handles training models on text data.

…/all.py re-exports this functionality under a single namespace.
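
For example, the IMDb sentiment quickstart from the fastai documentation (the epoch count and learning rate are illustrative):

```python
from fastai.text.all import *

path = untar_data(URLs.IMDB)
dls = TextDataLoaders.from_folder(path, valid='test')
learn = text_classifier_learner(dls, AWD_LSTM, drop_mult=0.5, metrics=accuracy)
learn.fine_tune(1, 1e-2)
```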

Models

References: fastai/text/models/awdlstm.py, fastai/text/models/core.py

The …/models directory contains implementations of common neural networks for natural language processing tasks. The core implementation is defined in …/awdlstm.py.

…/awdlstm.py contains the AWD_LSTM class, which defines the core AWD-LSTM architecture. It contains an embedding layer to process input tokens, followed by multiple LSTM layers to learn contextual representations of sequences.

The file also contains default hyperparameters and layer sizes (awd_lstm_lm_config and awd_lstm_clas_config) for the language modeling and text classification variants of AWD-LSTM.

The …/core.py file contains classes and functions for constructing complete NLP models from encoder modules.
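
A sketch of building a language model from these pieces (the vocabulary size is illustrative):

```python
from fastai.text.all import *

config = awd_lstm_lm_config.copy()
config.update(n_hid=1152, n_layers=3)   # tweak the default hyperparameters
model = get_language_model(AWD_LSTM, 10_000, config=config)
```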

Training

References: fastai/text/learner.py

The training loop calculates losses using the model's predictions on batches of inputs. It applies the specified optimizer to minimize these losses over epochs. Callbacks can be added to customize training. For example, one callback implements a one-cycle learning rate schedule.
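
A sketch, assuming `dls_lm` is a language-model DataLoaders built as in the fastai docs:

```python
from fastai.text.all import *

learn = language_model_learner(dls_lm, AWD_LSTM, drop_mult=0.3)
learn.fit_one_cycle(1, 2e-2)   # one-cycle LR schedule applied via a callback
```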

Tabular Data

References: fastai/tabular, nbs/examples

The fastai library provides a set of tools for building, training, and evaluating machine learning models on structured tabular data. The core functionality is centered around preprocessing tabular data stored in Pandas DataFrames, defining common tabular model architectures, and abstracting the training loop into a learner class tailored for tabular tasks.

The …/core.py file contains utilities for preprocessing tabular data, centered on the TabularPandas wrapper and TabularProc transforms such as Categorify, FillMissing, and Normalize.

Several preprocessing utilities are implemented, including DataFrame wrappers and df_shrink for reducing memory usage. The …/data.py file handles loading data from sources and creating data loaders after preprocessing. It can create test loaders from additional data and apply the same preprocessing.

The common tabular architecture TabularModel, which combines embeddings for categorical variables with fully connected layers for continuous ones, is defined in …/model.py.

The …/learner.py file defines tabular_learner, which combines a tabular model, dataset, and callbacks into a single object to fit models on tabular data.
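
For example, on the Adult Census sample that ships with fastai (the column choices and hyperparameters are illustrative):

```python
from fastai.tabular.all import *

path = untar_data(URLs.ADULT_SAMPLE)
dls = TabularDataLoaders.from_csv(
    path/'adult.csv', path=path, y_names='salary',
    cat_names=['workclass', 'education', 'marital-status'],
    cont_names=['age', 'fnlwgt'],
    procs=[Categorify, FillMissing, Normalize])
learn = tabular_learner(dls, metrics=accuracy)
learn.fit_one_cycle(3)
```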

Training Loop

References: fastai/tabular/learner.py, fastai/callback/all.py

The …/learner.py file handles training tabular models. It constructs data loaders from input data and passes batches to the model during optimization.

Training runs optimization for a number of epochs, iterating through the data loader and calculating losses at each step.
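
Continuing the sketch above (`dls` as built in the previous example; layer sizes illustrative):

```python
from fastai.tabular.all import *

learn = tabular_learner(dls, layers=[200, 100], metrics=accuracy)
learn.fit_one_cycle(3, 1e-2)
```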

Data Loading and Preprocessing

References: fastai/data, fastai/text, fastai/tabular, fastai/vision

The fastai library provides extensive functionality for loading, preprocessing, and transforming various types of data for deep learning tasks. This functionality is implemented across several key modules and files in the library.

The …/data module contains core data functionality. The …/load.py file implements loading data from sources into PyTorch datasets and dataloaders. It supports batching, shuffling, and distributing work. The …/external.py file contains utilities for downloading external datasets from URLs in a consistent manner.

The …/transforms.py file implements preprocessing tasks like loading files from disk or dataframes, splitting datasets, labeling samples, mapping categorical variables, preparing regression targets, converting data types, and normalizing batches of images.

The …/block.py file contains classes and functions for building reusable data pipelines by combining preprocessing transforms. Subclasses provide defaults for specific data types.

For vision data, the …/data.py file handles loading image data: subclasses handle converting specific types to tensors, and helper functions preprocess whole batches.

Data Loading

References: fastai/data/load.py, fastai/data/external.py

The core functionality for loading data from various sources into PyTorch datasets and dataloaders is handled by code in the …/load.py file. This file implements fastai's DataLoader, a flexible reimplementation of the PyTorch data loader that works over arbitrary collections.

For small datasets that don't require true batching or shuffling, data can be loaded directly rather than going through the full PyTorch dataloader machinery.

Functions such as fa_collate and fa_convert assemble samples into batches and convert types when loading data. Errors are caught and raised clearly to help with debugging data loading issues.
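
A minimal sketch of the DataLoader working over a plain Python collection:

```python
from fastai.data.load import DataLoader

dl = DataLoader(list(range(10)), bs=4, shuffle=True)
for b in dl:
    print(b)   # tensors of up to 4 items each
```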

External datasets can be loaded through utilities in the …/external.py file. This file defines the URLs registry of dataset URLs, functions for retrieving configuration settings, and untar_data for downloading and extracting files.

Data Splits

References: fastai/data/load.py

The …/load.py file contains functionality for splitting data into training and validation sets when loading data. It supports passing a validation split ratio.

When a dataset is created, it can specify a validation split internally by dividing the data indices into train and validation subsets. The indices for each subset are stored in the dataset object. When iterating over the dataset, it subsets the appropriate indices based on whether it is in the train or validation phase. This allows easily loading different subsets during training and evaluation.
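
In practice, the splitter helpers defined in …/transforms.py are the usual way to produce these index subsets; a minimal sketch:

```python
from fastai.data.all import *

items = list(range(100))
splits = RandomSplitter(valid_pct=0.2, seed=42)(items)  # (train_idxs, valid_idxs)
dsets = Datasets(items, tfms=[[noop]], splits=splits)
print(len(dsets.train), len(dsets.valid))  # 80 20
```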

Labeling

References: fastai/data/load.py

The …/load.py file contains functionality for labeling and encoding targets as part of the data loading process. Errors during the labeling process are caught and informative errors are raised to help with debugging. The labeling functionality provides a consistent interface that works across different types of data and tasks.
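
As a sketch, the labelling helpers that fastai applies while loading actually live in …/transforms.py:

```python
from pathlib import Path
from fastai.data.transforms import parent_label, RegexLabeller

print(parent_label(Path('data/train/cat/001.jpg')))   # -> cat
labeller = RegexLabeller(r'^(.+)_\d+\.jpg$')
print(labeller(Path('Abyssinian_1.jpg').name))        # -> Abyssinian
```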

Transforms

References: fastai/data/transforms.py, fastai/vision/augment.py

The …/transforms.py file contains utilities for loading, splitting, and transforming datasets.

The …/augment.py file contains implementations of common image augmentation techniques.
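
For instance, the Categorize transform from …/transforms.py maps labels to indices; a minimal sketch:

```python
from fastai.data.transforms import Categorize

cat = Categorize()
cat.setup(['dog', 'cat', 'dog', 'bird'])  # builds the sorted vocab
idx = cat('cat')                          # encode -> TensorCategory(1)
print(idx, cat.decode(idx))               # decode back to 'cat'
```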

Pipelines

References: fastai/data/block.py

The file …/block.py contains classes and functions for building data pipelines from a data source.

Classes provide defaults tailored for specific data types.

Transforms can be combined while avoiding duplicates.

The pipeline processes each sample by applying the full set of transforms. This provides a reusable way to preprocess data into PyTorch datasets.
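
The flagship class here is DataBlock; a sketch for image classification (assuming `path` points at a folder of class subfolders):

```python
from fastai.vision.all import *

pets = DataBlock(
    blocks=(ImageBlock, CategoryBlock),              # input / target types
    get_items=get_image_files,                       # how to collect samples
    splitter=RandomSplitter(valid_pct=0.2, seed=42),
    get_y=parent_label,                              # label from folder name
    item_tfms=Resize(224))
dls = pets.dataloaders(path)
```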

Downloads

References: fastai/data/external.py, fastai/data/download_checks.py

The main functionality for downloading external datasets is handled in …/external.py. This file contains a URLs class that centralizes dataset URLs. It also contains a function for retrieving configuration settings and constructing download paths.

The key component for downloading files is the untar_data function, which handles downloading files from URLs and extracting compressed archives. Under the hood it delegates the actual downloading, and archives are cached on disk so repeated calls are fast.

…/download_checks.py stores expected sizes and checksums used to verify downloads.
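
Typical usage:

```python
from fastai.data.external import untar_data, URLs

path = untar_data(URLs.PETS)  # downloads (if not cached), verifies, extracts
print(path.ls())
```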

Model Training

References: fastai, nbs/examples

The …/learner.py file provides high-level utilities for training PyTorch models. The core class is Learner, which combines a model, data loaders (…/load.py), loss function, and callbacks into a single object. Its main methods orchestrate the overall training loop by calling callbacks at appropriate points.

The Learner handles running the full training loop that invokes callback methods. It fits models using the fit() method; variants like fit_one_cycle() add a one-cycle learning rate schedule, and context managers such as distrib_ctx() distribute training across multiple GPUs/machines. The Metric base class defines the interface for metrics computed during training. The AvgMetric and AccumMetric classes inherit from it: AvgMetric averages a metric over batches, weighting by batch size, while AccumMetric accumulates predictions and targets and computes the metric once at the end.

The _BaseOptimizer class defines common functionality shared by Optimizer and OptimWrapper. The Optimizer class extends _BaseOptimizer and serves as the base class for implementing custom optimizers. Callback functions define optimization steps. Common optimizers are implemented by composing callbacks in Optimizer. The OptimWrapper class interfaces fastai training with PyTorch optimizers.

The Learner's fit() method orchestrates the overall training loop. It handles optimization, losses, metrics, and other training mechanics by coordinating the callbacks and the model. The Optimizer and callback classes implement various optimization algorithms and training enhancements. This provides practitioners with a high-level yet customizable interface for efficient model training.
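
A sketch of constructing a Learner by hand (assuming `dls` was built as in earlier sections):

```python
from fastai.vision.all import *

learn = Learner(dls, xresnet18(n_out=dls.c),     # dls.c = number of classes
                loss_func=CrossEntropyLossFlat(),
                opt_func=Adam,
                metrics=accuracy)
learn.fit(3, lr=1e-3)          # plain training loop
learn.fit_one_cycle(3, 1e-3)   # with the one-cycle schedule
```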

Training Loop

References: fastai/learner.py

The training loop handles the overall flow of training. It contains the model, data loaders, loss function, optimizer, and callbacks. During training, it orchestrates the process by calling callbacks at each step.

Some aspects:

  • Metrics monitor performance during training.

  • Callbacks accumulate metrics over batches.

  • Callbacks provide hooks for preprocessing before training.

  • At each step, predictions, loss, gradients, and weights are computed.

  • After each batch, metrics and losses are calculated and accumulated.

  • Validation is run at the end of each epoch to calculate final metrics/losses.

  • Callbacks run at different points to inject additional logic.
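
A self-contained schematic of the loop the Learner runs (not fastai's actual source; the comments name the callback events fired at each point):

```python
import torch
from torch import nn

model = nn.Linear(10, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_func = nn.CrossEntropyLoss()
xb, yb = torch.randn(64, 10), torch.randint(0, 2, (64,))

for epoch in range(2):            # 'before_epoch'
    pred = model(xb)              # 'after_pred'
    loss = loss_func(pred, yb)    # 'after_loss'
    loss.backward()               # 'before_step'
    opt.step()                    # 'after_step'
    opt.zero_grad()               # 'after_batch': metrics/losses accumulate
    # a validation pass here computes final metrics   # 'after_epoch'
```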

Callbacks

References: fastai/callback, fastai/callback/core.py

Callbacks allow injecting custom logic into the training loop at different points. Key callbacks customize training by running code at the start and end of epochs.

The …/core.py file defines several important callback classes, including the base Callback class, which provides the core callback API.

Callbacks in other files allow tasks like accumulating gradients over batches, gradient clipping, freezing batch norm stats, and early stopping.

Callbacks in …/schedule.py implement learning rate and hyperparameter scheduling.

Callbacks in …/tracker.py extend the TrackerCallback base class to track metrics over epochs. This allows automatically adjusting hyperparameters or saving the best model based on the monitored metric.
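
A minimal sketch of a custom callback (the class name is hypothetical):

```python
from fastai.callback.core import Callback

class LossPrinter(Callback):
    "Print the training loss after every batch."
    def after_batch(self):
        if self.training: print(f'batch loss: {self.loss.item():.4f}')

# attach it when fitting: learn.fit(1, cbs=LossPrinter())
```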

Optimizers

References: fastai/optimizer.py

The …/optimizer.py file provides implementations of common optimizers for updating model weights during training. Optimizers are implemented by composing callback functions that define the optimization steps.

Common optimizers are implemented by composing the necessary callback functions in an Optimizer object. The callbacks implement the specific update logic, while the Optimizer class handles common logic like parameter grouping.

OptimWrapper interfaces fastai training with PyTorch optimizers. It takes a PyTorch optimizer and exposes it through the fastai API. This allows using PyTorch optimizers seamlessly with fastai training loops.

The _BaseOptimizer class defines common functionality shared between the Optimizer and OptimWrapper classes. Optimizer extends _BaseOptimizer and serves as the base class for implementing custom optimizers. It separates the step logic from the optimizer class.

The Lookahead class implements lookahead optimization, which has been shown to improve model training. It works by making an optimization step on a "fast" set of weights and then updating the "slow" set of weights (which are exposed to the model) based on the "fast" weights.
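
A sketch of composing an optimizer from step callbacks:

```python
import torch
from fastai.optimizer import Optimizer, sgd_step, weight_decay

params = [torch.randn(4, requires_grad=True)]
# SGD with decoupled weight decay, composed from two callbacks
opt = Optimizer(params, [weight_decay, sgd_step], lr=0.01, wd=0.01)

loss = (params[0] ** 2).sum()
loss.backward()
opt.step()        # runs each callback over every parameter
opt.zero_grad()
```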

Learning Rate Schedulers

References: fastai/callback/schedule.py

The …/schedule.py file implements various learning rate schedules for fast and effective model training. Learning rate schedules allow dynamically adjusting the learning rate during training to improve optimization.

Some key learning rate schedules implemented include:

  • One-cycle policy: Rapidly increases the LR to a maximum and then decreases it over the course of training, forming a single cycle. This has been shown to train models much faster than static learning rates.

  • Cosine annealing: Gradually reduces the LR over training by following a cosine curve from its starting value down to a minimum.

  • Polynomial decay: Reduces the LR according to a polynomial decay function, decreasing it as a power function of the training step.

These schedules are implemented via callbacks that modify the learning rate during different phases of training.
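
A sketch using the building blocks in schedule.py (the values are illustrative):

```python
from fastai.callback.schedule import SchedCos, combine_scheds, ParamScheduler

# warm up for the first 25% of training, then anneal down: a one-cycle shape
sched = {'lr': combine_scheds([0.25, 0.75],
                              [SchedCos(1e-4, 1e-2), SchedCos(1e-2, 1e-5)])}
# learn.fit(5, cbs=ParamScheduler(sched))
# or simply: learn.fit_one_cycle(5, lr_max=1e-2)
```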

Losses

References: fastai/losses.py

The …/losses.py file provides commonly used loss functions for training deep learning models on different types of tasks.

Some key classes implemented in this file are:

  • FocalLossFlat: Applies a focusing term to cross-entropy loss for classification.
  • LabelSmoothingCrossEntropyFlat: Smooths one-hot labels for regularization.
  • DiceLoss: Computes the Dice coefficient as a loss for segmentation tasks.
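
A sketch on random segmentation-shaped tensors:

```python
import torch
from fastai.losses import CrossEntropyLossFlat, FocalLossFlat, DiceLoss

preds = torch.randn(8, 3, 16, 16)            # 3-class segmentation logits
targs = torch.randint(0, 3, (8, 16, 16))
print(CrossEntropyLossFlat(axis=1)(preds, targs))  # flattens before nn.CrossEntropyLoss
print(FocalLossFlat(gamma=2, axis=1)(preds, targs))
print(DiceLoss(axis=1)(preds, targs))              # one-hot encodes targets internally
```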

Metrics

References: fastai/metrics.py

The …/metrics.py file provides implementations of many common machine learning metrics for monitoring model training. It contains both individual metric functions and subclasses that accumulate metrics over batches.

The AccumMetric class allows accumulating predictions and targets over batches, then calculates the final metric value at the end. This is more efficient than calculating the metric on each batch. It supports preprocessing like activation functions and argmax.

Common classification metrics directly calculate the metric on each batch. But subclasses accumulate predictions and targets over batches for better performance on larger datasets. This includes metrics for multi-label classification and segmentation.

Some subclasses accumulate values for:

  • Classification accuracy
  • The F-beta score
  • Mean squared error in regression

AccumMetric takes care of accumulating predictions, targets, and intermediate values over batches; when the final metric value is requested, it performs the final calculation. This lets metrics be computed correctly even when batch sizes vary.

Metrics can expose tunable parameters like the decision threshold. They also support activation functions and argmax for converting predictions to classes.
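
Usage sketch:

```python
from fastai.metrics import accuracy, F1Score

# plain functions are computed per batch; AccumMetric subclasses like
# F1Score accumulate predictions across batches before scoring
metrics = [accuracy, F1Score(average='macro')]
# learn = vision_learner(dls, resnet18, metrics=metrics)
```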

Utilities

References: fastai/torch_core.py

This section covers helper functions provided in …/torch_core.py that simplify training PyTorch models. Key functionality includes:

  • Functions such as init_default and apply_init help initialize model parameters in a consistent way.

  • Utilities integrate distributed training functionality.

  • Functions in …/torch_core.py help load and save tensors to disk.

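A sketch of these helpers:

```python
from torch import nn
from fastai.torch_core import apply_init, default_device

net = nn.Sequential(nn.Linear(10, 50), nn.ReLU(), nn.Linear(50, 2))
apply_init(net, nn.init.kaiming_normal_)   # consistent weight initialization
net = net.to(default_device())             # GPU when available, else CPU
```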

Distributed Training

References: fastai/distributed.py

The …/distributed.py file provides functionality for distributing model training across multiple GPUs or machines. It handles wrapping models and data for parallel computation.

Processes are initialized on each device, gradients are synchronized across devices during the backward pass, and losses/metrics are gathered back to update and evaluate the model.
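
A minimal sketch, assuming a Learner `learn` built as usual and the script started with a distributed launcher (for example `accelerate launch` or `python -m fastai.launch`):

```python
from fastai.distributed import *

with learn.distrib_ctx():      # wraps the model for distributed data parallel
    learn.fit_one_cycle(3, 1e-3)
```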

Mixed Precision

References: fastai

Mixed precision training with float16 can accelerate training on GPUs by performing operations with lower precision numbers while still tracking the model parameters in float32 for better accuracy. This allows utilizing the GPU's tensor cores which provide a significant speedup for float16 operations.

The …/fp16_utils.py file contains utilities for working with half precision (FP16) in PyTorch models. It provides functionality to convert a tensor to FP16 format. It also provides functionality to convert a model to FP16 in a batchnorm-safe way. This ensures batchnorm layers continue tracking the mean and variance in float32 to avoid accuracy degradation, while all other layers use FP16 operations.

The …/fp16_utils.py file also contains functionality for synchronizing the FP16 model weights with the FP32 master weights stored in the optimizer: prep_param_lists builds the FP32 master copy of the parameters, model_grads_to_master_grads moves gradients from the FP16 model to the master copy after each backward pass, and master_params_to_model_params copies the updated master weights back into the FP16 model.

The …/fp16_utils.py file also implements checks for overflow in the FP16 gradients, so that the loss scale can be adjusted when an overflow occurs.
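
A sketch of the batchnorm-safe conversion:

```python
import torch
from torch import nn
from fastai.fp16_utils import convert_network

model = nn.Sequential(nn.Linear(10, 10), nn.BatchNorm1d(10))
model = convert_network(model, torch.float16)  # batchnorm layers stay in float32
```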

Callbacks

References: fastai/callback, nbs/examples

The core functionality of callbacks in fastai is to customize model training by injecting logic at different points in the training loop. Callbacks allow injecting code before, after, or during batches, epochs, and entire training runs. This provides a flexible way to implement techniques like learning rate scheduling, regularization, mixed precision training, and distributed training without modifying the core training loop code.

The …/core.py file defines the base Callback class; subclasses override event methods such as before_fit, after_batch, and after_epoch to inject behavior at those points.

The …/schedule.py file contains classes and functions for implementing various learning rate schedules during training.

The …/tracker.py file contains callbacks that track metrics over multiple batches/epochs. It contains callbacks that save the best model based on a tracked metric, reduce the learning rate if no improvement occurs, and terminate training if the loss becomes invalid.
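
For example, the tracker callbacks can be attached directly to a fit call:

```python
from fastai.callback.tracker import SaveModelCallback, EarlyStoppingCallback

cbs = [SaveModelCallback(monitor='valid_loss'),              # keep the best model
       EarlyStoppingCallback(monitor='valid_loss', patience=2)]
# learn.fit_one_cycle(20, cbs=cbs)
```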

Data Augmentation Callbacks

References: fastai/callback/mixup.py

The …/mixup.py file implements callbacks for data augmentation techniques during model training.

The MixUp callback samples mixing coefficients (lambda values) from a Beta distribution and blends pairs of examples and their targets accordingly, which helps neural networks generalize.
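
Usage sketch:

```python
from fastai.callback.mixup import MixUp, CutMix

mixup = MixUp(0.4)    # lambda ~ Beta(0.4, 0.4)
cutmix = CutMix(1.)   # pastes a patch from another image instead of blending
# attach when building the learner: vision_learner(dls, resnet18, cbs=mixup)
```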

Optimization Callbacks

References: fastai/callback/schedule.py, fastai/callback/tracker.py

The …/schedule.py file implements callbacks that customize optimization during training by adjusting hyperparameters like the learning rate.

It contains helpers that build scheduling functions interpolating between two values, and a ParamScheduler callback that applies them to hyperparameters over the course of training.

Functions like fit_one_cycle() and fit_sgdr() combine these scheduling functions with ParamScheduler to fit the Learner directly with common schedules.

The …/tracker.py file contains TrackerCallback, a base class for callbacks that monitor a metric over epochs. It stores the best metric value seen and compares against it after each epoch.

ReduceLROnPlateau reduces the learning rate if the monitored metric does not improve for a number of epochs. It divides the learning rate by a factor but does not reduce it below a minimum value.
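
Usage sketch:

```python
from fastai.callback.tracker import ReduceLROnPlateau

# divide the LR by 10 after 2 epochs without improvement, floor at 1e-6
cb = ReduceLROnPlateau(monitor='valid_loss', patience=2, factor=10., min_lr=1e-6)
# learn.fit(20, cbs=cb)
```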

Regularization Callbacks

References: fastai

Weight decay, the main form of regularization built into fastai training, is implemented as optimizer callbacks in …/optimizer.py: l2_reg adds wd * weight to the gradients (classic L2 regularization), while weight_decay decays the weights directly (decoupled weight decay). Both discourage reliance on a few large weights and so help prevent overfitting.

Configuring regularization only requires setting the wd hyperparameter. A higher value results in stronger regularization pressure.

Because weight decay hooks into the optimizer step, it integrates seamlessly with the existing training loop in fastai. This makes regularizing models during training very simple.
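
Usage sketch (assuming `dls` built as in earlier sections):

```python
from fastai.vision.all import *

learn = vision_learner(dls, resnet18, wd=0.1)  # set on the Learner...
learn.fit_one_cycle(3, wd=0.1)                 # ...or per fit call
```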

Logging Callbacks

References: fastai/callback/progress.py

The …/progress.py file contains callbacks that handle logging training progress and metrics to files. These callbacks provide a consistent interface for monitoring and recording a model's training progress.

The core callbacks in this file are used for logging training progress, metrics, losses, and other statistics to files as models train:

  • The CSVLogger callback appends metrics like loss and accuracy to a CSV file after each epoch, so the training history can be inspected later.

  • The ProgressCallback displays training progress via console printouts or progress bars, helping users visualize training as it runs.

  • Additional callbacks, such as ShowGraphCallback, plot losses during training to help analyze it.

These callbacks provide a unified interface for reporting training progress, logging results to files, and analyzing how metrics change over the course of training. The callbacks in this file are crucial for monitoring, analyzing, and debugging the training process.
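
Usage sketch:

```python
from fastai.callback.progress import CSVLogger

logger = CSVLogger(fname='history.csv')   # one row of metrics per epoch
# learn.fit(5, cbs=logger)
# df = learn.csv_logger.read_log()        # reload the log as a DataFrame
```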

Model Analysis Callbacks

References: fastai/callback/hook.py

This section covers callbacks that can be used during model training for interpretation, analysis, and debugging purposes. The …/hook.py file contains utilities that allow inspecting and analyzing models.

Together, these utilities in …/hook.py allow debugging models by inspecting activations and checking layer sizes and output shapes. They provide the core functionality behind model analysis callbacks used during training.
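
For example, the ActivationStats callback records statistics of every layer's activations:

```python
from fastai.callback.hook import ActivationStats

astats = ActivationStats(with_hist=True)   # record means, stds, histograms
# learn = vision_learner(dls, resnet18, cbs=astats)
# astats.color_dim(-1)                     # visualize the last layer's histogram
```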

Distributed Training Callbacks

References: fastai

The …/distributed.py file contains utilities for distributing training across multiple GPUs or machines. It provides functionality for wrapping models and handling distributed training.

Functions in the file set up a process per device, and context managers adapt a Learner for parallel and distributed training: distrib_ctx initializes the distributed wrappers around the model and data loaders, then restores the learner on exit.

In summary, this file distributes training across devices using wrappers and context managers that handle the setup and teardown of distributed training.

FP16 Training Callbacks

References: fastai/callback/fp16.py

The …/fp16.py file contains callbacks that enable mixed precision training with float16. During mixed precision training, activations and gradients are computed in float16 to save memory and speed up computation, while a float32 master copy of the parameters is kept for numerical stability.

Utilities maintain separate FP16 and FP32 copies of the model parameters, and functions move gradients between the two copies during the backward and optimization passes.

Gradient overflow during backpropagation is also checked, so the loss scale can be reduced when it occurs. Patch functions such as to_fp16 make it easy to add mixed precision to a Learner: applied as a callback, it converts the model to FP16 format and handles the FP16/FP32 synchronization during training.
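
Usage sketch, assuming a Learner `learn`:

```python
# convert an existing Learner to mixed precision and back
learn = learn.to_fp16()   # adds the mixed precision callback
learn.fit_one_cycle(3)
learn = learn.to_fp32()
```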

Medical Imaging Callbacks

References: fastai/medical/imaging.py

The …/imaging.py file contains specialized functionality for preprocessing medical imaging data during model training. DICOM files store metadata alongside pixel data, and the two are handled separately.

Pixel data is loaded from DICOM files: functions such as get_dicom_files locate the files, and preprocessing like windowing and normalization is applied when reading them. An interface loads the data into PyTorch data loaders.

Transforms can apply this preprocessing during training, for example normalizing pixel values as each item is loaded.
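
A sketch (requires the pydicom dependency; `path` is an illustrative folder of .dcm files):

```python
from fastai.medical.imaging import *

items = get_dicom_files(path)         # recursively finds .dcm files
dcm = items[0].dcmread()              # pydicom Dataset with fastai patches
dcm.show(scale=dicom_windows.lungs)   # display with a standard lung window
```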

Interpretability

References: fastai

The fastai library provides tools for interpreting models implemented in …/interpret.py.

The main Interpretation class handles obtaining predictions and losses from models. It stores these along with the model and data loader in one object, which provides the interface for interpreting models.

Its methods drive the interpretation process, from gathering predictions to visualizing results, for example surfacing the examples with the highest losses.

Additional interpretation is implemented in …/captum.py. The CaptumInterpretation class leverages Captum to compute attribution maps and visualize them.

Functions in …/hook.py allow inspecting activations.

The Interpretation object exposes predictions and losses, enabling analysis of model behavior.
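
Usage sketch, assuming a trained Learner `learn`:

```python
from fastai.interpret import ClassificationInterpretation

interp = ClassificationInterpretation.from_learner(learn)
interp.plot_confusion_matrix()
interp.plot_top_losses(9)        # the examples the model got most wrong
interp.most_confused(min_val=3)  # frequently confused class pairs
```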

Model Analysis

References: fastai/callback/hook.py

The …/hook.py file contains utilities for analyzing models during and after training. It provides functions for inspecting models by passing dummy data through them.
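
For example, the summary method patched onto Learner passes a dummy batch through the model and reports layer shapes, parameter counts, and which layers are trainable:

```python
# assuming a Learner `learn`
learn.summary()
```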

Model Debugging

References: fastai/callback/hook.py, fastai/interpret.py

This section covers identifying and debugging errors in models. The …/hook.py file contains utilities for inspecting models.

The …/interpret.py file provides classes and functions for model interpretation.

Key functionality includes:

  • Evaluating models on dummy data
  • Finding examples with large losses

This allows debugging models by inspecting activations, visualizing predictions on problematic examples, and identifying inputs that result in large losses or errors.
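
A sketch of the shape-checking helpers in hook.py (the layer sizes are illustrative):

```python
from torch import nn
from fastai.callback.hook import dummy_eval, model_sizes

net = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(), nn.Conv2d(8, 16, 3))
print(dummy_eval(net, size=(64, 64)).shape)   # forward a dummy batch
print(model_sizes(net, size=(64, 64)))        # output size of each child
```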

Visualization

References: fastai/callback/hook.py, fastai/interpret.py

The …/hook.py file contains utilities for visualizing models.

The …/interpret.py file contains classes and functions for visualizing model predictions and analyzing model behavior.

Together these utilities provide programmers with tools to inspect model activations, analyze model architecture and capacity, and visualize predictions through a clean interface.

Metrics

References: fastai/metrics.py

The …/metrics.py file contains implementations of many common machine learning metrics. It provides individual metric functions as well as the class for accumulating metrics over batches during training.

The AccumMetric class allows accumulating predictions and targets over batches, then calculates the final metric value at the end. This is more performant than calculating the metric on each batch individually. The class supports preprocessing of predictions and targets through transforms.

Classification metrics directly calculate the metric on each batch. Subclasses accumulate predictions and targets over batches for better performance on larger datasets.

The file also contains many Scikit-Learn metrics converted to fastai's framework via the skm_to_fastai wrapper. It supports both single-label and multi-label classification metrics, metric parameters such as thresholds can be configured, and specialized metrics are provided for semantic segmentation tasks.
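
Usage sketch:

```python
import sklearn.metrics as skm
from fastai.metrics import skm_to_fastai

# wrap any scikit-learn metric as a batch-accumulating fastai metric
BalancedAccuracy = skm_to_fastai(skm.balanced_accuracy_score)
# learn = vision_learner(dls, resnet18, metrics=BalancedAccuracy)
```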

Losses

References: fastai/losses.py

The …/losses.py file provides loss functions that can be used for model analysis during and after training. It contains common losses that are useful for training deep learning models. Additionally, it implements losses designed for semantic segmentation tasks.

The focal-style losses compute a standard loss and apply parameters that downweight easy examples; the segmentation losses one-hot encode targets before computing the loss function.

All losses in this file provide a flattened, easy-to-use interface on top of PyTorch losses. They can be used both for training models as well as analyzing trained models. Losses in particular are useful for specialized tasks like segmentation.

Medical Data

References: fastai/medical

The …/medical directory contains functionality for medical imaging and text data. It provides utilities for loading, preprocessing, and analyzing medical image and text data.

The …/imaging.py file contains interfaces and functions for reading and displaying medical images, with functions implementing common preprocessing techniques such as windowing.

For medical text, the …/text.py file implements main functionality. It contains classes and functions for clinical notes. Preprocessing functions implement cleaning of raw medical notes.