

Auto-generated from openai/openai-python by Auto Wiki

GitHub Repository
Written in Python
Watchers 251
Last updated 2024-01-06
License Apache License 2.0
Auto Wiki
Generated at 2024-01-06
Generated from commit f1c7d7

The openai-python repository provides an official Python client library for easily interacting with various OpenAI APIs. It handles authentication, making requests, and working with responses.

The key components include:

  • The main client in …/openai handles API keys, constructing requests, sending them, and parsing responses into Python objects. It provides both synchronous and asynchronous clients.

  • Resources in …/resources represent API endpoints and provide Pythonic wrappers exposing functionality through method calls. These abstract complexities of directly calling endpoints. Resources exist for major APIs like fine-tuning, chat, and audio.

  • Utilities in …/_utils and …/_extras provide helpers for data transformation, logging, streams, NumPy and Pandas support. These are leveraged throughout the library.

  • Models defined in …/types standardize representations of API request/response data using inheritance and validation. This provides consistency.

  • The CLI in …/cli allows interacting with OpenAI APIs directly from terminal commands for convenience.

The library focuses on usability, configurability, and robustness. Key design choices include separating concerns into modules and classes, handling compatibility issues transparently, and providing sync/async/CLI options.

Tests in the tests directory validate functionality through unit, integration, and end-to-end tests. Fixtures and utilities assist testing.

Examples in the examples directory demonstrate real-world usage like assistants and streaming.

Overall openai-python aims to make working with OpenAI seamless from Python through abstraction, validation and tooling support.

Client Functionality

References: src/openai

The main classes that handle authentication, requests, and responses are defined in the …/lib directory. This directory contains core functionality of the OpenAI Python SDK.

The …/ file contains validator functions that are used to analyze a dataset.

Request and response processing is abstracted in …/. Classes inherit methods for making requests.

Authentication is handled via mechanisms defined in …/lib. API keys can be specified during initialization. Environment variables are also checked to auto-configure credentials.

The …/ file defines classes used for modeling and validating API data. It handles compatibility with different versions of Pydantic.

Core Client

References: src/openai/, src/openai/

The …/ file implements the core functionality for interacting with the OpenAI API. It handles authentication, sending requests, and processing responses. It provides a common interface for making calls to different API endpoints.

Exceptions are defined in …/. This file defines exception classes that inherit from a base exception class and allow raising more specific error types from the API.

Configuration is handled through environment variables or default values to support different configuration options instead of only constructor arguments. This provides flexibility. It also handles potential issues with credentials and raises errors.


References: openai-python

The …/ file handles authentication for the Azure API. The main class defined there initializes clients by validating arguments and constructing URLs, and it supports different auth methods. To authenticate requests, it overrides the request method to inject the AAD access token into requests as a bearer token in the Authorization header.

By inheriting common request logic but overriding token injection, the class provides a fully-featured authenticated client to the OpenAI API via Azure. It supports the most common AAD authentication methods like client secret, ensuring Python apps can securely access models hosted on Azure.


References: openai-python

The …/ file handles deserializing API responses into Python objects. The main class defined in this file overrides a parsing method, which allows it to recursively deserialize nested objects in API responses.

Type information from model fields is used to determine how to deserialize nested values. For example, if a field is defined as a list, it will deserialize each element in the list. If a field is defined as another class, it instantiates that class for the value.

Another important function defined in …/ handles coercing different types like dictionaries, lists, and primitives to match the target type. This function is used during deserialization.

By overriding parsing and defining type construction utilities, the models module provides flexible and robust deserialization of API responses into Python objects. Programmers can define models to match response formats.
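As a hedged illustration of the same idea using Pydantic directly (the SDK's actual model base classes differ, and `Completion`/`Choice` here are hypothetical names), nested type annotations drive how raw dictionaries are deserialized:

```python
from typing import List
from pydantic import BaseModel

class Choice(BaseModel):
    index: int
    text: str

class Completion(BaseModel):
    id: str
    choices: List[Choice]  # the annotation drives nested deserialization

# Each dict in "choices" is recursively instantiated as a Choice.
raw = {"id": "cmpl-1", "choices": [{"index": 0, "text": "hi"}]}
completion = Completion(**raw)
print(completion.choices[0].text)  # → hi
```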


References: examples

This section demonstrates usage of the SDK through code examples involving conversational assistants and streaming functionality. Examples show how to initialize clients and interact with models to handle tasks like chat conversations.

The examples directory contains Python files that illustrate common usage patterns for the SDK. Key files include:

  • …/ demonstrates creating an AI assistant. It initializes a client, creates an assistant called "Math Tutor", starts a thread, sends a message to prompt the assistant, runs the assistant on the thread, polls the status, and prints/deletes the assistant.

  • …/ shows asynchronous streaming of completions. It imports a client class, gets an API key, defines an async main function using the client to create a completion stream with a prompt, and asynchronously iterates over the stream printing each completion.

  • …/ contains examples of synchronous and asynchronous streaming. For synchronous streaming, it manually retrieves the first completion and iterates over the response. For asynchronous streaming, it automatically iterates through and prints streamed responses.

These examples demonstrate key client initialization patterns and API usage like conversational assistants and streaming functionality. Programmers can reference these files to understand common tasks.

Conversational Assistants

References: examples/

This code demonstrates how to create an AI assistant and have a conversation with it using the OpenAI Python SDK. The …/ file shows how to initialize a client and then:

  1. Create an AI assistant called "Math Tutor"

  2. Start a new thread

  3. Send an initial message to kick off the conversation

  4. Begin interacting with the assistant on the thread to generate responses

  5. Check the status of the interaction in a loop until it completes

  6. Print the final messages in the thread

  7. Remove the assistant when done

The file initializes a client that is used to interact with the API and manage the assistant resource. It creates the "Math Tutor" assistant and gets an ID. An empty thread is started. A message is sent to prompt the assistant.

An interaction is initiated on the thread. The status is polled in a loop until complete. Once finished, the full conversation is retrieved and printed. The assistant is deleted at the end.

Audio Processing

References: examples/

The …/ file demonstrates OpenAI's audio APIs for processing speech with text-to-speech, speech recognition, and translation functionality. It initializes an OpenAI client and uses functions defined in the client to generate and process audio. A full workflow is demonstrated of generating speech from text, transcribing audio back to text, and translating between languages. The main classes used are the OpenAI client for authentication and making requests.

Cloud Models

References: examples/, examples/

This section covers interacting with models hosted on Microsoft Azure through the OpenAI API. The examples …/ and …/ demonstrate connecting to models deployed to Azure and obtaining completions.

…/ shows importing a class to initialize clients that can connect to Azure deployments. It creates a client without specifying an endpoint, which connects to default models. Another client is specified with an Azure endpoint and deployment, connecting to a specific model. Both clients call a method to generate completions from the respective models.

…/ focuses on authentication for Azure models. It imports functions to handle Azure Active Directory (Azure AD) tokens. A function retrieves a token provider using default Azure credentials. This client handles authentication and can make requests to Azure models. A method is called on the client to chat with a model and get a response.

Both examples demonstrate the key steps to:

  • Import classes for Azure clients and authentication
  • Initialize clients configured for specific Azure deployments
  • Call methods on clients to interact with models through completion, chat, etc.


References: examples/

The …/ file demonstrates handling authentication to the OpenAI API using Azure Active Directory (Azure AD) tokens. It first imports functions for getting an Azure AD token provider using default Azure credentials. This token provider, stored in a variable, can then be used to initialize an authenticated OpenAI client class. The client handles making requests to the API and attaching the necessary authentication headers with each request. It demonstrates a simple conversation by calling a method on the client to send a prompt and receive a response from one of the API's conversational models.

The key aspects of authentication covered in this file include:

  1. Importing the necessary libraries for Azure AD authentication, including functions for getting a token provider using default credentials.

  2. Initializing an authenticated OpenAI client by passing the Azure AD token provider. This client can then make authenticated requests.

  3. Making a request via the client to chat with a model and see the response. This shows how the client automatically attaches the authentication headers to outgoing requests.

By handling the authentication details behind the scenes, this client abstracts away the complexities of attaching tokens to requests and allows the user to focus on interacting with the OpenAI API models in a simple way via method calls. It demonstrates how the SDK can provide an easy interface for accessing protected API endpoints after authenticating with Azure AD tokens.

Request Types

References: examples/, examples/, examples/

This section discusses how the OpenAI API client handles different types of requests. The examples demonstrate creating requests using clients.

For standard requests, the …/ file shows initializing a client without specifying an API key. It then makes a standard non-streaming request by calling a method and passing a simple message. The response is printed.

Streaming requests are demonstrated in …/ and …/. These files show calling a method on the client and setting a parameter to enable streaming. For synchronous streaming, the response is iterated over manually and the JSON is printed.

The …/ file contains examples of both synchronous and asynchronous clients for streaming requests. For synchronous requests, it shows how to manually retrieve the first completion from the response iterator and print the JSON. It also shows how to loop through all data. For asynchronous requests, it retrieves the first completion result, then uses a construct to automatically iterate through and print all streamed responses.

Module Usage

References: examples/

This section covers calling functions on an OpenAI API client without using classes. The file …/ demonstrates this functionality. It imports and configures the client.

It then calls a function directly on the client to start a conversation stream with an AI model. This function returns an object, and a loop iterates over each "chunk" returned from it. The response text is printed by accessing an attribute of the first element in the chunk's list of choices.

No classes are defined by the example; the client library is used entirely through direct function calls. This starts an asynchronous conversational session that prints responses by iterating over the returned object, which yields multiple responses from the API.


References: tests

The tests in the tests directory validate that the OpenAI Python SDK functions as intended by verifying its interactions with the OpenAI API. Comprehensive testing is crucial to ensure correct behavior and prevent regressions as the API evolves.

Tests are divided into unit, integration, and test utility subdirectories. Unit tests isolate individual components, while integration tests call live API endpoints to validate client functionality. Test utilities provide common fixtures and helpers.

The most important files for validating API behavior are found in …/api_resources. These contain integration tests that exercise the SDK by calling API endpoints with a variety of parameters. Tests are split into subdirectories mirroring the API, such as /audio and /fine_tuning.

Within each subdirectory, test classes initialize clients for synchronous and asynchronous testing. Tests assert responses against expected types using functions from …/.

This file provides fixtures that create a clean event loop per test.

Together, these integration tests cover normal usage scenarios while also stressing error conditions. Comparing parsed and raw responses enhances robustness. The comprehensive approach helps prevent regressions from API changes.

Unit Tests

References: tests/lib, tests/test_utils

The …/lib directory contains unit tests that validate individual components of the openai Python library in isolation from each other. These tests help ensure basic functionality and catch errors early by testing pieces of the code separately.

Two key test files in this directory are …/ and …/.

In summary, these unit test files isolate and validate individual components to ensure they operate as expected without dependencies on other parts of the system. This allows developers to easily pinpoint issues and refactor code safely.

Integration Tests

References: tests/api_resources

The …/api_resources directory contains integration tests for the OpenAI API via the OpenAI Python SDK. It aims to thoroughly test the API implementations by validating response formats, types, and headers across the main CRUD operations.

The tests provide validation of the Python SDK against the live OpenAI API. They are split into subdirectories for different API resources, such as …/audio for audio tests.

Within each subdirectory, test files initialize clients for synchronous and asynchronous testing. Tests are parametrized to run with both clients.

The test files call API endpoints with different parameters, assert response types and contents match expectations, and validate pagination. They also test error cases and client behavior.

Some key business logic:

  • The …/ file defines classes used to represent and interact with API resources. These abstract away HTTP requests.

  • Test files set up clients for sync and async testing.

  • A function validates response types match expectations defined in type modules.

  • Tests utilizing functions directly assert properties of raw responses like headers.

  • Fixtures such as those in …/ set up test data for multiple scenarios.

By thoroughly testing each major API endpoint, parameter combinations, and response handling, these integration tests provide a robust validation of the Python SDK implementation against the live OpenAI API. The focus on response formats helps prevent regressions as the API evolves.

Test Utilities

References: tests/

The …/ file provides common fixtures and utilities that are helpful for tests. It contains a fixture which creates a new event loop for each test session. This ensures each test has a clean event loop to run on without any dangling references from prior tests. The event loop is yielded to the test and then closed after the test completes.

This file also sets the log level for the logger named "openai" to DEBUG, enabling more verbose debug logs. It registers "tests.utils" for pytest's assertion rewriting so that helper assertions produce detailed failure messages.

Some key functionality provided in this file includes:

  • A fixture which scopes the event loop to each test session
  • Logging configuration for the "openai" logger
  • Registration of a custom assert rewrite handler

These utilities provide a clean isolated testing environment and tools to help with debugging tests. The fixture in particular prevents issues where tests could interfere with one another's asynchronous operations.
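Such a fixture can be sketched as follows (the scope and body mirror the description above; names are illustrative):

```python
import asyncio
import pytest

@pytest.fixture(scope="session")
def event_loop():
    # Provide a fresh event loop, then close it so no dangling
    # references leak between test sessions.
    loop = asyncio.new_event_loop()
    yield loop
    loop.close()
```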

Command Line Interface

References: src/openai/cli

The OpenAI Python SDK provides a command line interface (CLI) for interacting with the OpenAI API and related tools. The CLI functionality is primarily implemented in the …/cli directory and its submodules.

The …/ file acts as the main entry point. It imports the main logic from …/, which defines classes for argument parsing and validation using Pydantic. A function implements the argument parser and registers main commands.

Configuration of the OpenAI client happens via setting values based on the parsed arguments. Errors are caught and displayed nicely using logic from …/. Proxies can also be configured from the arguments.

The …/_api subdirectory registers CLI commands for the OpenAI API endpoints by calling methods defined in individual API modules. These modules define classes to represent command arguments and contain functions implementing the API logic.

The …/_tools subdirectory registers tools commands like model migration via …/ This file acts as a registry, importing functions from modules defining the actual commands.

Utilities like loading clients and formatting output are provided in …/ Progress tracking uses classes in …/ Pydantic models are defined in …/ for validation.

CLI Functionality

References: src/openai/cli, src/openai/cli/, src/openai/cli/

The main CLI logic and entrypoints are contained in the …/cli directory. This directory provides a command line interface for interacting with the OpenAI API and related tools.

The main entry point is the …/ file, which imports a function from the …/ module. This module contains the core implementation of the CLI.

A function implements the argument parser, registering the main commands by adding parsers for subcommands and options.

Configuration of the OpenAI client happens by setting values based on the parsed arguments. Errors are caught and displayed nicely using logic from the …/ module.

Common CLI utilities are provided in the …/ module, including functions for loading clients and formatting output.

API Commands

References: src/openai/cli/_api, src/openai/cli/_api/, src/openai/cli/_api/

The …/_api directory provides a command line interface (CLI) for interacting with different OpenAI API endpoints. It registers CLI commands through submodules that each expose functionality for a specific API.

The …/ file imports submodules and calls each one's method. This integrates the API functionality into the CLI structure.

The …/chat module registers parsers for the chat completion APIs.

The …/ file contains methods for listing, getting and deleting models.

The …/ implements methods to expose file management functionality through subcommands.

Each submodule defines relevant argument models and maps methods to handle specific tasks like creation, deletion, and retrieval. These methods call the OpenAI API through a client instance to execute the associated actions.
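A hedged sketch of this registration pattern using argparse (the command name and `register_models` hook are hypothetical, not the CLI's actual identifiers):

```python
import argparse

def register_models(subparsers: argparse._SubParsersAction) -> None:
    # Hypothetical mirror of the register() hooks the API modules expose.
    parser = subparsers.add_parser("models.list", help="List available models")
    parser.set_defaults(func=lambda args: "models.list invoked")

cli = argparse.ArgumentParser(prog="openai")
subcommands = cli.add_subparsers(dest="command")
register_models(subcommands)

args = cli.parse_args(["models.list"])
print(args.func(args))  # → models.list invoked
```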


References: src/openai/cli/_tools, src/openai/cli/_tools/, src/openai/cli/_tools/

The Tools section covers various utilities provided by the OpenAI CLI for tasks like migrating models between systems and preparing data for fine-tuning models.

The core functionality is implemented in three Python modules:

  1. …/ exposes an interface for registering commands with the CLI parser. This separates the interface from the implementation.

  2. …/ acts as a central registry, importing commands from other modules and registering them with the CLI.

  3. …/ contains functions for migrating models between systems.

The …/ module implements the main functionality for preparing data for fine-tuning.

Chat Completions

References: src/openai/cli/_api/chat, src/openai/cli/_api/chat/, src/openai/cli/_api/chat/

The file …/ defines a command line interface for creating chat completions. It registers a 'chat.completions.create' subcommand that parses arguments for creating completions.

This file defines structures for the CLI arguments. A named tuple represents a conversation message with a role and content.

The main class handles creation of completions. It has static methods for the non-streaming and streaming creation workflows. For non-streaming, it constructs an object from the CLI arguments, calls the API directly, and prints results. For streaming, it yields completion chunks from a generator as responses are received asynchronously.

Union types allow different parameter structures. Typing is used extensively for validation. The client is retrieved to make direct API calls. Streaming uses a generator to asynchronously yield results.

This provides a clean CLI to create chat completions. Data classes and typing structure the inputs and outputs, while the main class orchestrates the request workflow. Union types and generators enable streaming functionality.


References: src/openai/cli/

The …/ file contains common utility functions used throughout the command line interface (CLI), such as loading clients and formatting output.


References: src/openai/cli/

The …/ file defines Pydantic models for validating API requests and responses. Pydantic is a library used for data validation and settings management using Python type hints.


References: src/openai/cli/

The …/ file contains utilities for tracking progress of long-running operations and allowing cancellation. It defines a custom exception class for signaling cancellation.

The key component tracks read progress by calling a callback function on each read. This allows cancelling a read by raising the cancellation exception from the callback.

A function initializes a progress bar with a total count and description. It returns a callback that simply updates the bar on each call. This provides an interface to track progress of an operation and allow cancellation from the callback.
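The read-tracking idea can be sketched like this (the class and exception names are hypothetical, not the SDK's actual identifiers):

```python
import io

class CancelledReadError(Exception):
    """Raised from the progress callback to abort further reads."""

class ProgressReader:
    """Wrap a file object, reporting the byte count of every read to a callback."""

    def __init__(self, fileobj, callback):
        self._fileobj = fileobj
        self._callback = callback

    def read(self, size=-1):
        data = self._fileobj.read(size)
        self._callback(len(data))  # the callback may raise CancelledReadError
        return data

progress = []
reader = ProgressReader(io.BytesIO(b"abcdef"), progress.append)
reader.read(4)
reader.read(4)
print(progress)  # → [4, 2]
```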


References: src/openai/resources/fine_tuning

The …/fine_tuning directory provides interfaces for managing fine-tuning jobs through the OpenAI API. The main classes for interacting with fine-tuning models and jobs are defined in …/

The class defined in …/ is the main interface for interacting with jobs. It handles requests to the API using the initialized client. Methods make requests to get job details. Pagination is supported, and responses are converted to model objects.

Chat Completions

References: src/openai/resources/chat

The …/chat directory provides Python interfaces for interacting with OpenAI's Chat and Completions APIs. It contains classes that handle the business logic of making requests to these APIs and parsing responses. Classes are exported from …/ for chat completions functionality.

These classes initialize with authentication details and provide methods for generating completions. The key method is defined in …/ and takes parameters. This handles requests and returns responses. Overloaded versions support streaming.

The classes inherit from classes in …/ that handle requests. Parameters are validated and transformed before being sent.

Variants directly return the raw HTTP response without parsing.

The business logic focuses on defining interfaces that abstract request details. Key aspects include handling prompt text and options. Input validation and response handling is automated. This provides simplified access to functionality.

Audio APIs

References: src/openai/resources/audio

The …/audio directory provides the main entry points for interacting with OpenAI's audio APIs through classes defined in several key files. The …/ file collects all audio functionality in one place by importing classes from other modules and re-exporting them.

The …/ file defines classes that initialize subclasses handling different audio functionality defined in other files, including subclasses for speech and translations.

Classes in …/ provide the interface for audio translation. Methods construct POST requests to pass an audio file and parameters to API endpoints. Variants return the raw response instead of parsing it.

Speech Functionality

References: src/openai/resources/audio/

The …/ file provides functionality for text-to-speech through the OpenAI API. It contains classes for generating speech from input text.

The main classes handle the core text-to-speech logic. A method takes parameters like the input text, TTS model, voice, response format, and speed. It constructs a POST request to the TTS endpoint, with an optional request body. This allows passing extra headers, query, and body parameters.

The method returns the audio data from the response content. Inner classes provide versions of the method that directly return the raw HTTP response instead of just the content, enabling access to full response details.

The classes inherit from classes in …/openai to provide common API request functionality like parameter handling and response handling. This provides a high-level Python interface for the TTS API while also exposing lower-level details through the raw response wrappers.


References: src/openai/resources/audio/

The …/ file provides functionality for audio file transcription. It contains a class which handles uploading audio files and sending transcription requests to the OpenAI API.

The class supports synchronous transcription requests. It has a method that uploads the audio file and sends the request to the API. This method accepts parameters like the audio file, transcription model, optional language, prompt, response format, and temperature.

The class provides a Python interface for the OpenAI audio transcription API. It handles file upload, validation of request parameters, and parsing the JSON response. This simplifies transcription tasks for developers interacting with the API.


References: src/openai/resources/audio/

The …/ file provides functionality for translating audio files between languages. It contains classes that handle synchronous and asynchronous audio translation.

A key method takes an audio file and parameters and makes a POST request to perform the translation. It constructs the request body and makes the request using methods defined in parent classes in …/audio. If an audio file is provided, it sets an "Accept" header to support file uploads. Extra parameters can be passed to customize the request.

The classes inherit from classes in …/audio which provide functionality for interacting with the API, such as making requests. One class parses the response into an object, while another returns the raw response.

Beta APIs

References: src/openai/resources/beta

The …/beta directory provides access to OpenAI's beta APIs. It contains modules for interacting with threads, which allow multi-turn conversations with models.

The …/threads module implements classes for managing threads through the API.

The …/messages module contains classes for accessing messages. Classes make synchronous and asynchronous requests to interact with messages.

The …/runs module handles runs resources. Classes provide functionality for runs through API requests and pagination.

Classes provide a Pythonic interface for functionality by initializing clients and making requests, handling requests and common logic. This allows interacting with resources through method calls.

General Beta Resources

References: src/openai/resources/beta, src/openai/resources/beta/, src/openai/resources/beta/

The …/ file provides base functionality for accessing OpenAI beta API resources. It defines classes that initialize clients and handle making synchronous requests.

This file focuses on providing classes that initialize lower level subclasses and provide a natural interface for accessing OpenAI beta resources. The implementation handles common request logic while delegating specific functionality to other modules.


References: src/openai/resources/beta/assistants, src/openai/resources/beta/assistants/, src/openai/resources/beta/assistants/

The file …/ provides functionality for managing files associated with assistants. It contains methods for creating, retrieving, and deleting files.

Its methods handle making HTTP requests to upload, download, and remove files. Model types are used to deserialize API responses, and pagination is supported for listing large numbers of files.


References: src/openai/resources/beta/threads, src/openai/resources/beta/threads/, src/openai/resources/beta/threads/

The …/ file provides the primary interface for interacting with chat threads. It initializes sub-resources defined in other files.

Key methods include creating and retrieving threads. When methods are called, they construct URLs and make requests by calling inherited methods handling common logic.

Request options and messages can be passed as arguments and are processed by utilities before the requests are sent. Responses are deserialized into models.

Asynchronous functionality is also provided, initializing async versions of sub-resources and wrapping methods.


References: src/openai/resources/beta/threads/messages, src/openai/resources/beta/threads/messages/, src/openai/resources/beta/threads/messages/, src/openai/resources/beta/threads/messages/

The …/messages directory handles interacting with message threads and files through the OpenAI API. It contains classes for both synchronous and asynchronous access to message resources.

The …/ file defines two classes that implement functionality for retrieving and listing files. These classes contain methods that make requests to the API and handle response deserialization.

An asynchronous version is also provided to support asynchronous operations. Classes also exist which return raw HTTP responses instead of modeled data.

By separating concerns and providing both synchronous and asynchronous capabilities, this code implements a full-featured Python client for the OpenAI message resources.


References: src/openai/resources/beta/threads/runs, src/openai/resources/beta/threads/runs/, src/openai/resources/beta/threads/runs/, src/openai/resources/beta/threads/runs/

The …/runs module provides functionality for interacting with run and step resources from the OpenAI API. It defines classes that allow accessing these resources both synchronously and asynchronously.

The main classes for interacting with runs are defined in …/. These classes are initialized with the client and provide methods for retrieving and listing runs.

Additional classes are defined which also return the raw HTTP response.

The main classes for interacting with steps are defined in …/ These classes construct URLs for the step endpoints and make requests. The responses are deserialized into model objects representing steps.