Auto Wiki by Mutable.ai

ml-agents

Auto-generated from Unity-Technologies/ml-agents by Mutable.ai Auto Wiki

GitHub Repository
Developer: Unity-Technologies
Written in: C#
Stars: 16k
Watchers: 551
Created: 09/08/2017
Last updated: 04/04/2024
License: Other
Homepage: unity.com/products/machine-learning-agents
Repository: Unity-Technologies/ml-agents
Software Version: 0.0.8 Basic
Generated from: Commit fb2af7
Generated at: 04/04/2024

The Unity ML-Agents Toolkit is a powerful framework for training intelligent agents using deep reinforcement learning and imitation learning. It provides a set of tools and APIs that allow developers to create, train, and deploy reinforcement learning agents in Unity environments. The toolkit is particularly useful for game developers and researchers who want to leverage the capabilities of Unity to build complex, realistic environments for training AI agents.

The most important components of the ML-Agents Toolkit are:

  • The …/Runtime directory, which contains the core functionality for managing agents, sensors, actuators, communication with the training process, and various utility classes and data structures. This includes the Agent class, which represents an individual agent in the environment, and the Academy class, which is responsible for managing the overall training process.
  • The …/trainers directory, which contains the implementation of various reinforcement learning algorithms, such as Proximal Policy Optimization (PPO), Soft Actor-Critic (SAC), and Multi-Agent Posthumous Credit Assignment (MA-POCA). These algorithms are used to train the agents' decision-making policies.
  • The ml-agents-envs directory, which provides the core functionality for managing the communication and interaction between the Python-based training code and the Unity-based environment. This includes wrappers and utilities for integrating Unity environments with external frameworks like Gym and PettingZoo.

The ML-Agents Toolkit relies on several key technologies and algorithms:

  • Deep Reinforcement Learning: The toolkit uses deep neural networks to learn the agents' policies, allowing them to make complex decisions in dynamic environments. The various training algorithms implemented in the …/trainers directory are responsible for optimizing these neural networks.
  • Imitation Learning: The toolkit also supports imitation learning, where agents can learn by observing and mimicking the behavior of expert demonstrations. This is implemented in the Demonstrations directory.
  • Unity Engine: The Unity game engine is used to create the 3D environments and simulations in which the agents are trained and deployed. The toolkit provides a seamless integration between the Unity environment and the Python-based training code.

The key design choices of the ML-Agents Toolkit include:

  • Modular Architecture: The toolkit is designed with a modular architecture, where different components (agents, sensors, actuators, communication, etc.) are implemented as separate, loosely coupled modules. This allows for easy extensibility and customization.
  • Abstraction Layers: The toolkit provides abstraction layers, such as the IActuator and ISensor interfaces, which allow developers to create custom actuators and sensors without needing to understand the underlying implementation details.
  • Separation of Concerns: The toolkit separates the concerns of environment management, agent behavior, and training algorithms, allowing each component to be developed and tested independently.
  • Extensibility: The toolkit is designed to be easily extensible, with support for custom side channels, reward providers, and training algorithms. Developers can add new functionality by implementing the appropriate interfaces and integrating their code with the existing framework.

Core Functionality

The core functionality of the Unity ML-Agents Toolkit is centered around the management of agents, sensors, actuators, communication, and various utility classes. These components work together to enable the training and deployment of reinforcement learning agents within Unity environments.

Agents and Actuators

The ActuatorManager class in the …/Actuators directory is responsible for managing a collection of IActuator objects. The IActuator interface provides an abstraction for executing actions in a Unity ML-Agents environment, including the specification of the actions, the ability to receive actions, and the ability to provide heuristic actions.
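
The pattern can be illustrated with a short sketch: a manager owns several actuators and slices one flat action buffer so each actuator receives only its own segment. This is a minimal Python analogue of the idea, not the toolkit's C# API; `SimpleActuator`, `ActuatorManager`, and the method names here are hypothetical.

```python
class SimpleActuator:
    """Toy actuator that records the action segment it is given."""
    def __init__(self, name, num_continuous_actions):
        self.name = name
        self.num_continuous_actions = num_continuous_actions
        self.last_actions = None

    def on_action_received(self, actions):
        # Store the slice of the action buffer assigned to this actuator.
        self.last_actions = list(actions)


class ActuatorManager:
    """Partitions one flat action buffer across a list of actuators."""
    def __init__(self, actuators):
        self.actuators = list(actuators)

    @property
    def total_actions(self):
        return sum(a.num_continuous_actions for a in self.actuators)

    def execute_actions(self, flat_actions):
        if len(flat_actions) != self.total_actions:
            raise ValueError("action buffer size mismatch")
        offset = 0
        for actuator in self.actuators:
            n = actuator.num_continuous_actions
            actuator.on_action_received(flat_actions[offset:offset + n])
            offset += n


wheels = SimpleActuator("wheels", 2)
arm = SimpleActuator("arm", 3)
manager = ActuatorManager([wheels, arm])
manager.execute_actions([0.1, -0.2, 0.5, 0.0, 1.0])
```

The design lets each actuator declare only its own action size while the manager handles the bookkeeping of the combined buffer.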

Sensors and Observations

The …/Sensors directory contains a variety of sensor implementations that are used by the Unity ML-Agents framework to gather observations from the environment. These sensors include:

Communication and Capabilities

The communication between the Unity ML-Agents environment and the external training process is facilitated through a set of protocol buffer messages defined in the …/CommunicatorObjects directory. These messages provide a standardized way to represent and transmit the data necessary for training and deploying reinforcement learning agents in Unity.

Inference and Model Execution

The ModelRunner class in the …/Inference namespace is responsible for managing the execution of a machine learning model for decision-making in Unity ML-Agents. It handles the initialization of the model, the preparation of input tensors, the execution of the model, and the application of the output tensors to the agents' actions.
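
The batching pattern such a runner relies on can be sketched as follows: gather each agent's observation into one batch, run the model once, then scatter the output rows back to the agents by id. The `dummy_model` and `run_inference` names are hypothetical stand-ins, not part of the toolkit.

```python
def run_inference(agent_observations, model):
    """Batch per-agent observations, run the model once, scatter results."""
    agent_ids = list(agent_observations.keys())
    batch = [agent_observations[a] for a in agent_ids]   # shape: (N, obs_dim)
    actions = model(batch)                               # shape: (N, act_dim)
    return {a: actions[i] for i, a in enumerate(agent_ids)}


def dummy_model(batch):
    # Toy policy: action = negated observation (placeholder for a real net).
    return [[-x for x in obs] for obs in batch]


out = run_inference({"agent_0": [1.0, 2.0], "agent_1": [0.5, -0.5]}, dummy_model)
```

Batching all requesting agents into a single forward pass is what makes inference efficient when many agents share one model.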

Utility Classes and Functionality

The …/Runtime directory contains various utility classes and functionality used throughout the ML-Agents Toolkit. Some of the key components in this directory include:

Training Algorithms

The ML-Agents Toolkit provides several reinforcement learning algorithms that can be used to train intelligent agents in Unity environments. The key algorithms implemented in the toolkit are:

Proximal Policy Optimization (PPO)

The Proximal Policy Optimization (PPO) algorithm is a reinforcement learning algorithm implemented in the ML-Agents Toolkit. The core implementation of the PPO algorithm is located in the …/ppo directory.
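
The core PPO update can be summarized by its clipped surrogate objective: the probability ratio between new and old policies is clipped to [1 - eps, 1 + eps], so a single update cannot move the policy too far. This is the textbook form of the objective as a pure-Python sketch, not the repository's implementation.

```python
import math

def ppo_surrogate(old_log_probs, new_log_probs, advantages, eps=0.2):
    """Mean clipped surrogate objective over a batch of samples."""
    total = 0.0
    for old_lp, new_lp, adv in zip(old_log_probs, new_log_probs, advantages):
        ratio = math.exp(new_lp - old_lp)             # pi_new(a|s) / pi_old(a|s)
        clipped = max(min(ratio, 1.0 + eps), 1.0 - eps)
        total += min(ratio * adv, clipped * adv)      # pessimistic bound
    return total / len(advantages)
```

When old and new policies agree, the objective reduces to the mean advantage; when the ratio strays outside the clip range, the gradient through it is cut off.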

Soft Actor-Critic (SAC)

The Soft Actor-Critic (SAC) algorithm is another reinforcement learning algorithm implemented in the ML-Agents Toolkit. This subsection will cover the implementation of the SAC algorithm, including the training process, optimization, and integration with the ML-Agents framework.
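
SAC's distinguishing feature is its entropy-regularized value target: the bootstrap value uses the minimum of two critics minus a temperature alpha times the log-probability of the sampled next action. A minimal sketch of that target computation (textbook form, not this repository's code):

```python
def sac_target(reward, done, gamma, alpha, q1_next, q2_next, next_log_prob):
    """Entropy-regularized bootstrap target for the SAC critics."""
    # Twin critics: take the minimum to reduce overestimation bias.
    next_value = min(q1_next, q2_next) - alpha * next_log_prob
    # No bootstrap past terminal states.
    return reward + gamma * (1.0 - done) * next_value
```

The `- alpha * next_log_prob` term rewards high-entropy (exploratory) policies, which is what makes SAC sample-efficient in practice.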

Multi-Agent Posthumous Credit Assignment (MA-POCA)

The trainer identified as POCA in the ML-Agents Toolkit implements Multi-Agent Posthumous Credit Assignment (MA-POCA), a PPO-style algorithm with a centralized critic designed for training cooperative multi-agent policies. The implementation of the POCA trainer is primarily found in the following files and directories:

Reward Providers

The ML-Agents Toolkit provides various reward providers that can be used to calculate the reward signals for reinforcement learning agents. This subsection will cover the implementation of these reward providers, including the Curiosity-based Reward Provider, the Generative Adversarial Imitation Learning (GAIL) Reward Provider, and the Random Network Distillation (RND) Reward Provider.
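
The curiosity provider's central idea can be sketched in a few lines: a learned forward model predicts the encoding of the next state from the current encoding and the action, and the intrinsic reward is the prediction error. This is a simplified sketch of that idea (the real provider learns the encoder and forward model jointly; the function below only shows the reward computation):

```python
def curiosity_reward(pred_next_encoding, true_next_encoding, strength=1.0):
    """Intrinsic reward = scaled squared error of the forward model's prediction."""
    err = sum((p - t) ** 2
              for p, t in zip(pred_next_encoding, true_next_encoding))
    return strength * 0.5 * err
```

States the forward model predicts poorly (novel states) yield high intrinsic reward, pushing the agent to explore them.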

Behavioral Cloning

The ML-Agents Toolkit includes a Behavioral Cloning (BC) trainer that can be used in conjunction with reinforcement learning algorithms. The BCModule class, defined in the …/module.py file, is responsible for implementing the BC training process.
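
The heart of behavioral cloning is a supervised loss between the policy's actions and the demonstrator's. A simplified sketch for continuous actions (the actual BCModule also handles discrete actions and anneals the BC strength over training; this shows only the core loss):

```python
def bc_loss(policy_actions, demo_actions):
    """Mean squared error between policy and demonstration actions."""
    n = len(policy_actions)
    total = 0.0
    for pa, da in zip(policy_actions, demo_actions):
        total += sum((p - d) ** 2 for p, d in zip(pa, da))
    return total / n
```

Minimizing this loss pulls the policy toward the expert's behavior, which is typically blended with the RL objective early in training.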

Environment Integration

The ML-Agents Toolkit provides functionality for managing the communication and interaction between the Python-based training code and the Unity-based environment. This includes wrappers and utilities for integrating Unity environments with external frameworks like Gym and PettingZoo.

Unity Environment Wrappers

The ML-Agents Toolkit provides a set of wrappers and utilities for integrating Unity environments with external frameworks like Gym and PettingZoo. These wrappers handle the communication and interaction between the Python-based training code and the Unity-based environment, making it easier for researchers and developers to leverage the capabilities of Unity-based environments in their projects.
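
The adapter pattern these wrappers follow can be sketched with a toy environment. `FakeUnityEnv` and its method names are invented stand-ins; only the Gym-style `(obs, reward, done, info)` return shape is meant to be representative.

```python
class FakeUnityEnv:
    """Stand-in for a Unity-side environment with its own API."""
    def __init__(self):
        self.t = 0

    def reset_env(self):
        self.t = 0
        return [0.0, 0.0]

    def advance(self, action):
        self.t += 1
        obs = [float(self.t), float(action)]
        reward = 1.0
        done = self.t >= 3
        return obs, reward, done


class GymStyleWrapper:
    """Adapts the Unity-style API to Gym's reset()/step() conventions."""
    def __init__(self, unity_env):
        self.env = unity_env

    def reset(self):
        return self.env.reset_env()

    def step(self, action):
        obs, reward, done = self.env.advance(action)
        return obs, reward, done, {}   # Gym-style (obs, reward, done, info)


env = GymStyleWrapper(FakeUnityEnv())
obs = env.reset()
obs, reward, done, info = env.step(1)
```

Because the wrapper exposes only the standard interface, any Gym-compatible training library can drive the wrapped environment without knowing about Unity.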

Communication and Messaging

The main functionality of the communication and messaging infrastructure in the ML-Agents toolkit is to facilitate the interaction between the Python-based training code and the Unity-based environment. This is achieved through the use of protocol buffers (protobuf) and various side channel implementations.
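
One way to picture the side-channel mechanism is as length-prefixed message framing over a single byte stream: each message carries a channel id, a payload length, and the payload, so many logical channels share one connection. The exact byte layout below is an illustrative assumption, not the wire specification.

```python
import struct
import uuid

def pack_message(channel_id: uuid.UUID, payload: bytes) -> bytes:
    # [16-byte channel id][4-byte little-endian length][payload]
    return channel_id.bytes + struct.pack("<i", len(payload)) + payload

def unpack_messages(data: bytes):
    """Split a concatenated byte stream back into (channel_id, payload) pairs."""
    messages, offset = [], 0
    while offset < len(data):
        cid = uuid.UUID(bytes=data[offset:offset + 16])
        (length,) = struct.unpack_from("<i", data, offset + 16)
        start = offset + 20
        messages.append((cid, data[start:start + length]))
        offset = start + length
    return messages

cid = uuid.uuid4()
blob = pack_message(cid, b"hello") + pack_message(cid, b"world")
```

Framing like this lets unrelated features (engine configuration, custom game data, statistics) ride on the same RPC connection without interfering with each other.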

Environment Management

The UnityEnvRegistry class in the …/unity_env_registry.py file serves as the core functionality for managing and launching Unity-based environments within the ML-Agents framework.
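
The registry pattern this class follows can be sketched as a mapping from string identifiers to entries that know how to construct an environment. `EnvEntry`, `EnvRegistry`, and the `make()` signature below are illustrative names, not the actual API.

```python
class EnvEntry:
    """An entry that can construct its environment on demand."""
    def __init__(self, identifier, make_fn):
        self.identifier = identifier
        self._make_fn = make_fn

    def make(self, **kwargs):
        return self._make_fn(**kwargs)


class EnvRegistry:
    """Registry mapping string identifiers to environment entries."""
    def __init__(self):
        self._entries = {}

    def register(self, entry):
        self._entries[entry.identifier] = entry

    def __getitem__(self, identifier):
        return self._entries[identifier]

    def __contains__(self, identifier):
        return identifier in self._entries


registry = EnvRegistry()
registry.register(EnvEntry("3DBall", lambda **kw: {"name": "3DBall", **kw}))
env = registry["3DBall"].make(seed=7)
```

Deferring construction to `make()` means the registry can list many environments while only downloading or launching the one the user actually requests.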

Example Environments

The Unity ML-Agents Toolkit provides a collection of example scenes and scripts that demonstrate the functionality of the framework, covering a wide range of game-like environments. These examples serve as a starting point for developers to explore the capabilities of the ML-Agents toolkit and learn how to create their own reinforcement learning agents.

3D Ball Agents

The Unity ML-Agents Toolkit provides two primary 3D ball agent implementations in the …/Scripts directory: Ball3DAgent and Ball3DHardAgent.

Basic Agents

The BasicActuatorComponent class is responsible for creating the BasicActuator instance, which is the main implementation of the IActuator interface in this example. The BasicActuator class has a reference to the BasicController component, which it uses to apply the agent's movement direction. The OnActionReceived() method is called when the agent receives an action, and it converts the action (a discrete value of 0, 1, or 2) into a direction of -1, 0, or 1, which is then passed to the MoveDirection() method in the BasicController class.
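
The conversion described above can be written out directly. The pairing of discrete values to directions below follows the order given in the text (0, 1, 2 become -1, 0, 1); the exact pairing is an assumption that should be checked against BasicController if it matters.

```python
def action_to_direction(action: int) -> int:
    """Map a discrete action index to a movement direction, per the description."""
    mapping = {0: -1, 1: 0, 2: 1}
    return mapping[action]
```

Small translation layers like this are exactly what an actuator is for: the learned policy emits abstract indices, and the actuator grounds them in game-specific behavior.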

Crawler Agents

The CrawlerAgent class is the central component that coordinates the behavior of a crawler agent in the Unity ML-Agents framework. This class is responsible for the following key functionality:

Dungeon Escape Agents

The Dungeon Escape example in the Unity ML-Agents Toolkit showcases the integration of reinforcement learning agents within a game-like environment. The key components that define the agents and environment in this example are:

Food Collector Agents

The FoodCollectorAgent class is responsible for controlling the agent's behavior in the "Food Collector" example scene. It inherits from the Agent class provided by the Unity ML-Agents framework, which gives it access to various methods and properties for managing the agent's state and interactions with the environment.

Grid World Agents

The GridAgent class is the primary agent implementation in the Grid World example of the Unity ML-Agents Toolkit. This class is responsible for managing the agent's behavior, including collecting observations, processing actions, and providing rewards within the grid-based environment.

Hallway Agents

The HallwayAgent class is responsible for the core functionality of the agent in the Hallway example scene of the Unity ML-Agents toolkit. This class handles the agent's movement, collision detection, reward calculation, and episode management.

Match-3 Agents

The Match3Agent class is responsible for implementing the game logic of a Match-3 game for a reinforcement learning agent. It interacts with the Match3Board class, which manages the game board and the state of the cells.

Push Block Agents

The "Push Block" example in the Unity ML-Agents project features several key classes that work together to create the game environment and agent behavior:

Push Block with Input Agents

The "Push Block with Input" example in the Unity ML-Agents Toolkit demonstrates the integration of player input with reinforcement learning agents. The key components in this example are:

Pyramid Agents

The PyramidAgent class is the main entry point for the agent's behavior in the Pyramids environment. It handles the agent's movement, observation collection, action processing, and episode initialization.

Soccer Agents

The AgentSoccer class is responsible for controlling the behavior of the individual soccer agents in the Unity ML-Agents soccer environment. It defines the agent's team (Blue or Purple) and position (Striker, Goalie, or Generic), and handles the agent's movement and reward calculation.

Sorter Agents

The NumberTile class in the …/NumberTile.cs file represents a single tile in the Sorter example. The class has the following key functionality:

Wall Jump Agents

The WallJumpAgent class is responsible for controlling the behavior of the agent in the "Wall Jump" example of the Unity ML-Agents Toolkit. This class handles the agent's movement, jumping, and interaction with the environment, including the detection of collisions with the ground and the goal.

Worm Agents

The WormAgent class is the central component responsible for the implementation of a worm-like agent in the Unity ML-Agents framework. This class inherits from the Agent class provided by the ML-Agents toolkit and manages the agent's body parts, observations, actions, and rewards.

Walker Agents

The WalkerAgent class in the …/WalkerAgent.cs file is the core implementation of a walking agent in the Unity ML-Agents framework. The agent is a ragdoll-based character with various body parts, and the goal is to train the agent to walk at a target speed while maintaining balance and orientation towards the target.

Localized Documentation

References: localized_docs

The localized_docs directory contains documentation and installation guides for the ML-Agents Toolkit, a Unity-based framework for training intelligent agents using reinforcement learning. The key functionality covered in this directory includes:

Korean Documentation

The …/docs directory contains the Korean translation of the documentation and installation guides for the Unity ML-Agents Toolkit. This subsection covers the key functionality and resources available in this directory.

Installation and Setup

The ML-Agents Toolkit provides detailed instructions for installing and setting up the framework on various platforms, including Windows, Mac, and Unix, using Anaconda and other methods.

Training Algorithms

The Unity ML-Agents Toolkit provides two main training algorithms: Proximal Policy Optimization (PPO) and Imitation Learning using Behavioral Cloning.

Docker Integration

The ML-Agents Toolkit provides functionality for running Unity-based environments within a Docker container, allowing users to train agents without the need to install Python and TensorFlow directly on their system.

Russian Documentation

References: localized_docs/RU

The …/RU directory contains documentation files in Russian that provide guidance and instructions for using the ML-Agents Toolkit in Unity. The key files and subdirectories in this directory are:

Overview

The Unity ML-Agents Toolkit is an open-source project that enables the training of intelligent agents through interaction with environments, games, or simulations using various machine learning techniques. The toolkit supports deep reinforcement learning algorithms such as Proximal Policy Optimization (PPO) and Soft Actor-Critic (SAC), as well as imitation learning methods like Behavioral Cloning.

Getting Started

References: ml-agents

The com.unity.ml-agents.extensions directory contains a set of extensions and additional functionality for the Unity ML-Agents Toolkit. The key components in this directory are:

Turkish Documentation

The …/docs directory contains the Turkish translation of the documentation for the Unity ML-Agents Toolkit. The key files and functionality in this directory are:

Getting Started

The "Getting Started" subsection provides a comprehensive guide for setting up and using the ML-Agents Toolkit in Unity, covering installation, running pre-trained models, training new models, and deployment.

Installation

The ML-Agents Toolkit provides a comprehensive installation process for both the Unity package and the Python packages. This subsection covers the detailed instructions for setting up the toolkit on your system.

Chinese Documentation

The …/docs directory contains documentation files that provide an overview of the Unity ML-Agents toolkit, including installation instructions, environment creation, training processes, and various example environments.

Learning Environment Creation

The process of creating a new learning environment in Unity using the ML-Agents toolkit involves implementing the Agent, Brain, and Academy components. These components work together to define the agent's behavior, the decision-making process, and the overall coordination of the simulation.
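
That lifecycle can be sketched, language-agnostically, in Python: at episode start the agent resets, and on every simulation step it collects observations, receives an action from a policy, and accumulates reward until a terminal condition. Every class and method name below is illustrative rather than the actual C# API.

```python
class ToyAgent:
    """Minimal agent with the reset / observe / act / reward lifecycle."""
    def __init__(self):
        self.position = 0
        self.cumulative_reward = 0.0

    def on_episode_begin(self):
        self.position = 0
        self.cumulative_reward = 0.0

    def collect_observations(self):
        return [self.position]

    def on_action_received(self, action):
        self.position += action            # move left/right on a number line
        # Sparse goal reward at position 3, small time penalty otherwise.
        self.cumulative_reward += 1.0 if self.position == 3 else -0.01


def run_episode(agent, policy, max_steps=10):
    """Academy-style loop: step the agent until done or out of steps."""
    agent.on_episode_begin()
    for _ in range(max_steps):
        obs = agent.collect_observations()
        agent.on_action_received(policy(obs))
        if agent.position == 3:            # goal reached ends the episode
            break
    return agent.cumulative_reward


reward = run_episode(ToyAgent(), lambda obs: 1)
```

During training, the policy callable is the learner being optimized; at deployment it is the frozen trained model, but the environment-side loop is identical.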

Learning Environment Design

The Unity ML-Agents Toolkit provides a comprehensive set of tools and APIs for creating, training, and deploying reinforcement learning agents in Unity environments. The core of the toolkit is the Academy class, which orchestrates the simulation and training process.

ML-Agents Overview

The Unity ML-Agents Toolkit provides a comprehensive framework for training intelligent agents using deep reinforcement learning. The toolkit's key components include the learning environment, Python API, and External Communicator, which work together to enable training and prediction of agent behaviors.
