pettingzoo_env#


class PettingZooEnv(env: BaseWrapper)[source]#

Bases: AECEnv, ABC

The interface for PettingZoo environments.

Multi-agent environments must be wrapped as PettingZooEnv. Here is the usage:

env = PettingZooEnv(...)
# obs is a dict containing obs, agent_id, and mask
obs, info = env.reset()
action = policy(obs)
obs, rew, term, trunc, info = env.step(action)
env.close()

In the returned mask, entries for available actions are set to True and all other entries to False. Further usage can be found at Multi-Agent Reinforcement Learning.
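A policy typically uses this mask to restrict its choice to legal actions. The sketch below assumes an observation dict of the shape described above; the board values and `random_legal_action` helper are hypothetical, for illustration only:

```python
import numpy as np

# Hypothetical observation dict in the shape PettingZooEnv returns:
# "obs" is the raw observation, "agent_id" names the acting agent,
# and "mask" flags which actions are currently legal.
obs = {
    "obs": np.zeros((3, 3)),  # placeholder board observation
    "agent_id": "player_1",
    "mask": [True, False, True, False],
}

def random_legal_action(obs: dict) -> int:
    """Pick a random action among those the mask marks as available."""
    legal = [i for i, ok in enumerate(obs["mask"]) if ok]
    return int(np.random.choice(legal))

action = random_legal_action(obs)
assert obs["mask"][action]  # the chosen action is always legal
```

A learned policy would replace the uniform choice by masking out the logits of unavailable actions before sampling.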

reset(*args: Any, **kwargs: Any) tuple[dict, dict][source]#

Resets the environment to a starting state.

step(action: Any) tuple[dict, list[int], bool, bool, dict][source]#

Accepts and executes the action of the current agent_selection in the environment.

Automatically switches control to the next agent.
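The hand-off between agents can be sketched with a toy stand-in (`ToyTwoAgentEnv` is hypothetical and not part of the library; it only mimics the 5-tuple return and the turn switch):

```python
class ToyTwoAgentEnv:
    """Minimal stand-in (not the real PettingZooEnv) showing how
    step() hands control to the next agent after each action."""

    def __init__(self):
        self.agents = ["player_1", "player_2"]
        self.turn = 0
        self.steps = 0

    def reset(self):
        self.turn = 0
        self.steps = 0
        return {"obs": 0, "agent_id": self.agents[0], "mask": [True]}, {}

    def step(self, action):
        self.steps += 1
        # switch control to the next agent in round-robin order
        self.turn = (self.turn + 1) % len(self.agents)
        obs = {"obs": self.steps, "agent_id": self.agents[self.turn], "mask": [True]}
        term = self.steps >= 4  # end the toy episode after four moves
        return obs, [0, 0], term, False, {}

env = ToyTwoAgentEnv()
obs, info = env.reset()
term = False
while not term:
    obs, rew, term, trunc, info = env.step(0)
    # obs["agent_id"] now names the agent whose turn it is
```

After each call to step, the returned observation belongs to the newly selected agent, so a driver loop can simply feed each observation to the policy of the agent it names.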

close() None[source]#

Closes any resources that should be released.

Closes the rendering window, subprocesses, network connections, or any other resources that should be released.

seed(seed: Any = None) None[source]#
render() Any[source]#

Renders the environment as specified by self.render_mode.

Render mode can be 'human' to display a window. Other render modes in the default environments are 'rgb_array', which returns a numpy array and is supported by all environments outside of classic, and 'ansi', which returns the printed strings (specific to classic environments).