Vectorized Environment#

In reinforcement learning, the agent interacts with environments to improve itself. In this tutorial we will concentrate on the environment part. Although there are many kinds of environments and environment libraries in DRL research, Tianshou chooses to keep its API consistent with OpenAI Gym.

In Gym, an environment receives an action and returns the next observation and reward. This process can be slow and is sometimes the throughput bottleneck in a DRL experiment.

Tianshou provides vectorized environment wrappers for Gym environments. These wrappers let you use multiple CPU cores on your machine to accelerate data sampling.

%%capture

import time

import gymnasium as gym
import numpy as np

from tianshou.env import DummyVectorEnv, SubprocVectorEnv

# Benchmark: sample 1000 transitions with a varying number of worker processes
num_cpus = [1, 2, 5]
for num_cpu in num_cpus:
    env = SubprocVectorEnv([lambda: gym.make("CartPole-v1") for _ in range(num_cpu)])
    env.reset()
    sampled_steps = 0
    time_start = time.time()
    while sampled_steps < 1000:
        # Sample one random action per sub-environment
        act = np.random.choice(2, size=num_cpu)
        obs, rew, terminated, truncated, info = env.step(act)
        # Reset only the sub-environments whose episodes have ended
        done = np.logical_or(terminated, truncated)
        if np.sum(done):
            env.reset(np.where(done)[0])
        sampled_steps += num_cpu
    time_used = time.time() - time_start
    print(f"{time_used}s used to sample 1000 steps with {num_cpu} CPU(s).")
0.2902035713195801s used to sample 1000 steps with 1 CPU(s).
0.18926119804382324s used to sample 1000 steps with 2 CPU(s).
0.15875506401062012s used to sample 1000 steps with 5 CPU(s).

You may notice that the speed does not increase linearly as we add more subprocesses. There are multiple reasons behind this. One reason is that synchronous execution causes a straggler effect: every step waits for the slowest sub-environment to finish. One way to mitigate this is to use asynchronous mode. We leave this for further reading if you are interested.
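
As a teaser, here is a minimal sketch of how asynchronous mode is switched on. The wait_num and timeout constructor arguments are Tianshou's knobs for it; step() then returns as soon as enough sub-environments have finished, and the Parallel Sampling tutorial referenced at the end covers the full stepping protocol.

# Asynchronous mode: step() no longer waits for every sub-environment
async_env = SubprocVectorEnv(
    [lambda: gym.make("CartPole-v1") for _ in range(5)],
    wait_num=3,  # return once 3 of the 5 sub-environments have stepped...
    timeout=0.2,  # ...or after 0.2 seconds, whichever comes first
)
async_env.reset()
# The first step must provide an action for every sub-environment; afterwards,
# each info entry records which sub-environment its result came from.
obs, rew, terminated, truncated, info = async_env.step(np.random.choice(2, size=5))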

Note that SubprocVectorEnv should only be used when stepping the environment is slow. In practice, DummyVectorEnv (or a raw Gym environment) is actually more efficient for a simple environment like CartPole, because it avoids both the straggler effect and the overhead of inter-process communication.
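
To see the difference yourself, you can rerun the benchmark above with DummyVectorEnv, which steps its sub-environments sequentially in the main process (a sketch; absolute timings will vary by machine):

# Same benchmark as above, but sequential and in-process
env = DummyVectorEnv([lambda: gym.make("CartPole-v1") for _ in range(5)])
env.reset()
sampled_steps = 0
time_start = time.time()
while sampled_steps < 1000:
    obs, rew, terminated, truncated, info = env.step(np.random.choice(2, size=5))
    done = np.logical_or(terminated, truncated)
    if np.sum(done):
        env.reset(np.where(done)[0])
    sampled_steps += 5
print(f"{time.time() - time_start}s used to sample 1000 steps with DummyVectorEnv.")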

Usage#

Initialization#

Just pass in a list of functions, each of which returns an initialized environment when called.

# In Gym
gym_env = gym.make("CartPole-v1")


# In Tianshou
def create_cartpole_env() -> gym.Env:
    return gym.make("CartPole-v1")


# We can distribute the environments over the available CPUs, which we assume to be 5 in this case
vector_env = DummyVectorEnv([create_cartpole_env for _ in range(5)])

EnvPool support#

Besides its integrated environment wrappers, Tianshou also fully supports EnvPool. Check out its GitHub page for details.
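
For instance, here is a minimal sketch assuming EnvPool is installed. envpool.make_gymnasium is EnvPool's factory for Gymnasium-compatible batched environments, which return stacked observations and rewards just like Tianshou's own wrappers:

import envpool

# 5 CartPole instances stepped in parallel by EnvPool's C++ backend
envs = envpool.make_gymnasium("CartPole-v1", num_envs=5)
obs, info = envs.reset()
obs, rew, terminated, truncated, info = envs.step(np.random.choice(2, size=5))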

Environment execution and resetting#

The only difference between vectorized environments and standard Gym environments is that the actions passed in and the observations/rewards returned are batched across the sub-environments.

# In gymnasium, env.reset() returns an observation, info tuple
print("In Gym, env.reset() returns a single observation.")
print(gym_env.reset())

# In Tianshou, envs.reset() returns stacked observations.
print("========================================")
print("In Tianshou, a VectorEnv's reset() returns stacked observations.")
print(vector_env.reset())

info = vector_env.step(np.random.choice(2, size=vector_env.env_num))[4]
print(info)
In Gym, env.reset() returns a single observation.
(array([-0.01770119,  0.00036755,  0.03005975, -0.03225723], dtype=float32), {})
========================================
In Tianshou, a VectorEnv's reset() returns stacked observations.
(array([[-0.01207173,  0.040539  , -0.00702054,  0.01818928],
       [-0.02597509,  0.01284341, -0.04365795,  0.02661148],
       [ 0.04171089, -0.00360391,  0.02916941, -0.00388824],
       [ 0.01685583,  0.03310945,  0.02530817, -0.04344835],
       [ 0.00115361,  0.02883396, -0.00903944, -0.00991832]],
      dtype=float32), array([{}, {}, {}, {}, {}], dtype=object))
[{'env_id': 0} {'env_id': 1} {'env_id': 2} {'env_id': 3} {'env_id': 4}]

If we only want to execute some of the environments, the id argument can be used.

info = vector_env.step(np.random.choice(2, size=3), id=[0, 3, 1])[4]
print(info)
[{'env_id': 0} {'env_id': 3} {'env_id': 1}]
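
The id argument also works for reset, which is how the benchmark at the top restarted only the sub-environments that had finished an episode:

# Reset only sub-environments 0 and 3; the others keep their current state
obs, info = vector_env.reset(id=[0, 3])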

Further Reading#

Other environment wrappers in Tianshou#

  • ShmemVectorEnv: uses shared memory instead of pipes, built on SubprocVectorEnv;

  • RayVectorEnv: uses Ray for concurrent execution and is currently the only choice for parallel simulation in a cluster with multiple machines.

Check the documentation for details.
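
All of these wrappers share the constructor interface shown earlier, so switching between them is a one-line change (a sketch; ShmemVectorEnv ships with Tianshou, while RayVectorEnv additionally requires the ray package):

from tianshou.env import RayVectorEnv, ShmemVectorEnv

# Drop-in replacements for DummyVectorEnv / SubprocVectorEnv
shmem_envs = ShmemVectorEnv([create_cartpole_env for _ in range(5)])
ray_envs = RayVectorEnv([create_cartpole_env for _ in range(5)])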

Difference between synchronous and asynchronous mode (How to choose?)#

An explanation can be found in the Parallel Sampling tutorial.