Core#
gym.Env#
- gym.Env.step(self, action: ActType) → Tuple[ObsType, float, bool, bool, dict] #
Run one timestep of the environment’s dynamics.
When the end of an episode is reached, you are responsible for calling reset() to reset this environment's state.
Accepts an action and returns a tuple (observation, reward, terminated, truncated, info).
- Parameters:
action (ActType) – an action provided by the agent
- Returns:
observation (object) – this will be an element of the environment's observation_space. This may, for instance, be a numpy array containing the positions and velocities of certain objects.
reward (float) – The amount of reward returned as a result of taking the action.
terminated (bool) – whether a terminal state (as defined under the MDP of the task) is reached. In this case further step() calls could return undefined results.
truncated (bool) – whether a truncation condition outside the scope of the MDP is satisfied. Typically a time limit, but could also be used to indicate the agent physically going out of bounds. Can be used to end the episode prematurely before a terminal state is reached.
info (dictionary) – info contains auxiliary diagnostic information (helpful for debugging, learning, and logging). This might, for instance, contain: metrics that describe the agent’s performance state, variables that are hidden from observations, or individual reward terms that are combined to produce the total reward. It also can contain information that distinguishes truncation and termination, however this is deprecated in favour of returning two booleans, and will be removed in a future version.
(deprecated) done (bool) – A boolean value for whether the episode has ended, in which case further step() calls will return undefined results. A done signal may be emitted for different reasons: maybe the task underlying the environment was solved successfully, a certain time limit was exceeded, or the physics simulation has entered an invalid state.
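The contract above can be sketched with a toy environment (hypothetical, not using gym itself): the agent walks right on a line, the episode terminates at position 5, and truncates after a time limit of 20 steps.

```python
# A minimal sketch of the step() contract: step() returns a 5-tuple
# (observation, reward, terminated, truncated, info), and the caller
# loops until either terminated or truncated is True.
class ToyEnv:
    def __init__(self, time_limit=20):
        self.time_limit = time_limit

    def reset(self):
        self.pos = 0
        self.t = 0
        return self.pos, {}  # (observation, info)

    def step(self, action):
        self.t += 1
        self.pos += action                      # action is -1 or +1
        terminated = self.pos >= 5              # terminal state of the MDP
        truncated = self.t >= self.time_limit   # condition outside the MDP
        reward = 1.0 if terminated else 0.0
        return self.pos, reward, terminated, truncated, {}

env = ToyEnv()
obs, info = env.reset()
terminated = truncated = False
while not (terminated or truncated):
    obs, reward, terminated, truncated, info = env.step(+1)
print(obs, reward, terminated, truncated)  # 5 1.0 True False
```

With a real gym environment the loop is identical; only the construction (via gym.make) and the action sampling differ.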
- gym.Env.reset(self, *, seed: Optional[int] = None, options: Optional[dict] = None) → Tuple[ObsType, dict] #
Resets the environment to an initial state and returns the initial observation.
This method can reset the environment's random number generator(s) if seed is an integer or if the environment has not yet initialized a random number generator. If the environment already has a random number generator and reset() is called with seed=None, the RNG should not be reset. Moreover, reset() should (in the typical use case) be called with an integer seed right after initialization and then never again.
- Parameters:
seed (optional int) – The seed that is used to initialize the environment's PRNG. If the environment does not already have a PRNG and seed=None (the default option) is passed, a seed will be chosen from some source of entropy (e.g. timestamp or /dev/urandom). However, if the environment already has a PRNG and seed=None is passed, the PRNG will not be reset. If you pass an integer, the PRNG will be reset even if it already exists. Usually, you want to pass an integer right after the environment has been initialized and then never again.
options (optional dict) – Additional information to specify how the environment is reset (optional, depending on the specific environment)
- Returns:
observation (object) – Observation of the initial state. This will be an element of observation_space (typically a numpy array) and is analogous to the observation returned by step().
info (dictionary) – This dictionary contains auxiliary information complementing observation. It should be analogous to the info returned by step().
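The seeding paradigm can be sketched with a toy environment (hypothetical, not using gym itself; the stdlib random module stands in for the environment's PRNG): the RNG is created or re-seeded only when an integer seed is passed, and left alone on seed=None.

```python
import random

# A minimal sketch of the reset()/seed contract: re-seed on an explicit
# integer seed, create the RNG lazily on first use, and leave it alone
# when reset(seed=None) is called on an already-seeded environment.
class SeededToyEnv:
    def __init__(self):
        self.np_random = None  # lazily initialised RNG

    def reset(self, *, seed=None, options=None):
        if seed is not None or self.np_random is None:
            self.np_random = random.Random(seed)  # seed=None falls back to entropy
        obs = self.np_random.random()  # random initial observation
        return obs, {}

env = SeededToyEnv()
first, _ = env.reset(seed=42)        # seed once, right after construction
second, _ = env.reset()              # seed=None: the RNG is NOT reset
again, _ = SeededToyEnv().reset(seed=42)
print(first == again, first == second)  # True False
```

Because the RNG is only re-seeded on an explicit integer, two environments seeded with 42 produce the same initial observation, while a follow-up reset(seed=None) continues the existing random stream.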
- gym.Env.render(self) Optional[Union[RenderFrame, List[RenderFrame]]] #
Compute the render frames as specified by the render_mode attribute set during initialization of the environment.
The set of supported modes varies per environment. (And some third-party environments may not support rendering at all.) By convention, if render_mode is:
None (default): no render is computed.
human: render() returns None. The environment is continuously rendered in the current display or terminal. Usually for human consumption.
rgb_array: return a single frame representing the current state of the environment. A frame is a numpy.ndarray with shape (x, y, 3) representing RGB values for an x-by-y pixel image.
rgb_array_list: return a list of frames representing the states of the environment since the last reset. Each frame is a numpy.ndarray with shape (x, y, 3), as with rgb_array.
ansi: return a string (str) or StringIO.StringIO containing a terminal-style text representation for each time step. The text can include newlines and ANSI escape sequences (e.g. for colors).
Note
Make sure that your class’s metadata ‘render_modes’ key includes the list of supported modes. It’s recommended to call super() in implementations to use the functionality of this method.
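The dispatch on render_mode described above can be sketched with a toy environment (hypothetical, not using gym itself; numpy is used only to build the rgb_array frame):

```python
import numpy as np

# A minimal sketch of render(): the return type depends on the
# render_mode chosen at construction time, and metadata advertises
# the supported modes.
class RenderToyEnv:
    metadata = {"render_modes": ["human", "rgb_array", "ansi"]}

    def __init__(self, render_mode=None):
        assert render_mode is None or render_mode in self.metadata["render_modes"]
        self.render_mode = render_mode

    def render(self):
        if self.render_mode is None:
            return None                      # no render is computed
        if self.render_mode == "human":
            print("state")                   # draw to the display/terminal
            return None
        if self.render_mode == "rgb_array":
            # one RGB frame: x-by-y pixels, 3 channels
            return np.zeros((64, 64, 3), dtype=np.uint8)
        if self.render_mode == "ansi":
            return "state\n"                 # terminal-style text

frame = RenderToyEnv(render_mode="rgb_array").render()
print(frame.shape)  # (64, 64, 3)
```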
Attributes#
- Env.action_space: Space[ActType]#
This attribute gives the format of valid actions. It is of datatype Space provided by Gym. For example, if the action space is of type Discrete and gives the value Discrete(2), this means there are two valid discrete actions: 0 & 1.
>>> env.action_space
Discrete(2)
>>> env.observation_space
Box(-3.4028234663852886e+38, 3.4028234663852886e+38, (4,), float32)
- Env.observation_space: Space[ObsType]#
This attribute gives the format of valid observations. It is of datatype Space provided by Gym. For example, if the observation space is of type Box and the shape of the object is (4,), this denotes that a valid observation will be an array of 4 numbers. We can check the box bounds as well with attributes.
>>> env.observation_space.high
array([4.8000002e+00, 3.4028235e+38, 4.1887903e-01, 3.4028235e+38], dtype=float32)
>>> env.observation_space.low
array([-4.8000002e+00, -3.4028235e+38, -4.1887903e-01, -3.4028235e+38], dtype=float32)
- Env.reward_range = (-inf, inf)#
This attribute is a tuple corresponding to the minimum and maximum possible rewards. The default range is (-inf, +inf). You can set it if you want a narrower range.
Additional Methods#
gym.Wrapper#
- class gym.Wrapper(env: Env)#
Wraps an environment to allow a modular transformation of the step() and reset() methods.
This class is the base class for all wrappers. A subclass can override some methods to change the behavior of the original environment without touching the original code.
Note
Don't forget to call super().__init__(env) if the subclass overrides __init__().
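The delegation pattern of a Wrapper subclass can be sketched as follows (hypothetical: a toy environment and a plain class stand in for gym and gym.Wrapper here, so the sketch runs without gym installed):

```python
# A minimal sketch of a wrapper that forwards step()/reset() to the
# wrapped environment and additionally counts the steps taken.
class ToyEnv:
    def reset(self):
        self.pos = 0
        return self.pos, {}

    def step(self, action):
        self.pos += action
        return self.pos, 0.0, False, False, {}

class StepCounter:
    """Stands in for `class StepCounter(gym.Wrapper)`."""

    def __init__(self, env):
        self.env = env       # gym.Wrapper stores this via super().__init__(env)
        self.num_steps = 0

    def reset(self, **kwargs):
        self.num_steps = 0
        return self.env.reset(**kwargs)

    def step(self, action):
        self.num_steps += 1
        return self.env.step(action)

env = StepCounter(ToyEnv())
env.reset()
for _ in range(3):
    env.step(1)
print(env.num_steps)  # 3
```

With gym available, the same class would inherit from gym.Wrapper and call super().__init__(env), which also forwards any attributes not overridden by the wrapper to the wrapped environment.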
gym.ObservationWrapper#
- class gym.ObservationWrapper(env: Env)#
Superclass of wrappers that can modify observations using observation() for reset() and step().
If you would like to apply a function to the observation that is returned by the base environment before passing it to learning code, you can simply inherit from ObservationWrapper and overwrite the method observation() to implement that transformation. The transformation defined in that method must be defined on the base environment's observation space. However, it may take values in a different space. In that case, you need to specify the new observation space of the wrapper by setting self.observation_space in the __init__() method of your wrapper.
For example, you might have a 2D navigation task where the environment returns dictionaries as observations with keys "agent_position" and "target_position". A common thing to do might be to throw away some degrees of freedom and only consider the position of the target relative to the agent, i.e. observation["target_position"] - observation["agent_position"]. For this, you could implement an observation wrapper like this:

class RelativePosition(gym.ObservationWrapper):
    def __init__(self, env):
        super().__init__(env)
        self.observation_space = Box(shape=(2,), low=-np.inf, high=np.inf)

    def observation(self, obs):
        return obs["target_position"] - obs["agent_position"]
Among others, Gym provides the observation wrapper TimeAwareObservation, which adds information about the index of the timestep to the observation.
gym.RewardWrapper#
- class gym.RewardWrapper(env: Env)#
Superclass of wrappers that can modify the reward returned by a step.
If you would like to apply a function to the reward that is returned by the base environment before passing it to learning code, you can simply inherit from RewardWrapper and overwrite the method reward() to implement that transformation. This transformation might change the reward range; to specify the reward range of your wrapper, you can simply define self.reward_range in __init__().
Let us look at an example: sometimes (especially when we do not have control over the reward because it is intrinsic), we want to clip the reward to a range to gain some numerical stability. To do that, we could, for instance, implement the following wrapper:
class ClipReward(gym.RewardWrapper):
    def __init__(self, env, min_reward, max_reward):
        super().__init__(env)
        self.min_reward = min_reward
        self.max_reward = max_reward
        self.reward_range = (min_reward, max_reward)

    def reward(self, reward):
        return np.clip(reward, self.min_reward, self.max_reward)
gym.ActionWrapper#
- class gym.ActionWrapper(env: Env)#
Superclass of wrappers that can modify the action before env.step().
If you would like to apply a function to the action before passing it to the base environment, you can simply inherit from ActionWrapper and overwrite the method action() to implement that transformation. The transformation defined in that method must take values in the base environment's action space. However, its domain might differ from the original action space. In that case, you need to specify the new action space of the wrapper by setting self.action_space in the __init__() method of your wrapper.
Let's say you have an environment with an action space of type gym.spaces.Box, but you would only like to use a finite subset of actions. Then, you might want to implement the following wrapper:

class DiscreteActions(gym.ActionWrapper):
    def __init__(self, env, disc_to_cont):
        super().__init__(env)
        self.disc_to_cont = disc_to_cont
        self.action_space = Discrete(len(disc_to_cont))

    def action(self, act):
        return self.disc_to_cont[act]

if __name__ == "__main__":
    env = gym.make("LunarLanderContinuous-v2")
    wrapped_env = DiscreteActions(env, [np.array([1, 0]), np.array([-1, 0]),
                                        np.array([0, 1]), np.array([0, -1])])
    print(wrapped_env.action_space)  # Discrete(4)
Among others, Gym provides the action wrappers ClipAction and RescaleAction.