d3rlpy.online.buffers.ReplayBuffer

class d3rlpy.online.buffers.ReplayBuffer(maxlen, env, as_tensor=False, device=None)

    Standard Replay Buffer.
    Attributes:
        prev_observation (numpy.ndarray) – previously appended observation.
        prev_action (numpy.ndarray or int) – previously appended action.
        prev_transition (d3rlpy.dataset.Transition) – previously appended transition.
        transitions (collections.deque) – list of transitions.
        device (d3rlpy.gpu.Device) – GPU device.
Methods
    append(observation, action, reward, terminal)

        Appends an observation, action, reward, and terminal flag to the buffer.

        If the terminal flag is True, Monte-Carlo returns are computed over the
        entire episode and all of its transitions are appended at once.

        Parameters:
            observation (numpy.ndarray) – observation.
            action (numpy.ndarray or int) – action.
            reward (float) – reward.
            terminal (bool or float) – terminal flag.
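    The episode-boundary behavior above can be sketched with a simplified
    deque-backed buffer. This is not d3rlpy's implementation; the gamma
    discount factor and the internal episode list are assumptions made only
    to illustrate how Monte-Carlo returns could be computed once an episode
    terminates:

    ```python
    from collections import deque

    class SimpleReplayBuffer:
        """Simplified analogue of ReplayBuffer's episode handling.

        NOT d3rlpy's code: gamma is an assumed discount factor used only
        to illustrate the Monte-Carlo return computation on terminal.
        """

        def __init__(self, maxlen, gamma=0.99):
            self.transitions = deque(maxlen=maxlen)
            self.gamma = gamma
            self._episode = []  # buffered transitions of the unfinished episode

        def append(self, observation, action, reward, terminal):
            # hold the transition until the episode ends
            self._episode.append({"obs": observation, "act": action,
                                  "rew": reward, "ret": None})
            if terminal:
                # walk the episode backwards to accumulate discounted returns
                ret = 0.0
                for t in reversed(self._episode):
                    ret = t["rew"] + self.gamma * ret
                    t["ret"] = ret
                # only now are the whole episode's transitions appended
                self.transitions.extend(self._episode)
                self._episode = []


    buf = SimpleReplayBuffer(maxlen=100, gamma=1.0)
    for step in range(3):
        buf.append(observation=step, action=0, reward=1.0, terminal=(step == 2))
    # with gamma=1.0 the returns are 3.0, 2.0, 1.0 from first to last step
    ```

    Note that nothing reaches the shared deque until the terminal flag
    arrives, which matches the description that the whole episode's
    transitions are appended together.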
    sample(batch_size, n_frames=1)

        Returns a sampled mini-batch of transitions.

        If the observations are images, an arbitrary number of frames can be
        stacked via n_frames:

            buffer.observation_shape == (3, 84, 84)

            # stack 4 frames
            batch = buffer.sample(batch_size=32, n_frames=4)
            batch.observations.shape == (32, 12, 84, 84)

        Parameters:
            batch_size (int) – mini-batch size.
            n_frames (int) – the number of frames to stack for image observations.

        Returns:
            mini-batch.

        Return type:
            d3rlpy.dataset.TransitionMiniBatch
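    The 12-channel batch shape in the example comes from concatenating the
    stacked frames along the channel axis. A plain NumPy illustration of the
    shape arithmetic (this mimics the effect, not d3rlpy's internals):

    ```python
    import numpy as np

    # Four (3, 84, 84) image observations stacked along the channel axis
    # yield 3 * 4 = 12 channels, matching the (32, 12, 84, 84) batch shape
    # from sample(batch_size=32, n_frames=4).
    frames = [np.zeros((3, 84, 84), dtype=np.uint8) for _ in range(4)]
    stacked = np.concatenate(frames, axis=0)
    print(stacked.shape)  # (12, 84, 84)
    ```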