A Python-based replay buffer with optimized underlying storage.
Inherits From: PyUniformReplayBuffer, ReplayBuffer
tf_agents.replay_buffers.py_hashed_replay_buffer.PyHashedReplayBuffer(
    data_spec, capacity, log_interval=None
)
This replay buffer deduplicates data in the stored trajectories along the last axis of the observation, which is useful, e.g., when performing frame stacking. For example, if each observation is 4 stacked 84x84 grayscale images forming a shape of [84, 84, 4], the replay buffer will separate out each of the images and deduplicate across each trajectory in case an image is repeated.
Args | |
---|---|
data_spec | An ArraySpec or a list/tuple/nest of ArraySpecs describing a single item that can be stored in this buffer. |
capacity | The maximum number of items that can be stored in the buffer. |
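As a rough illustration of the frame-stacking use case described above, the sketch below builds a buffer whose observations are stacks of four 84x84 grayscale frames. The Trajectory-shaped data_spec, the concrete shapes, and the capacity are illustrative assumptions, not part of this API reference.

```python
import numpy as np

from tf_agents.replay_buffers import py_hashed_replay_buffer
from tf_agents.specs import array_spec
from tf_agents.trajectories import trajectory

# Illustrative data_spec: a Trajectory whose observation is a stack of four
# 84x84 grayscale frames. The last observation axis is the one the buffer
# deduplicates across.
data_spec = trajectory.Trajectory(
    step_type=array_spec.ArraySpec((), np.int32, 'step_type'),
    observation=array_spec.ArraySpec((84, 84, 4), np.uint8, 'observation'),
    action=array_spec.BoundedArraySpec((), np.int32, minimum=0, maximum=3,
                                       name='action'),
    policy_info=(),
    next_step_type=array_spec.ArraySpec((), np.int32, 'next_step_type'),
    reward=array_spec.ArraySpec((), np.float32, 'reward'),
    discount=array_spec.ArraySpec((), np.float32, 'discount'))

replay_buffer = py_hashed_replay_buffer.PyHashedReplayBuffer(
    data_spec=data_spec, capacity=1000)
```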
Methods
add_batch
add_batch(
    items
)
Adds a batch of items to the replay buffer.
Args | |
---|---|
items | An item or list/tuple/nest of items to be added to the replay buffer. items must match the data_spec of this class, with a batch_size dimension added to the beginning of each tensor/array. |
Returns | |
---|---|
Adds items to the replay buffer. |
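Continuing the hypothetical buffer and imports from the constructor sketch above, adding data looks roughly like this. Note the leading batch dimension of 1 on every array; Python-based buffers typically ingest one batch row at a time.

```python
# Hypothetical single-step item matching the illustrative spec above, with a
# leading batch dimension of 1 added to every array.
item = trajectory.Trajectory(
    step_type=np.array([0], dtype=np.int32),
    observation=np.zeros((1, 84, 84, 4), dtype=np.uint8),
    action=np.array([1], dtype=np.int32),
    policy_info=(),
    next_step_type=np.array([1], dtype=np.int32),
    reward=np.array([0.0], dtype=np.float32),
    discount=np.array([1.0], dtype=np.float32))

replay_buffer.add_batch(item)
```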
as_dataset
as_dataset(
    sample_batch_size=None,
    num_steps=None,
    num_parallel_calls=None,
    sequence_preprocess_fn=None,
    single_deterministic_pass=False
)
Creates and returns a dataset that returns entries from the buffer.
A single entry from the dataset is the result of the following pipeline:
- Sample sequences from the underlying data store,
- (optionally) process them with sequence_preprocess_fn,
- (optionally) split them into subsequences of length num_steps,
- (optionally) batch them into batches of size sample_batch_size.

In practice, this pipeline is executed in parallel as much as possible if num_parallel_calls != 1.
Some additional notes:
If num_steps is None, different replay buffers will behave differently. For example, TFUniformReplayBuffer will return single time steps without a time dimension. In contrast, e.g., EpisodicReplayBuffer will return full sequences (since each sequence may be an episode of unknown length, the outermost shape dimension will be None).

If sample_batch_size is None, no batching is performed and there is no outer batch dimension in the returned Dataset entries. This setting is useful with variable episode lengths when using e.g. EpisodicReplayBuffer, because it allows the user to get full episodes back and use tf.data to build padded or truncated batches themselves.

If single_deterministic_pass == True, the replay buffer will make every attempt to ensure every time step is visited once and exactly once in a deterministic manner (though true determinism depends on the underlying data store). Additional work may be done to ensure minibatches do not have multiple rows from the same episode. In some cases, this may mean arguments like num_parallel_calls are ignored.
Args | |
---|---|
sample_batch_size | (Optional.) An optional batch_size to specify the number of items to return. If None (default), a single item is returned which matches the data_spec of this class (without a batch dimension). Otherwise, a batch of sample_batch_size items is returned, where each tensor in items will have its first dimension equal to sample_batch_size and the rest of the dimensions match the corresponding data_spec. |
num_steps | (Optional.) Optional way to specify that sub-episodes are desired. If None (default), a batch of single items is returned. Otherwise, a batch of sub-episodes is returned, where a sub-episode is a sequence of consecutive items in the replay_buffer. The returned tensors will have first dimension equal to sample_batch_size (if sample_batch_size is not None), subsequent dimension equal to num_steps, and remaining dimensions which match the data_spec of this class. |
num_parallel_calls | (Optional.) A tf.int32 scalar tf.Tensor representing the number of elements to process in parallel. If not specified, elements will be processed sequentially. |
sequence_preprocess_fn | (Optional.) Function for preprocessing the collected data before it is split into subsequences of length num_steps. Defined in TFAgent.preprocess_sequence. Defaults to pass-through. |
single_deterministic_pass | Python boolean. If True, the dataset will return a single deterministic pass through its underlying data. NOTE: If the buffer is modified while a Dataset iterator is iterating over this data, the iterator may miss any new data or otherwise have subtly invalid data. |
Returns | |
---|---|
A dataset of type tf.data.Dataset, elements of which are 2-tuples of an item (or sequence/batch of items) and auxiliary info for those items (e.g. ids, sampling probabilities). |
Raises | |
---|---|
NotImplementedError | If a non-default argument value is not supported. |
ValueError | If the data spec contains lists that must be converted to tuples. |
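A minimal sketch of how the dataset is typically consumed, continuing the hypothetical buffer from the earlier sketches. Which argument combinations are supported varies by replay buffer implementation (unsupported ones raise NotImplementedError, as noted above), so this sketch keeps most arguments at their defaults and does not assume a particular element structure.

```python
# Build a dataset of 2-step subsequences sampled from the buffer defined in
# the earlier sketches.
dataset = replay_buffer.as_dataset(num_steps=2)

# Pull a single element; its exact structure follows the Returns description
# above (experience plus auxiliary sampling info).
first_element = next(iter(dataset))
```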
clear
clear()
Resets the contents of the replay buffer.
Returns | |
---|---|
Clears the replay buffer contents. |
gather_all
gather_all()
Returns all the items in the buffer. (deprecated)
Returns | |
---|---|
Returns all the items currently in the buffer as a tensor of shape [B, T, ...], where B = batch size, T = timesteps, and the remaining shape matches the shape spec of the items in the buffer. |
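For completeness, a brief usage sketch of this deprecated method, again continuing the hypothetical buffer from the earlier sketches:

```python
# Deprecated: prefer as_dataset. Fetches everything currently stored; items
# come back stacked with outer [B, T, ...] dimensions as described above.
all_experience = replay_buffer.gather_all()
```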
get_next
get_next(
    sample_batch_size=None, num_steps=None, time_stacked=True
)
Returns an item or batch of items from the buffer. (deprecated)
Args | |
---|---|
sample_batch_size | (Optional.) An optional batch_size to specify the number of items to return. If None (default), a single item is returned which matches the data_spec of this class (without a batch dimension). Otherwise, a batch of sample_batch_size items is returned, where each tensor in items will have its first dimension equal to sample_batch_size and the rest of the dimensions match the corresponding data_spec. See examples below. |
num_steps | (Optional.) Optional way to specify that sub-episodes are desired. If None (default), in non-episodic replay buffers, a batch of single items is returned. In episodic buffers, full episodes are returned (note that sample_batch_size must be None in that case). Otherwise, a batch of sub-episodes is returned, where a sub-episode is a sequence of consecutive items in the replay_buffer. The returned tensors will have first dimension equal to sample_batch_size (if sample_batch_size is not None), subsequent dimension equal to num_steps if time_stacked=True, and remaining dimensions which match the data_spec of this class. See examples below. |
time_stacked | (Optional.) Boolean; when True and num_steps > 1, the items are returned stacked on the time dimension. See the examples below for details. |

Examples of tensor shapes returned (B = batch size, T = timesteps, D = data spec shape):
- get_next(sample_batch_size=None, num_steps=None, time_stacked=True): return shape (non-episodic): [D]; return shape (episodic): [T, D]
- get_next(sample_batch_size=B, num_steps=None, time_stacked=True): return shape (non-episodic): [B, D]; return shape (episodic): not supported
- get_next(sample_batch_size=B, num_steps=T, time_stacked=True): return shape: [B, T, D]
- get_next(sample_batch_size=None, num_steps=T, time_stacked=False): return shape: ([D], [D], ...), i.e. T tensors in a tuple
- get_next(sample_batch_size=B, num_steps=T, time_stacked=False): return shape: ([B, D], [B, D], ...), i.e. T tensors in a tuple
Returns | |
---|---|
A 2-tuple containing an item (or sequence of optionally batched and stacked items) and auxiliary info for the items (e.g. ids, sampling probabilities). |
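The shape examples above translate into calls like the following, once more continuing the hypothetical frame-stacking buffer from the earlier sketches (the concrete sizes are assumptions):

```python
# A single sampled item; per the shape table above, its observation has
# shape [84, 84, 4] for the illustrative spec.
single_item = replay_buffer.get_next()

# A batch of 4 two-step sub-episodes, stacked on the time dimension; the
# sampled observations then have shape [4, 2, 84, 84, 4].
sub_episodes = replay_buffer.get_next(sample_batch_size=4, num_steps=2)
```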
num_frames
num_frames()
Returns the number of frames in the replay buffer.