Release Notes


Released on 2024-01-27 - GitHub - PyPI

Minari 0.4.3 Release Notes

New Contributors

Full Changelog: v0.4.2...v0.4.3


Released on 2023-10-09 - GitHub - PyPI

Minari 0.4.2 Release Notes

New Contributors

Full Changelog: v0.4.1...v0.4.2


Released on 2023-07-19 - GitHub - PyPI

v0.4.1 Release Notes

Bugfix: Adds packaging as a dependency for Minari in #121.


Released on 2023-07-19 - GitHub - PyPI

v0.4.0 Release Notes

Important changes from this PR include the move away from observation and action flattening, the move to explicitly and fully support Dict, Tuple, Box, Discrete, and Text spaces, and the move to explicit dataset versioning. Additionally, we have added support for using a subset of an environment's action/observation spaces when creating a dataset.

Finally, we have released new versions of each dataset to make them compliant with our new dataset format. This includes all the changes listed in the "Dataset Updates" section of the release notes.

We have two new tutorials:

Unflattened Dict and Tuple space support

The following excerpt from our documentation shows how unflattened gymnasium.spaces.Dict and gymnasium.spaces.Tuple are now supported.

Consider the case where the observation space is a relatively complex Dict space with the following definition:

observation_space = spaces.Dict(
    {
        "component_1": spaces.Box(low=-1, high=1, dtype=np.float32),
        "component_2": spaces.Dict(
            {
                "subcomponent_1": spaces.Box(low=2, high=3, dtype=np.float32),
                "subcomponent_2": spaces.Box(low=4, high=5, dtype=np.float32),
            }
        ),
    }
)

and the action space is a Box space, the resulting HDF5 file will end up looking as follows:

📄 main_data.hdf5
├ 📁 episode_0
│  ├ 📁 observations
│  │  ├ 💾 component_1
│  │  └ 📁 component_2
│  │     ├ 💾 subcomponent_1
│  │     └ 💾 subcomponent_2
│  ├ 💾 actions
│  ├ 💾 terminations
│  ├ 💾 truncations
│  ├ 💾 rewards
│  ├ 📁 infos
│  │  ├ 💾 infos_datasets
│  │  └ 📁 infos_subgroup
│  │     └ 💾 more_datasets
│  └ 📁 additional_groups
│     └ 💾 additional_datasets
├ 📁 episode_1
├ 📁 episode_2

└ 📁 last_episode_id

Similarly, consider the case where we have a Box space as an observation space and a relatively complex Tuple space as an action space with the following definition:

action_space = spaces.Tuple(
    (
        spaces.Box(low=2, high=3, dtype=np.float32),
        spaces.Tuple(
            (
                spaces.Box(low=2, high=3, dtype=np.float32),
                spaces.Box(low=4, high=5, dtype=np.float32),
            )
        ),
    )
)

In this case, the resulting Minari dataset HDF5 file will end up looking as follows:

📄 main_data.hdf5
├ 📁 episode_0
│  ├ 💾 observations
│  ├ 📁 actions
│  │  ├ 💾 _index_0
│  │  └ 📁 _index_1
│  │     ├ 💾 _index_0
│  │     └ 💾 _index_1
│  ├ 💾 terminations
│  ├ 💾 truncations
│  ├ 💾 rewards
│  ├ 📁 infos
│  │  ├ 💾 infos_datasets
│  │  └ 📁 infos_subgroup
│  │     └ 💾 more_datasets
│  └ 📁 additional_groups
│     └ 💾 additional_datasets
├ 📁 episode_1
├ 📁 episode_2

└ 📁 last_episode_id

EpisodeData: Data format when sampling episodes

Episodes are now sampled as EpisodeData instances that comply with the following format:

| Field | Type | Description |
|-------|------|-------------|
| id | np.int64 | ID of the episode. |
| seed | np.int64 | Seed used to reset the episode. |
| total_timesteps | np.int64 | Number of timesteps in the episode. |
| observations | np.ndarray, list, tuple, dict | Observations for each timestep, including the initial observation. |
| actions | np.ndarray, list, tuple, dict | Actions for each timestep. |
| rewards | np.ndarray | Rewards for each timestep. |
| terminations | np.ndarray | Terminations for each timestep. |
| truncations | np.ndarray | Truncations for each timestep. |
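
As a hedged illustration of this format, an EpisodeData-like record can be sketched as a plain dataclass. `EpisodeDataSketch` and the field values below are hypothetical, and plain Python types stand in for the numpy types listed in the table:

```python
from dataclasses import dataclass
from typing import Any


@dataclass
class EpisodeDataSketch:
    """Illustrative stand-in for Minari's EpisodeData; the real class
    uses np.int64 / np.ndarray fields."""

    id: int
    seed: int
    total_timesteps: int
    observations: Any  # np.ndarray, list, tuple, or dict in the real class
    actions: Any
    rewards: Any
    terminations: Any
    truncations: Any


episode = EpisodeDataSketch(
    id=0,
    seed=42,
    total_timesteps=3,
    # One more entry than timesteps: includes the initial observation.
    observations=[0.0, 0.1, 0.2, 0.3],
    actions=[1, 0, 1],
    rewards=[1.0, 1.0, 1.0],
    terminations=[False, False, True],
    truncations=[False, False, False],
)
```

Note that `observations` holds one more entry than `total_timesteps`, since the initial observation from the reset is included.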

Breaking Changes

New Features

  • Added Python 3.11 support by @rodrigodelazcano in #73
  • Reorganized tests and added more thorough testing of MinariStorage by @balisujohn in #75
  • Added an option to copy data (instead of a reference-based copy) when combining datasets by @Howuhh in #82
  • Fail with a descriptive error if the dataset's environment base library is not installed by @shreyansjainn in #86
  • Made EpisodeData into a dataclass by @younik in #88
  • Added force_download flag for locally existing datasets by @shreyansjainn in #90
  • Added support for text spaces by @younik in #99
  • Added minari.get_normalized_scores that follows the evaluation process in the D4RL datasets. by @rodrigodelazcano in #110
  • Added code to support Minari dataset version specifiers by @rodrigodelazcano in #107
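
The D4RL-style normalization that minari.get_normalized_scores follows can be sketched as a simple linear rescaling. The function name and reference returns below are illustrative, not the actual Minari API:

```python
def normalized_score_sketch(raw_return: float, random_return: float, expert_return: float) -> float:
    """Rescale a raw episode return so that a random policy scores 0.0 and an
    expert policy scores 1.0 (scores are often reported multiplied by 100)."""
    return (raw_return - random_return) / (expert_return - random_return)


# A return halfway between the random and expert references normalizes to 0.5.
score = normalized_score_sketch(50.0, 0.0, 100.0)
```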

Bug Fixes

  • Fixed path printing in the CLI (previously incorrect) by @Howuhh in #83
  • Copy infos from the previous episode if truncated or terminated without reset by @Howuhh in #96
  • Ignore hidden files when listing local datasets by @enerrio in #104
  • h5py group creation bug fix by @rodrigodelazcano in #111

Documentation Changes

  • Added a table describing supported action and observation spaces by @tohsin in #84
  • Added test instructions to contributing.MD by @shreyansjainn in #86
  • Added installation instructions to the basic usage section of the docs, and doc build instructions to the documentation, by @enerrio in #105
  • Added tutorial for space subsetting by @Bamboofungus in #108
  • Added description of EpisodeData to documentation by @enerrio in #109
  • Improved background about PID control in pointmaze dataset creation tutorial by @tohsin in #95
  • Docs now show serialized dataset spaces by @rodrigodelazcano in #116
  • Added a behavior cloning tutorial with Minari and the PyTorch DataLoader by @younik in #102

Misc Changes

Dataset Updates

v1 versions of each provided dataset have been released, and the new dataset format includes the following changes:

  • Observation and action flattening have been removed for pointmaze datasets, as arbitrary nesting of Dict and Tuple spaces is now supported with the new dataset format.
  • v1 and subsequent datasets now have action_space and observation_space fields, which store a serialized representation of the spaces used for the observations and actions in the dataset. It's important to note that these can differ from the spaces of the gymnasium environment mentioned in the dataset spec.
  • v1 and subsequent datasets have a minari_version field, which specifies which versions of Minari they are compatible with.
  • v1 pointmaze datasets copy the last info to the next episode, as fixed in #96.
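
To illustrate what a minari_version compatibility check could look like, here is a minimal sketch using plain tuple comparison. The helper names and the version bounds are hypothetical; an actual implementation would use proper version specifiers (e.g. from the packaging library):

```python
def parse_version(version: str) -> tuple:
    """Split a 'major.minor.patch' string into a comparable tuple of ints."""
    return tuple(int(part) for part in version.split("."))


def is_compatible(installed: str, lower: str, upper: str) -> bool:
    """True if lower <= installed < upper (a typical compatibility range)."""
    return parse_version(lower) <= parse_version(installed) < parse_version(upper)
```

For example, an installed Minari 0.4.2 would satisfy a hypothetical `>=0.4.0,<0.5.0` range, while 0.5.0 would not.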


Released on 2023-05-19 - GitHub - PyPI

v0.3.1 Release notes

Minor release for fixing the following bugs:

  • Fix combining multiple datasets. Use the h5py method dataset.attrs.modify() to update the "author" and "author_email" metadata attributes. Also added CI tests. @Howuhh in #60
  • Fix .github/workflows/build-docs-version.yml. The workflow was failing to run the dataset documentation generation script python docs/_scripts/ as well as to set the SPHINX_GITHUB_CHANGELOG_TOKEN environment variable. @rodrigodelazcano in #71

Full Changelog: v0.3.0...v0.3.1


Released on 2023-05-17 - GitHub - PyPI

v0.3.0: Minari is ready for testing

Minari 0.3.0 Release Notes:

For this beta release, Minari has undergone considerable changes from its previous v0.2.2 version. As a major refactor, the C source code and Cython dependency have been removed in favor of a pure Python API in order to reduce code complexity. If we require a more efficient API in the future we will explore the use of C.

Apart from the API changes and new features we are excited to include the first official Minari datasets which have been re-created from the D4RL project.

The documentation page has also been updated with the latest changes.

We are constantly developing this library. Please don't hesitate to open a GitHub issue or reach out to us directly. Your ideas and contributions are highly appreciated and will help shape the future of this library. Thank you for using our library!

New Features and Improvements

Dataset File Format

We are keeping the HDF5 file format to store the Minari datasets. However, the internal structure of the datasets has been modified: the data is now stored on a per-episode basis. Each Minari dataset has a minimum of one HDF5 file (📄 main_data.hdf5). In the dataset file, the collected transitions are separated into episode groups (📁) that contain 5 required datasets (💾): observations, actions, terminations, truncations, and rewards. Other optional groups and datasets can be included in each episode, such as the infos step return. This structure allows us to store metadata for each episode.

📄 main_data.hdf5
├ 📁 episode_id
│  ├ 💾 observations
│  ├ 💾 actions
│  ├ 💾 terminations
│  ├ 💾 truncations
│  ├ 💾 rewards
│  ├ 📁 infos
│  │  ├ 💾 info datasets
│  │  └ 📁 info subgroup
│  │     └ 💾 info subgroup dataset
│  └ 📁 extra dataset group
│     └ 💾 extra datasets
└ 📁 next_episode_id


When loading a dataset, the MinariDataset object now delegates the HDF5 file access to a MinariStorage object. The MinariDataset provides new methods (MinariDataset.sample_episodes() (#34) and MinariDataset.iterate_episodes() (#54)) to retrieve EpisodeData from the available episode indices in the dataset.

NOTE: for now the user is in charge of creating their own replay buffers with the provided episode sampling methods. We are currently working on creating standard replay buffers (#55) and making Minari datasets compatible with other offline RL learning libraries.

The available episode indices can be filtered using metadata or other information from the episodes' HDF5 datasets with MinariDataset.filter_episodes(condition: Callable[[h5py.Group], bool]) (#34).

dataset = minari.load_dataset("door-human-v0")

print(f'TOTAL EPISODES ORIGINAL DATASET: {dataset.total_episodes}')

# get episodes with mean reward greater than 2
filter_dataset = dataset.filter_episodes(lambda episode: episode["rewards"].attrs.get("mean") > 2)

print(f'TOTAL EPISODES FILTER DATASET: {filter_dataset.total_episodes}')

The episodes in a MinariDataset can also be split into smaller sub-datasets with minari.split_dataset(dataset: MinariDataset, sizes: List[int], seed: int | None = None) (#34).

dataset = minari.load_dataset("door-human-v0")

split_datasets = minari.split_dataset(dataset, sizes=[20, 5], seed=123)

print(f'TOTAL EPISODES FIRST SPLIT: {split_datasets[0].total_episodes}')
print(f'TOTAL EPISODES SECOND SPLIT: {split_datasets[1].total_episodes}')

Finally, Gymnasium release v0.28.0 made possible the conversion of the environment's EnvSpec to a json dictionary. This allowed Minari to save the description of the environment used to generate the dataset into the HDF5 file for later recovery through MinariDataset.recover_environment() (#31). NOTE: the entry_point of the environment must be available; i.e., to recover the environment from the door-human-v0 dataset, the gymnasium-robotics library must be installed.
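
The EnvSpec-to-json round trip can be illustrated with a stdlib-only sketch. The dict below mimics a few EnvSpec attributes; it is not the actual serialization format Minari stores:

```python
import json

# Hypothetical subset of EnvSpec fields, to show the json round trip.
spec = {
    "id": "CartPole-v1",
    "entry_point": "gymnasium.envs.classic_control.cartpole:CartPoleEnv",
    "max_episode_steps": 500,
}

serialized = json.dumps(spec)       # conceptually, what gets stored in the HDF5 file
recovered = json.loads(serialized)  # conceptually, what recover_environment() reads back
```

The recovered entry_point is then imported to reconstruct the environment, which is why the base library of the environment must be installed.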

Dataset Creation (#31)

We are facilitating the logging of environment data by providing a Gymnasium environment wrapper, DataCollectorV0. This wrapper buffers the data from each Gymnasium step transition. DataCollectorV0 is also memory efficient, providing a step/episode scheduler to cache the recorded data. In addition, this wrapper can be initialized with two custom callbacks:

  • StepDataCallback - This callback automatically flattens Dictionary or Tuple observation/action spaces (this functionality will be removed in a future release following the suggestions of #57). This class can be overridden to store additional environment data.

  • EpisodeMetadataCallback - This callback adds metadata to each recorded episode. For now automatic metadata will be added to the rewards dataset of each episode. It can also be overridden to include additional metadata.
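
As a hedged sketch of the callback pattern (not the actual Minari base class or its signature), a metadata callback that computes per-episode reward statistics might look like this:

```python
class EpisodeMetadataCallbackSketch:
    """Illustrative stand-in for an EpisodeMetadataCallback: given an
    episode's rewards, return summary statistics to attach as metadata."""

    def __call__(self, episode_rewards):
        # Compute simple aggregate statistics over the episode's rewards.
        return {
            "rewards_sum": sum(episode_rewards),
            "rewards_mean": sum(episode_rewards) / len(episode_rewards),
            "rewards_max": max(episode_rewards),
            "rewards_min": min(episode_rewards),
        }


callback = EpisodeMetadataCallbackSketch()
metadata = callback([1.0, 2.0, 3.0])
```

Overriding the real callback would follow the same idea: receive the episode data and return extra metadata to store alongside it.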

To save a Minari dataset to disk with a specific dataset id, two functions are provided. If the data is collected by wrapping the environment with a DataCollectorV0, use minari.create_dataset_from_collector_env. Otherwise, you can collect the episode trajectories with dictionary collection buffers and use minari.create_dataset_from_buffers.

These functions return a MinariDataset object, which can be used to checkpoint the data collection process and later append more data with
MinariDataset.update_dataset_from_collector_env(collector_env: DataCollectorV0).

import minari
import gymnasium as gym

env = gym.make('CartPole-v1')
collector_env = minari.DataCollectorV0(env)

dataset_id = 'cartpole-test-v0'

# Collect 1000 episodes for the dataset
for n_step in range(1000):
    collector_env.reset()
    while True:
        action = collector_env.action_space.sample()
        obs, rew, terminated, truncated, info = collector_env.step(action)
        if terminated or truncated:
            break

    # Checkpoint data after each 100 episodes
    if (n_step + 1) % 100 == 0:
        # If the Minari dataset id does not exist create a new dataset, otherwise update the existing one
        if dataset_id not in minari.list_local_datasets():
            dataset = minari.create_dataset_from_collector_env(collector_env=collector_env, dataset_id=dataset_id)
        else:
            dataset.update_dataset_from_collector_env(collector_env)

We provide a curated tutorial in the documentation on how to use these dataset creation tools.

Finally, multiple existing datasets can be combined into a larger dataset. This requires that the datasets to be combined have the same observation/action space as well as the same EnvSpec (except for the max_episode_steps argument, for which the largest value among all the datasets will be selected).

Multiple already existing Minari datasets can be combined under a different name as follows:

dataset_v1 = minari.load_dataset('dataset-v1')
dataset_v2 = minari.load_dataset('dataset-v2')

dataset_v3 = minari.combine_datasets(datasets_to_combine = [dataset_v1, dataset_v2], new_dataset_id = 'dataset-v3')


To improve access to the remote public datasets, we are also including a CLI tool with commands to list, download, and upload Minari datasets.

New Public Datasets

Below is a list of newly available dataset ids from different Gymnasium environments. These datasets have been re-created from the original D4RL project.


Released on 2023-01-04 - GitHub - PyPI

What's Changed

New Contributors

Full Changelog: 0.1.0...0.2.2


Released on 2022-11-04 - GitHub - PyPI

What's Changed

New Contributors

Full Changelog: