d3rlpy is an offline deep reinforcement learning library for practitioners and researchers.
```py
import d3rlpy

# prepare dataset
dataset, env = d3rlpy.datasets.get_dataset("hopper-medium-v0")

# prepare algorithm
sac = d3rlpy.algos.SAC()

# train offline
sac.fit(dataset, n_steps=1000000)

# train online
sac.fit_online(env, n_steps=1000000)

# ready to control
actions = sac.predict(x)
```
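Here, `x` stands for a batch of observations. A minimal sketch of what that call looks like, assuming the classic Gym `reset()` API that returns only the observation (the batch size and variable names are illustrative):

```py
import numpy as np

# build a batch of observations with shape (batch_size, observation_dim)
x = np.stack([env.reset() for _ in range(8)])

# greedy actions for the whole batch, shape (8, action_dim)
actions = sac.predict(x)
```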
d3rlpy supports Linux, macOS and Windows.
```
# install via PyPI
$ pip install d3rlpy

# install via Anaconda
$ conda install -c conda-forge d3rlpy

# or run the pre-built Docker image
$ docker run -it --gpus all --name d3rlpy takuseno/d3rlpy:latest bash
```
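A quick way to verify the installation is to import the package and print its version (the only assumption here is the standard `__version__` attribute):

```py
import d3rlpy

# prints the installed version, e.g. 1.1.1
print(d3rlpy.__version__)
```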
algorithm | discrete control | continuous control | offline RL? |
---|---|---|---|
Behavior Cloning (supervised learning) | ✅ | ✅ | |
Neural Fitted Q Iteration (NFQ) | ✅ | ⛔ | ✅ |
Deep Q-Network (DQN) | ✅ | ⛔ | |
Double DQN | ✅ | ⛔ | |
Deep Deterministic Policy Gradients (DDPG) | ⛔ | ✅ | |
Twin Delayed Deep Deterministic Policy Gradients (TD3) | ⛔ | ✅ | |
Soft Actor-Critic (SAC) | ✅ | ✅ | |
Batch Constrained Q-learning (BCQ) | ✅ | ✅ | ✅ |
Bootstrapping Error Accumulation Reduction (BEAR) | ⛔ | ✅ | ✅ |
Conservative Q-Learning (CQL) | ✅ | ✅ | ✅ |
Advantage Weighted Actor-Critic (AWAC) | ⛔ | ✅ | ✅ |
Critic Regularized Regression (CRR) | ⛔ | ✅ | ✅ |
Policy in Latent Action Space (PLAS) | ⛔ | ✅ | ✅ |
TD3+BC | ⛔ | ✅ | ✅ |
Implicit Q-Learning (IQL) | ⛔ | ✅ | ✅ |
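As a naming convention in the v1.x API used throughout this README, algorithms that support both action spaces expose a `Discrete`-prefixed class for discrete control. A short illustrative sketch (double-check the exact class names in the API reference for your version):

```py
import d3rlpy

# continuous control (e.g. MuJoCo / PyBullet)
cql = d3rlpy.algos.CQL()

# discrete control (e.g. Atari) uses the Discrete-prefixed counterpart
discrete_cql = d3rlpy.algos.DiscreteCQL()
```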
d3rlpy is benchmarked to ensure implementation quality. The benchmark scripts are available in the reproductions directory, and the benchmark results are available in the d3rlpy-benchmarks repository.
```py
import d3rlpy

# prepare dataset
dataset, env = d3rlpy.datasets.get_d4rl('hopper-medium-v0')

# prepare algorithm
cql = d3rlpy.algos.CQL(use_gpu=True)

# train
cql.fit(
    dataset,
    eval_episodes=dataset,
    n_epochs=100,
    scorers={
        'environment': d3rlpy.metrics.evaluate_on_environment(env),
        'td_error': d3rlpy.metrics.td_error_scorer,
    },
)
```
See more datasets at d4rl.
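After training, the learned parameters and the greedy policy can be persisted. The sketch below assumes the v1.x `save_model`/`save_policy`/`build_with_dataset` methods; the file names are arbitrary:

```py
# save the trained network parameters
cql.save_model('cql_model.pt')

# export the greedy policy as TorchScript for deployment
cql.save_policy('cql_policy.pt')

# later: rebuild the networks from the dataset shapes and load the weights
new_cql = d3rlpy.algos.CQL(use_gpu=True)
new_cql.build_with_dataset(dataset)
new_cql.load_model('cql_model.pt')
```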
```py
import d3rlpy
from sklearn.model_selection import train_test_split

# prepare dataset
dataset, env = d3rlpy.datasets.get_atari('breakout-expert-v0')

# split dataset
train_episodes, test_episodes = train_test_split(dataset, test_size=0.1)

# prepare algorithm
cql = d3rlpy.algos.DiscreteCQL(
    n_frames=4,
    q_func_factory='qr',
    scaler='pixel',
    use_gpu=True,
)

# start training
cql.fit(
    train_episodes,
    eval_episodes=test_episodes,
    n_epochs=100,
    scorers={
        'environment': d3rlpy.metrics.evaluate_on_environment(env),
        'td_error': d3rlpy.metrics.td_error_scorer,
    },
)
```
See more Atari datasets at d4rl-atari.
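The scorers passed to `fit()` can also be called directly once training has finished. This sketch assumes the v1.x scorer interface, where a scorer is a plain callable taking the algorithm and a list of episodes:

```py
# average return over 10 evaluation rollouts in the real environment
env_scorer = d3rlpy.metrics.evaluate_on_environment(env, n_trials=10)
mean_return = env_scorer(cql, test_episodes)

# temporal-difference error on the held-out episodes
td_error = d3rlpy.metrics.td_error_scorer(cql, test_episodes)
```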
```py
import d3rlpy
import gym
import pybullet_envs  # registers the PyBullet Gym environments

# prepare environment
env = gym.make('HopperBulletEnv-v0')
eval_env = gym.make('HopperBulletEnv-v0')

# prepare algorithm
sac = d3rlpy.algos.SAC(use_gpu=True)

# prepare replay buffer
buffer = d3rlpy.online.buffers.ReplayBuffer(maxlen=1000000, env=env)

# start training
sac.fit_online(env, buffer, n_steps=1000000, eval_env=eval_env)
```
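Experience collected online can be reused for offline training later. The sketch below assumes the v1.x `ReplayBuffer.to_mdp_dataset()` and `MDPDataset.dump()`/`load()` helpers; the file name is arbitrary:

```py
# convert the replay buffer into an MDPDataset and save it to disk
dataset = buffer.to_mdp_dataset()
dataset.dump('sac_online_experience.h5')

# reload it later for offline training
dataset = d3rlpy.dataset.MDPDataset.load('sac_online_experience.h5')
```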
Try cartpole examples on Google Colaboratory!
More tutorials are available here.
Any kind of contribution to d3rlpy would be highly appreciated! Please check the contribution guide.
The release planning can be checked at milestones.
Channel | Link |
---|---|
Chat | Gitter |
Issues | GitHub Issues |
Project | Description |
---|---|
d4rl-pybullet | Offline RL datasets of PyBullet tasks |
d4rl-atari | A d4rl-style library of Google's Atari 2600 datasets |
MINERVA | An out-of-the-box GUI tool for offline RL |
The roadmap to the future release is available in ROADMAP.md.
The paper is available here.
```
@InProceedings{seno2021d3rlpy,
  author = {Takuma Seno and Michita Imai},
  title = {d3rlpy: An Offline Deep Reinforcement Learning Library},
  booktitle = {NeurIPS 2021 Offline Reinforcement Learning Workshop},
  month = {December},
  year = {2021}
}
```
This work is supported by Information-technology Promotion Agency, Japan (IPA), Exploratory IT Human Resources Project (MITOU Program) in the fiscal year 2020.