OpenDILab Decision AI Engine






Updated on 2021.08.03 DI-engine-v0.1.1 (beta)

Introduction to DI-engine (beta)

DI-engine is a generalized Decision Intelligence engine. It supports most basic deep reinforcement learning (DRL) algorithms, such as DQN, PPO, and SAC, as well as domain-specific algorithms like QMIX for multi-agent RL, GAIL for inverse RL, and RND for exploration problems. Various training pipelines and customized decision AI applications are also supported. Have fun with exploration and exploitation.


System Optimization and Design



You can simply install DI-engine from PyPI with the following command:

pip install DI-engine

If you use Anaconda or Miniconda, you can install DI-engine from the opendilab conda channel with the following command:

conda install -c opendilab di-engine

For more details, refer to installation.


The detailed documentation is hosted on doc (中文文档).

Quick Start

3 Minutes Kickoff

3 Minutes Kickoff(colab)

3 Minutes Kickoff, Chinese version (kaggle)

Bonus: train an RL agent with a single command line:

ding -m serial -e cartpole -p dqn -s 0
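Conceptually, the serial entry behind this command alternates experience collection with the current policy and learner updates on the collected batch. The sketch below is purely illustrative Python (the function names and structure are assumptions for exposition, not DI-engine's actual API):

```python
import random

def collect(policy, n_steps):
    """Illustrative stand-in for a collector: gather n_steps (obs, action) transitions."""
    return [(random.random(), policy(None)) for _ in range(n_steps)]

def train(params, batch):
    """Illustrative stand-in for a learner update: pretend one gradient step happened."""
    return params + 1

def serial_loop(iterations=3, steps_per_iter=8):
    params = 0
    policy = lambda obs: random.choice([0, 1])   # toy discrete policy
    for _ in range(iterations):
        batch = collect(policy, steps_per_iter)  # 1) collect experience with the current policy
        params = train(params, batch)            # 2) update the learner on that batch
    return params

print(serial_loop())  # -> 3 (one "update" per iteration)
```

In the real engine, the collector, learner, and evaluator are separate workers, which is what makes the distributed (`dist`) variants of this loop possible.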


Algorithm Versatility

| No. | Algorithm | Label | Implementation | Runnable Demo |
| --- | --------- | ----- | -------------- | ------------- |
| 1 | DQN | discrete | policy/dqn | python3 -u cartpole_dqn_main.py / ding -m serial -c cartpole_dqn_config.py -s 0 |
| 2 | C51 | discrete | policy/c51 | ding -m serial -c cartpole_c51_config.py -s 0 |
| 3 | QRDQN | discrete | policy/qrdqn | ding -m serial -c cartpole_qrdqn_config.py -s 0 |
| 4 | IQN | discrete | policy/iqn | ding -m serial -c cartpole_iqn_config.py -s 0 |
| 5 | Rainbow | discrete | policy/rainbow | ding -m serial -c cartpole_rainbow_config.py -s 0 |
| 6 | SQL | discrete, continuous | policy/sql | ding -m serial -c cartpole_sql_config.py -s 0 |
| 7 | R2D2 | dist, discrete | policy/r2d2 | ding -m serial -c cartpole_r2d2_config.py -s 0 |
| 8 | A2C | discrete | policy/a2c | ding -m serial -c cartpole_a2c_config.py -s 0 |
| 9 | PPO | discrete, continuous | policy/ppo | python3 -u cartpole_ppo_main.py / ding -m serial_onpolicy -c cartpole_ppo_config.py -s 0 |
| 10 | PPG | discrete | policy/ppg | python3 -u cartpole_ppg_main.py |
| 11 | ACER | discrete, continuous | policy/acer | ding -m serial -c cartpole_acer_config.py -s 0 |
| 12 | IMPALA | dist, discrete | policy/impala | ding -m serial -c cartpole_impala_config.py -s 0 |
| 13 | DDPG | continuous | policy/ddpg | ding -m serial -c pendulum_ddpg_config.py -s 0 |
| 14 | TD3 | continuous | policy/td3 | python3 -u pendulum_td3_main.py / ding -m serial -c pendulum_td3_config.py -s 0 |
| 15 | SAC | continuous | policy/sac | ding -m serial -c pendulum_sac_config.py -s 0 |
| 16 | QMIX | MARL | policy/qmix | ding -m serial -c smac_3s5z_qmix_config.py -s 0 |
| 17 | COMA | MARL | policy/coma | ding -m serial -c smac_3s5z_coma_config.py -s 0 |
| 18 | QTran | MARL | policy/qtran | ding -m serial -c smac_3s5z_qtran_config.py -s 0 |
| 19 | WQMIX | MARL | policy/wqmix | ding -m serial -c smac_3s5z_wqmix_config.py -s 0 |
| 20 | CollaQ | MARL | policy/collaq | ding -m serial -c smac_3s5z_collaq_config.py -s 0 |
| 21 | GAIL | IL | reward_model/gail | ding -m serial_reward_model -c cartpole_dqn_config.py -s 0 |
| 22 | SQIL | IL | entry/sqil | ding -m serial_sqil -c cartpole_sqil_config.py -s 0 |
| 23 | HER | exp | reward_model/her | python3 -u bitflip_her_dqn.py |
| 24 | RND | exp | reward_model/rnd | python3 -u cartpole_ppo_rnd_main.py |
| 25 | CQL | offline | policy/cql | python3 -u d4rl_cql_main.py |
| 26 | PER | other | worker/replay_buffer | rainbow demo |
| 27 | GAE | other | rl_utils/gae | ppo demo |

discrete means discrete action space, a label applied only to the standard DRL algorithms (Nos. 1-15)

continuous means continuous action space, a label applied only to the standard DRL algorithms (Nos. 1-15)

dist means a distributed-training RL algorithm (collector and learner run in parallel)

MARL means a multi-agent RL algorithm

exp means an RL algorithm focused on exploration and sparse reward

IL means Imitation Learning, including Behaviour Cloning, Inverse RL, and Adversarial Structured IL

offline means an offline RL algorithm

other means an algorithm from another sub-direction, usually used as a plug-in in the whole pipeline

P.S.: The .py files in the Runnable Demo column can be found in dizoo
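As a concrete illustration of one of the plug-in utilities above (No. 27, GAE), here is a minimal pure-Python sketch of Generalized Advantage Estimation. It is an independent reimplementation for illustration only, not DI-engine's rl_utils/gae:

```python
def gae_advantages(rewards, values, dones, gamma=0.99, lam=0.95):
    """Generalized Advantage Estimation (Schulman et al., 2016).

    rewards, dones: length-T lists for one collected trajectory segment.
    values: length T+1; the last entry is the bootstrap value of the state
    reached after the final transition.
    """
    T = len(rewards)
    advantages = [0.0] * T
    running = 0.0
    for t in reversed(range(T)):
        nonterminal = 0.0 if dones[t] else 1.0
        # One-step TD error: delta_t = r_t + gamma * V(s_{t+1}) - V(s_t)
        delta = rewards[t] + gamma * values[t + 1] * nonterminal - values[t]
        # Discounted sum of future TD errors with decay gamma * lam
        running = delta + gamma * lam * nonterminal * running
        advantages[t] = running
    return advantages

# With gamma = lam = 1 and a zero value function, the advantage reduces to
# the undiscounted return-to-go:
print(gae_advantages([1, 1, 1], [0, 0, 0, 0], [False, False, True],
                     gamma=1.0, lam=1.0))  # -> [3.0, 2.0, 1.0]
```

The `lam` parameter interpolates between low-variance one-step TD advantages (`lam=0`) and unbiased Monte-Carlo returns (`lam=1`), which is why on-policy demos such as PPO pair naturally with GAE.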

Environment Versatility

| No. | Environment | Label | Visualization | dizoo link |
| --- | ----------- | ----- | ------------- | ---------- |
| 1 | atari | discrete | original | dizoo link |
| 2 | box2d/bipedalwalker | continuous | original | dizoo link |
| 3 | box2d/lunarlander | discrete | original | dizoo link |
| 4 | classic_control/cartpole | discrete | original | dizoo link |
| 5 | classic_control/pendulum | continuous | original | dizoo link |
| 6 | competitive_rl | discrete, marl | original | dizoo link |
| 7 | gfootball | discrete, sparse | original | dizoo link |
| 8 | minigrid | discrete, sparse | original | dizoo link |
| 9 | mujoco | continuous | original | dizoo link |
| 10 | multiagent_particle | discrete, marl | original | dizoo link |
| 11 | overcooked | discrete, marl | original | dizoo link |
| 12 | procgen | discrete | original | dizoo link |
| 13 | pybullet | continuous | original | dizoo link |
| 14 | smac | discrete, marl, sparse | original | dizoo link |
| 15 | d4rl | offline | original | dizoo link |
| 16 | league_demo | discrete, marl | original | dizoo link |
| 17 | pomdp atari | discrete | | dizoo link |
| 18 | bsuite | discrete | original | dizoo link |

discrete means discrete action space

continuous means continuous action space

MARL means a multi-agent RL environment

sparse means an environment with sparse reward, relevant to exploration

offline means an offline RL environment

P.S.: Some environments in Atari, such as MontezumaRevenge, are also of the sparse-reward type


We appreciate all contributions that improve DI-engine, both algorithms and system designs. Please refer to CONTRIBUTING.md for more guidance, and see our roadmap at this link.

Users can join our Slack channel or our forum for more detailed discussion.

For future plans or milestones, please refer to our GitHub Projects.


If you use DI-engine in your research, please cite it as follows (BibTeX; the entry key below is illustrative):

@misc{di-engine,
    title={{DI-engine: OpenDILab} Decision Intelligence Engine},
    author={DI-engine Contributors},
    publisher={GitHub},
    howpublished={\url{https://github.com/opendilab/DI-engine}},
    year={2021},
}


DI-engine is released under the Apache 2.0 license.
