GenRL

GenRL is a PyTorch reinforcement learning library centered around reproducible, generalizable algorithm implementations and improving accessibility in Reinforcement Learning.

GenRL's current release is v0.0.2. Expect breaking changes.

Reinforcement learning research is moving faster than ever before. To keep up with this growth and ensure that RL research remains reproducible, GenRL aims to aid faster paper reproduction and benchmarking by providing the following main features:

  • PyTorch-first: modular, extensible, and idiomatic Python
  • Tutorials and examples: 20+ tutorials covering everything from basic RL to SOTA deep RL algorithms (with explanations)!
  • Unified Trainer and Logging class: code reusability and a high-level UI
  • Ready-made algorithm implementations: popular RL algorithms that work out of the box
  • Faster benchmarking: automated hyperparameter tuning, environment implementations, etc.

By integrating these features, we aim to eventually support the implementation of any new algorithm in fewer than 100 lines of code.

If you're interested in contributing, feel free to go through the issues and open PRs for code, docs, tests, etc. If you have any questions, please check out the Contributing Guidelines.

Installation

GenRL is compatible with Python 3.6 or later and also depends on PyTorch and OpenAI Gym. The easiest way to install GenRL is with pip, Python's preferred package installer.

$ pip install genrl

Note that GenRL is an active project and routinely publishes new releases. To upgrade GenRL to the latest version, use pip as follows.

$ pip install -U genrl

If you intend to install the latest unreleased version of the library (i.e., from source), you can simply do:

$ git clone https://github.com/SforAiDl/genrl.git
$ cd genrl
$ python setup.py install
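
Alternatively, since the repository ships a standard setup.py, installing the cloned source with pip (run from the repository root) should also work:

$ pip install .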

Usage

To train a Soft Actor-Critic model from scratch on the Pendulum-v0 Gym environment and log rewards to TensorBoard:

from genrl.agents import SAC
from genrl.environments import VectorEnv
from genrl.trainers import OffPolicyTrainer

# Vectorized Pendulum-v0 environment
env = VectorEnv("Pendulum-v0")
# Soft Actor-Critic agent with MLP networks
agent = SAC("mlp", env)
# Off-policy trainer that logs rewards to stdout and TensorBoard
trainer = OffPolicyTrainer(agent, env, log_mode=["stdout", "tensorboard"])
trainer.train()

To train a Tabular Dyna-Q model from scratch on the FrozenLake-v0 gym environment and plot rewards:

import gym

from genrl.agents import QLearning
from genrl.trainers import ClassicalTrainer

# Tabular FrozenLake-v0 environment from OpenAI Gym
env = gym.make("FrozenLake-v0")
# Q-Learning agent
agent = QLearning(env)
# Classical trainer in Dyna mode with a tabular model, trained for 10,000 episodes
trainer = ClassicalTrainer(agent, env, mode="dyna", model="tabular", n_episodes=10000)
episode_rewards = trainer.train()
trainer.plot(episode_rewards)

Tutorials

Algorithms

Deep RL

  • DQN (Deep Q Networks)
    • DQN
    • Double DQN
    • Dueling DQN
    • Noisy DQN
    • Categorical DQN
  • VPG (Vanilla Policy Gradients)
  • A2C (Advantage Actor-Critic)
  • PPO (Proximal Policy Optimization)
  • DDPG (Deep Deterministic Policy Gradients)
  • TD3 (Twin Delayed DDPG)
  • SAC (Soft Actor Critic)
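
As a rough illustration of what the DQN family above computes (not GenRL's internal code), here is a minimal PyTorch sketch of the one-step TD loss, assuming q_net and target_net are Q-networks and the batch tensors come from a replay buffer:

import torch
import torch.nn.functional as F

def dqn_loss(q_net, target_net, states, actions, rewards, next_states, dones, gamma=0.99):
    # Q(s, a) for the actions actually taken in the batch
    q_values = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # Bootstrapped target: r + gamma * max_a' Q_target(s', a') for non-terminal s'
        next_q = target_net(next_states).max(dim=1).values
        target = rewards + gamma * (1 - dones) * next_q
    return F.mse_loss(q_values, target)

Variants such as Double DQN change only how the target is formed: the online network picks the argmax action and the target network evaluates it.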

Classical RL

  • SARSA
  • Q Learning
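
At the core of these classical algorithms is a tabular temporal-difference update. A minimal NumPy sketch of the Q-Learning update (illustrative only, not GenRL's QLearning class):

import numpy as np

def q_learning_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.99):
    # Q(s, a) <- Q(s, a) + alpha * [r + gamma * max_a' Q(s', a') - Q(s, a)]
    td_target = reward + gamma * np.max(Q[next_state])
    Q[state, action] += alpha * (td_target - Q[state, action])
    return Q

SARSA differs only in bootstrapping from the action actually taken in the next state rather than the max.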

Bandit RL

  • Multi Armed Bandits
    • Eps Greedy
    • UCB
    • Thompson Sampling
    • Bayesian Bandits
    • Softmax Explorer
  • Contextual Bandits
    • Eps Greedy
    • UCB
    • Thompson Sampling
    • Bayesian Bandits
    • Softmax Explorer
  • Deep Contextual Bandits
    • Variational Inference
    • Noise sampling for neural network parameters
    • Epsilon greedy with a neural network
    • Bayesian Regression for posterior inference
    • Bootstrapped Ensemble
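
To make the bandit strategies concrete, here is a minimal epsilon-greedy multi-armed bandit sketch (illustrative only; the class name and interface are not GenRL's API):

import numpy as np

class EpsGreedyBandit:
    def __init__(self, n_arms, eps=0.1):
        self.eps = eps
        self.counts = np.zeros(n_arms)   # pulls per arm
        self.values = np.zeros(n_arms)   # running mean reward per arm

    def select_arm(self):
        # Explore a random arm with probability eps, otherwise exploit the best mean
        if np.random.rand() < self.eps:
            return np.random.randint(len(self.values))
        return int(np.argmax(self.values))

    def update(self, arm, reward):
        # Incremental update of the pulled arm's mean reward
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

UCB and Thompson Sampling replace only the arm-selection rule while keeping the same pull-and-update loop.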

Credits and Similar Libraries:
