The aim of this lab session is to get you up and running with the Farama Foundation's Gymnasium. If you've previously heard of OpenAI's Gym, Gymnasium is its maintained replacement, created after OpenAI dropped support for Gym.

CleanRL provides implementations of RL algorithms that can be used in conjunction with Gymnasium.

Installation

For the first part of the tutorial, you only need to install Gymnasium.

  1. Make sure you have a Python 3 version >=3.7.1 and <3.10 installed. Note that 3.10 is not currently supported by CleanRL.
  2. Installation documentation for Gymnasium is provided at https://github.com/Farama-Foundation/Gymnasium#installation.
  3. You will need Poetry (https://python-poetry.org/docs/) to install CleanRL.
  4. You can install CleanRL following the notes at https://github.com/vwxyzjn/cleanrl#get-started.

Documentation is at https://gymnasium.farama.org

Trying things out

Tabular Q-learning on your own

A good place to start is with this blog post:

https://medium.com/emergent-future/simple-reinforcement-learning-with-tensorflow-part-0-q-learning-with-tables-and-neural-networks-d195264329d0

Note that this post uses a Jupyter notebook, but you are welcome to use plain Python or IPython instead. The post also targets the older OpenAI Gym API, so you will need to adapt it slightly for Gymnasium: reset() now returns (observation, info), and step() returns (observation, reward, terminated, truncated, info).

Using one of the CleanRL algorithms

See the CleanRL page at https://github.com/vwxyzjn/cleanrl for how to run a pre-written RL algorithm (such as PPO or DQN) on one of the example environments.