
A framework for training theoretically stable (and robust) Reinforcement Learning control algorithms.
https://rickstaa.dev/stable-learning-control
MIT License

Stable Learning Control


Package Overview

The Stable Learning Control (SLC) framework is a collection of robust Reinforcement Learning control algorithms designed to ensure stability. These algorithms build on the Lyapunov actor-critic architecture introduced by Han et al. 2020 and use Lyapunov stability theory to guarantee stability and robustness. They are specifically tailored for gymnasium environments that feature a positive definite cost function. Several ready-to-use compatible environments can be found in the stable-gym package.
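To illustrate the "positive definite cost function" requirement, below is a minimal sketch of a toy environment that follows the Gymnasium `reset`/`step` API and returns a quadratic cost `c = x^T Q x` (zero only at the goal state, positive everywhere else). The class name, dynamics, and weight matrix are purely illustrative assumptions, not part of SLC or stable-gym:

```python
import numpy as np

class PointMassEnv:
    """Toy Gymnasium-style environment with a positive definite (quadratic) cost.

    Note: this is an illustrative sketch; real environments should subclass
    gymnasium.Env and define observation_space/action_space.
    """

    def __init__(self):
        # Positive definite weight matrix, so cost > 0 for any nonzero state.
        self.Q = np.eye(2)
        self.state = np.zeros(2, dtype=np.float32)
        self.rng = np.random.default_rng()

    def reset(self, *, seed=None, options=None):
        if seed is not None:
            self.rng = np.random.default_rng(seed)
        self.state = self.rng.uniform(-1.0, 1.0, size=2).astype(np.float32)
        return self.state, {}  # Gymnasium API: (observation, info)

    def step(self, action):
        # Simple integrator dynamics, clipped to a bounded state space.
        self.state = np.clip(self.state + 0.1 * np.asarray(action), -10.0, 10.0)
        self.state = self.state.astype(np.float32)
        # Positive definite cost: zero only at the origin (the goal state).
        cost = float(self.state @ self.Q @ self.state)
        terminated = cost < 1e-4
        # Gymnasium API: (observation, cost, terminated, truncated, info)
        return self.state, cost, terminated, False, {}
```

SLC's algorithms interpret the returned scalar as a cost to be minimized rather than a reward to be maximized, which is why positive definiteness matters: the cost vanishing only at the goal is what lets Lyapunov-based analysis certify stability.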

Installation and Usage

Please see the docs for installation and usage instructions.

Contributing

We use husky pre-commit hooks and GitHub Actions to enforce high code quality. Please check the contributing guidelines before contributing to this repository.

[!NOTE]\ We use husky instead of pre-commit, which is more commonly used with Python projects, because some of the tools we wanted to use could not be integrated with pre-commit. Please feel free to open a PR to switch to pre-commit if this is no longer the case.

References