This PR unifies the recurrent PPO implementation. In particular:
The agent is now composed of:
A feature extractor: extracts features from both 1D (vector) and 3D (image) observations
A RecurrentModel: composed of an optional pre-rnn-mlp, an LSTM, and an optional post-rnn-mlp
An actor
A critic
The LSTM takes as input the concatenation of the features produced by the feature extractor and the previously played action
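The layout above can be sketched as follows. This is a minimal illustration of the described composition, not the exact classes from this PR; all names, dimensions, and layer choices are illustrative assumptions.

```python
import torch
import torch.nn as nn


class RecurrentModel(nn.Module):
    """Optional pre-RNN MLP -> LSTM -> optional post-RNN MLP (illustrative)."""

    def __init__(self, input_dim: int, hidden_dim: int, pre_mlp: bool = True, post_mlp: bool = True):
        super().__init__()
        self.pre_mlp = (
            nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
            if pre_mlp
            else nn.Identity()
        )
        lstm_in = hidden_dim if pre_mlp else input_dim
        self.lstm = nn.LSTM(lstm_in, hidden_dim)  # expects (seq_len, batch, features)
        self.post_mlp = (
            nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.ReLU())
            if post_mlp
            else nn.Identity()
        )

    def forward(self, x, state):
        x = self.pre_mlp(x)
        x, state = self.lstm(x, state)
        return self.post_mlp(x), state


class RecurrentPPOAgent(nn.Module):
    """Feature extractor + RecurrentModel + actor + critic (illustrative)."""

    def __init__(self, obs_dim: int, act_dim: int, hidden_dim: int = 64):
        super().__init__()
        # A vector-only extractor for brevity; the PR also handles image observations.
        self.feature_extractor = nn.Sequential(nn.Linear(obs_dim, hidden_dim), nn.ReLU())
        # The LSTM input is the concatenation of features and the previous action.
        self.rnn = RecurrentModel(hidden_dim + act_dim, hidden_dim)
        self.actor = nn.Linear(hidden_dim, act_dim)
        self.critic = nn.Linear(hidden_dim, 1)

    def forward(self, obs, prev_action, state=None):
        feat = self.feature_extractor(obs)
        x, state = self.rnn(torch.cat([feat, prev_action], dim=-1), state)
        return self.actor(x), self.critic(x), state
```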
The environment-interaction loop is now clearer
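The interaction loop now has to thread both the LSTM state and the previous action through each step, resetting them at episode boundaries. A minimal sketch of such a loop, assuming an agent with the `(obs, prev_action, state)` signature used above and a toy discrete-action environment (all names hypothetical):

```python
import torch


def rollout(agent, env, num_steps: int, act_dim: int):
    """Collect transitions while carrying the recurrent state and previous action."""
    obs = env.reset()
    state = None                               # LSTM state, reset at episode start
    prev_action = torch.zeros(1, 1, act_dim)   # "no action" before the first step
    transitions = []
    for _ in range(num_steps):
        # Shape inputs as (seq_len=1, batch=1, features) for the LSTM.
        logits, value, state = agent(obs.view(1, 1, -1), prev_action, state)
        action = torch.distributions.Categorical(logits=logits).sample()
        next_obs, reward, done = env.step(action.item())
        transitions.append((obs, action, reward, value))
        prev_action = torch.nn.functional.one_hot(action, act_dim).float()
        obs = next_obs
        if done:
            # New episode: reset observation, recurrent state, and previous action.
            obs = env.reset()
            state = None
            prev_action = torch.zeros(1, 1, act_dim)
    return transitions
```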
Type of Change
Please select the one relevant option below:
Bug fix (non-breaking change that solves an issue)
Breaking change (fix or feature that would cause existing functionality to not work as expected)
Checklist
Please confirm that the following tasks have been completed:
[x] I have tested my changes locally and they work as expected. (Please describe the tests you performed.)
[x] I have added unit tests for my changes, or updated existing tests if necessary.
[x] I have updated the documentation, if applicable.
[x] I have installed pre-commit and run locally for my code changes.
Thank you for your contribution! Once you have filled out this template, please ensure that you have assigned the appropriate reviewers and that all tests have passed.