I changed `MarsDiscreteEnv(MarsEnv)` to produce continuous actions instead of using `self.action_space = spaces.Discrete(3)`, which signals to the RL manager that the action space is discrete and thereby rules out continuous-action RL algorithms. I have also added comments to my changes.
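A minimal sketch of that kind of change, assuming a two-dimensional action (e.g. steering and throttle) bounded in [-1, 1] — the class name, action layout, bounds, and observation shape here are illustrative assumptions, not the actual environment's definitions:

```python
import numpy as np
import gym
from gym import spaces


class MarsContinuousEnv(gym.Env):
    """Illustrative env showing the discrete-to-continuous swap."""

    def __init__(self):
        # Before (discrete): self.action_space = spaces.Discrete(3)
        # After (continuous): a Box space of [steering, throttle],
        # each bounded in [-1.0, 1.0] (bounds are assumptions).
        self.action_space = spaces.Box(
            low=np.array([-1.0, -1.0], dtype=np.float32),
            high=np.array([1.0, 1.0], dtype=np.float32),
            dtype=np.float32,
        )
        # Placeholder observation space; the real env defines its own.
        self.observation_space = spaces.Box(
            low=-np.inf, high=np.inf, shape=(4,), dtype=np.float32
        )

    def step(self, action):
        # Continuous actions arrive as a float vector, not an index.
        steering, throttle = float(action[0]), float(action[1])
        obs = np.zeros(4, dtype=np.float32)  # placeholder observation
        reward, done, info = 0.0, False, {}
        return obs, reward, done, info

    def reset(self):
        return np.zeros(4, dtype=np.float32)
```

With a `Box` action space, continuous-control algorithms in RL Coach (e.g. DDPG-style agents) can sample and apply real-valued actions directly.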
In the `setup.py` file of the rl-agent repo, I updated the install requirements to newer versions of `gym` and `rl-coach`, which gives access to a larger library of RL algorithms from RL Coach.
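A sketch of what that dependency bump could look like in `setup.py` — the package name, version, and the specific version pins below are assumptions for illustration, not the versions actually used in the change:

```python
# setup.py (sketch): requiring newer releases so rl-coach's fuller
# algorithm catalogue is importable. Version pins are assumptions.
from setuptools import setup, find_packages

setup(
    name="rl-agent",          # assumed package name
    version="0.1.0",          # assumed version
    packages=find_packages(),
    install_requires=[
        "gym>=0.12.5",        # assumed: a newer gym release
        "rl-coach>=1.0.0",    # assumed: a newer rl-coach release
    ],
)
```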