coax-dev / coax

Modular framework for Reinforcement Learning in Python
https://coax.readthedocs.io
MIT License

Recurrent Experience Replay #34

Open smorad opened 1 year ago

smorad commented 1 year ago

Is your feature request related to a problem? Please describe.

It seems that the implemented replay buffers only operate over transitions, with no ability to operate over entire sequences. This prevents the use of recurrent policies for tackling POMDPs.

Describe the solution you'd like

A SequenceReplayBuffer that returns contiguous episodes instead of shuffled transitions.
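
For concreteness, here is a rough sketch of what such a buffer could look like. The class name comes from the request above, but the `add`/`sample` interface is purely illustrative and not an existing coax API:

```python
from collections import deque
import random


class SequenceReplayBuffer:
    """Sketch: store whole episodes and sample them as contiguous sequences."""

    def __init__(self, capacity=1000):
        self._episodes = deque(maxlen=capacity)  # each entry is one complete episode
        self._current = []                       # transitions of the episode in progress

    def add(self, transition, done):
        self._current.append(transition)
        if done:
            self._episodes.append(self._current)
            self._current = []

    def sample(self, batch_size):
        # return whole episodes rather than shuffled transitions
        k = min(batch_size, len(self._episodes))
        idx = random.sample(range(len(self._episodes)), k=k)
        return [self._episodes[i] for i in idx]
```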

Describe alternatives you've considered

Additional context

KristianHolsheimer commented 1 year ago

Thanks, that's a very good suggestion. It's definitely been on my mind.

I'm thinking of adding a reward tracer that does something similar to what the frame-stacking wrapper does. The idea is to stack entire transitions rather than only the observations. As long as we make sure to create only shallow copies (i.e. not copy the underlying numpy arrays), I think we can keep this fairly lightweight and simple.

What do you think?
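
For illustration, a very rough sketch of that idea; the name `TransitionStackingTracer` and its `add()` interface are made up for this example and are not part of coax:

```python
from collections import deque


class TransitionStackingTracer:
    """Sketch: keep a sliding window over the last `num_frames` transitions and
    emit them as one stacked sample, analogous to frame stacking but applied to
    whole transitions."""

    def __init__(self, num_frames):
        self.num_frames = num_frames
        self._window = deque(maxlen=num_frames)

    def add(self, transition, done=False):
        # store a reference only; the underlying numpy arrays are never copied
        self._window.append(transition)
        stacked = tuple(self._window) if len(self._window) == self.num_frames else None
        if done:
            self._window.clear()  # don't stack across episode boundaries
        return stacked
```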

frederikschubert commented 1 year ago

You can also achieve something similar via the record_extra_info option of the NStep reward tracer. It's a little beside the point, but it will give you the n observations, actions, etc. that follow a sampled observation.
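
For reference, the option is passed to the tracer at construction time. Exactly how the extra observations/actions are exposed on the popped TransitionBatch is best checked in the coax docs, so treat this snippet as a sketch (old-style gym API, as used in the coax examples):

```python
import coax
import gym

env = gym.make("CartPole-v1")

# n-step tracer with the extra-info recording mentioned above
tracer = coax.reward_tracing.NStep(n=5, gamma=0.99, record_extra_info=True)

s = env.reset()
for t in range(env.spec.max_episode_steps):
    a = env.action_space.sample()
    s_next, r, done, info = env.step(a)

    tracer.add(s, a, r, done)
    while tracer:
        transition_batch = tracer.pop()
        # with record_extra_info=True, the batch also carries the intermediate
        # observations/actions (see the coax docs for the exact fields)

    if done:
        break
    s = s_next
```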

smorad commented 1 year ago

I don't actually know enough about the architecture to provide good advice. I just found the design of coax really clean, and was considering porting some of my models to the framework.