JuliaPOMDP / DeepQLearning.jl

Implementation of the Deep Q-learning algorithm to solve MDPs

Support for AbstractEnvironment #34


MaximeBouton commented 4 years ago

This solver uses some functions that are broader than the minimal interface defined in RLInterface.jl, and it relies on internal fields such as env.problem in many places. Ideally, the solver should support an RL environment defined purely through RLInterface.jl, without necessarily having an MDP or POMDP object associated with it.
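For context, an environment defined only through RLInterface.jl might look roughly like the sketch below. The required function set and exact signatures are assumptions here (reset!, step!, actions), and GridEnv is a made-up example type:

```julia
using RLInterface
import RLInterface: reset!, step!, actions

# Hypothetical environment with no MDP/POMDP object behind it.
mutable struct GridEnv <: AbstractEnvironment
    state::Int
end

# reset! returns the initial observation
function reset!(env::GridEnv)
    env.state = 1
    return Float64[env.state]
end

# step! returns (observation, reward, done, info)
function step!(env::GridEnv, a::Int)
    env.state += a == 1 ? 1 : -1
    obs = Float64[env.state]
    rew = env.state >= 5 ? 1.0 : 0.0
    done = env.state <= 0 || env.state >= 5
    return obs, rew, done, nothing
end

actions(env::GridEnv) = [1, 2]
```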

zsunberg commented 4 years ago

Yes, this is definitely important. In my class, more students had success with this package than any other, but this made it a little confusing to use.

MaximeBouton commented 4 years ago

Right now it is really designed to work with POMDPs.jl. Any AbstractEnvironment could technically be wrapped as an MDP using the generative interface. I believe initialstate, gen, and actions are all that's needed.
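As a rough illustration of that idea, here is a minimal sketch of a generative MDP wrapper. The type name EnvWrapperMDP and the dynamics are made up, and it assumes a recent POMDPs.jl where initialstate returns a distribution and gen returns a (sp, r) NamedTuple; a discount factor is also included since most solvers expect one:

```julia
using POMDPs
using POMDPModelTools   # for Deterministic
using Random

# Hypothetical MDP wrapper around some environment dynamics.
struct EnvWrapperMDP <: MDP{Vector{Float64}, Int} end

# Initial state distribution
POMDPs.initialstate(m::EnvWrapperMDP) = Deterministic(zeros(2))

# Generative model: sample the next state and reward
function POMDPs.gen(m::EnvWrapperMDP, s, a, rng::AbstractRNG)
    sp = s .+ 0.1 .* randn(rng, length(s))   # placeholder dynamics
    r = -sum(abs2, sp)                       # placeholder reward
    return (sp=sp, r=r)
end

# Discrete action space (as needed for DQN)
POMDPs.actions(m::EnvWrapperMDP) = 1:4

# Most solvers also expect a discount factor
POMDPs.discount(m::EnvWrapperMDP) = 0.99
```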