Closed — sharabhshukla closed this issue 1 year ago
The short answer is that it's not possible, at least not easily, right now. However, if your environments share the same action and state space, you can create another agent, another MDP, and another Core, then take the previous Q-table weights and set them on the new agent. If the state/action space is different, you need to fill the Q-table with appropriate (custom) logic. Tabular algorithms don't support features; they just learn a table of values for every state.
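To illustrate the idea above, here is a minimal sketch of transferring a learned Q-table between two tabular agents. The array shapes and the state-mapping logic are purely illustrative assumptions, not the API of any specific library: assume each agent stores its Q-values in a NumPy array of shape `(n_states, n_actions)`.

```python
import numpy as np

n_states, n_actions = 10, 4

# "Trained" Q-table from the first environment (random values stand in
# for learned weights).
rng = np.random.default_rng(0)
old_q = rng.normal(size=(n_states, n_actions))

# Case 1: same state/action space -- copy the table directly into the
# new agent's Q-table.
new_q = old_q.copy()

# Case 2: different state space -- map old states to new states with
# custom logic (here, a hypothetical mapping that simply reuses the
# first n_new_states rows; replace with your own correspondence).
n_new_states = 8
mapped_q = np.zeros((n_new_states, n_actions))
for new_s in range(n_new_states):
    old_s = new_s  # your state-correspondence logic goes here
    mapped_q[new_s] = old_q[old_s]
```

The important point is that the transfer is just an array copy plus whatever mapping makes sense between the two environments; the agent itself has no notion of generalization.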
If you want to use features, you should use one of the continuous state-space algorithms, e.g. Sarsa Lambda Continuous, providing suitable features for your environment.
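For context, a feature-based Q-function looks roughly like the generic linear sketch below. The feature map `phi()` is a made-up example, and the update is a plain semi-gradient TD step, not the API of any particular library; in practice you would design features suited to your environment.

```python
import numpy as np

n_features, n_actions = 6, 3

def phi(state):
    """Hypothetical feature map: hand-crafted features of a scalar state."""
    s = float(state)
    return np.array([1.0, s, s**2, np.sin(s), np.cos(s), abs(s)])

# One weight vector per action; Q(s, a) = w[a] . phi(s)
w = np.zeros((n_actions, n_features))

def q_value(state, action):
    return w[action] @ phi(state)

# Semi-gradient TD(0)-style update for one transition (s, a, r, s').
alpha, gamma = 0.1, 0.99
s, a, r, s_next = 0.5, 1, 1.0, 0.7
td_target = r + gamma * max(q_value(s_next, b) for b in range(n_actions))
td_error = td_target - q_value(s, a)
w[a] += alpha * td_error * phi(s)
```

Because the weights apply to features rather than to individual states, an agent trained this way can generalize to states it has never visited, which is what makes transfer to a slightly different environment more natural than with a tabular method.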
Yeah, I took a somewhat similar approach. I'm happy with the results, but I need to think about whether to move these models into prod. Thanks for the reply, it is very helpful.
I closed this issue, as I had the answer I needed. Feel free to reopen it if you see a need.
I trained a Q-learning agent in one environment and want to use that same trained agent in another, slightly different environment. How can I do that?