DY-Z opened this issue 2 years ago (status: Open)
I'd like to train DMC agents against random agents, i.e., have DMC agents and random agents play simultaneously during training. Is that feasible with RLCard? If so, does it require many code modifications?

@DY-Z Yes, it is achievable. The current implementation of DMC does not explicitly support this, but it only takes a small change. The simplest modification I have in mind is here: https://github.com/datamllab/rlcard/blob/master/rlcard/agents/dmc_agent/utils.py#L86
Instead of feeding the environment all DMC agents, feed it one DMC agent and fill the remaining seats with random agents. The DMC agent will then be trained exactly as if it were playing against random agents. Hope this helps.
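To illustrate the suggested change, here is a minimal, self-contained sketch of the pattern. The `RandomAgent` and `DMCAgentStub` classes, the state dict, and the `play_episode` loop below are toy stand-ins written for this example, not RLCard's actual classes; the point is only to show one learning agent sharing the table with non-learning random agents, which is what the modification to `dmc_agent/utils.py` would accomplish:

```python
import random

class RandomAgent:
    """Picks a uniformly random legal action and never learns
    (same role as rlcard.agents.RandomAgent)."""
    def __init__(self, num_actions):
        self.num_actions = num_actions

    def step(self, state):
        return random.choice(state["legal_actions"])

class DMCAgentStub:
    """Toy stand-in for a DMC actor: acts greedily on a value table
    and records transitions that a learner process could train on."""
    def __init__(self, num_actions):
        self.values = [0.0] * num_actions
        self.buffer = []  # transitions collected for training

    def step(self, state):
        legal = state["legal_actions"]
        action = max(legal, key=lambda a: self.values[a])
        self.buffer.append((state["obs"], action))
        return action

def play_episode(agents, num_actions=4, length=8):
    """Simplified turn-taking loop: each seat acts in turn on a dummy
    state. Only the DMC seat accumulates training data."""
    for t in range(length):
        agent = agents[t % len(agents)]
        state = {"obs": t, "legal_actions": list(range(num_actions))}
        agent.step(state)

# One learning agent in seat 0, random agents in the other seats.
# The change suggested for dmc_agent/utils.py is analogous: build a
# mixed agent list like this instead of a list of all-DMC actors.
num_actions = 4
dmc = DMCAgentStub(num_actions)
agents = [dmc, RandomAgent(num_actions), RandomAgent(num_actions)]
play_episode(agents)
print(len(dmc.buffer))  # only the DMC seat collected transitions
```

For RLCard's standard (non-DMC) training loops, the analogous mixed setup is done with `env.set_agents([...])` and `rlcard.agents.RandomAgent(num_actions=env.num_actions)`; in the DMC trainer the list of acting agents is instead constructed inside `dmc_agent/utils.py`, which is why that file is the place to change.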