Closed: ManuelZierl closed this issue 4 years ago
On StarCraft, QMIX from RLlib also seems to learn a random policy: even after training for many episodes, it still acts randomly. https://github.com/oxwhirl/smac/issues/42
Taking a look at this now ...
The exploration epsilon is never reduced, so the agent always acts randomly (epsilon stays at 1.0).
This PR fixes the issue. QMIX now also uses our exploration API (it wasn't before, which is why it broke when that API was introduced in 0.8.4). It defaults to type: EpsilonGreedy (same as DQN).
Also, a test case has been added to confirm basic learning capabilities on the TwoStepGame example.
https://github.com/ray-project/ray/pull/9527
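For anyone who wants to tune the schedule rather than rely on the defaults, here is a minimal sketch of overriding the exploration settings through the trainer config. The key names follow RLlib's EpsilonGreedy exploration class; the env name and the schedule values below are illustrative, not the PR's defaults.

```python
from ray import tune

# Sketch: anneal epsilon from fully random to mostly greedy over 10k timesteps.
# "TwoStepGame" assumes the grouped env has been registered under that name.
config = {
    "env": "TwoStepGame",
    "exploration_config": {
        "type": "EpsilonGreedy",     # same default exploration as DQN
        "initial_epsilon": 1.0,      # start fully random
        "final_epsilon": 0.02,       # end with 2% random actions
        "epsilon_timesteps": 10000,  # anneal over the first 10k timesteps
    },
}

tune.run("QMIX", config=config, stop={"timesteps_total": 50000})
```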
@ManuelZierl Thanks for filing this! The fix should be merged tomorrow or over the weekend.
@GoingMyWay
@sven1977 Great, I will try it and see whether it works on different StarCraft scenarios.
I've already posted this question on Stack Overflow, but since I didn't get an answer there I will repost it here (https://stackoverflow.com/questions/61523164/ray-rllib-qmix-doesnt-learn-anything).
I wanted to try out the QMIX implementation of the Ray/RLlib library, but there must be something wrong with how I'm using it, because it doesn't seem to learn anything. Since I'm new to Ray/RLlib, I started with the "TwoStepGame" example the library provides on its GitHub repo (https://github.com/ray-project/ray/blob/master/rllib/examples/twostep_game.py), trying to understand how to use it. Since this example was a little too complex for me to start with, I adjusted it to make an example that is as simple as possible. Problem: QMIX doesn't seem to learn, meaning the resulting reward pretty much matches the expected value of a random policy.
Let me explain the idea of my very simple experiment. We have 2 agents. Each agent can take 3 actions (Discrete(3)). If an agent takes action 0 it gets a reward of 0.5, otherwise 0. So this should be a very simple task, since the best policy is just to always take action 0. Here is my implementation:
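A minimal sketch of the setup described above (not the original script): it uses RLlib's MultiAgentEnv plus the agent grouping QMIX expects, roughly as of Ray 0.8.x. The class name SimpleTwoAgentEnv, the VDN mixer choice, and the config values are assumptions for illustration.

```python
# Minimal sketch (not the original code from the question).
# Assumes Ray ~0.8.x: MultiAgentEnv + with_agent_groups for QMIX's grouped agents.
from gym.spaces import Discrete, Tuple
from ray import tune
from ray.rllib.env.multi_agent_env import MultiAgentEnv


class SimpleTwoAgentEnv(MultiAgentEnv):
    """Two agents; each one earns +0.5 whenever it picks action 0, else 0."""

    def __init__(self, env_config=None):
        self.agents = ["agent_1", "agent_2"]
        self.action_space = Discrete(3)
        self.observation_space = Discrete(2)  # dummy, constant observation
        self.steps = 0

    def reset(self):
        self.steps = 0
        return {agent: 0 for agent in self.agents}

    def step(self, action_dict):
        self.steps += 1
        obs = {agent: 0 for agent in self.agents}
        rewards = {agent: 0.5 if action_dict[agent] == 0 else 0.0
                   for agent in self.agents}
        dones = {"__all__": self.steps >= 100}  # reset every 100 timesteps
        return obs, rewards, dones, {}


# QMIX expects the agents grouped into one "team" with Tuple obs/action spaces.
grouping = {"group_1": ["agent_1", "agent_2"]}
obs_space = Tuple([Discrete(2), Discrete(2)])
act_space = Tuple([Discrete(3), Discrete(3)])

tune.register_env(
    "simple_two_agent",
    lambda cfg: SimpleTwoAgentEnv(cfg).with_agent_groups(
        grouping, obs_space=obs_space, act_space=act_space))

tune.run(
    "QMIX",
    stop={"timesteps_total": 100000},
    # "vdn" mixer avoids needing a global env state; purely an illustrative choice.
    config={"env": "simple_two_agent", "mixer": "vdn"},
)
```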
and here is the output when I run it for 100,000 timesteps:
As you can see, the policy seems to be random, since the expected value is 1/3 per timestep and the resulting reward is 33.505 (because I reset the environment every 100 timesteps). My question: what am I not understanding? There must be something wrong with my configuration, or maybe with my understanding of how RLlib works. But since the best policy is very simple (just always take action 0), it seems to me like this algorithm cannot learn.
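For context on the 33.505 figure: under a uniformly random policy each agent picks action 0 with probability 1/3 and earns 0.5 for it, so (assuming the reported number is the summed reward of both agents over one 100-step episode) the expected value is about 33.3, essentially what was observed:

```python
# Back-of-the-envelope check of the random-policy baseline.
n_agents = 2
episode_len = 100          # env is reset every 100 timesteps
p_action_0 = 1.0 / 3.0     # uniform over Discrete(3)
reward_per_hit = 0.5       # reward for picking action 0

expected_return = n_agents * episode_len * p_action_0 * reward_per_hit
print(expected_return)     # 33.33..., close to the observed 33.505
```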