MarcoMeter opened this issue 5 years ago
I noticed back in 1.2 that a deterministic policy can give different results on the same seed with different realtime_mode settings, so different scores are not surprising. If I remember correctly, advancing 1 or 2 steps with realtime_mode=True would often snap the agent back to its original position repeatedly, while realtime_mode=False worked fine. I made one of my agents stutter step to combat this :)
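The stutter-step trick mentioned above can be sketched as a small action wrapper that pads each real action with a short burst of no-op steps, so a slow policy doesn't fall behind the realtime simulation. `DummyEnv` and `NOOP_ACTION` are placeholders here, not the actual Obstacle Tower API:

```python
# Sketch of a "stutter step" wrapper. DummyEnv and NOOP_ACTION are
# stand-ins; substitute the real environment and its no-op action index.

NOOP_ACTION = 0  # assumed index of the "do nothing" action


class DummyEnv:
    """Stand-in env: counts sim steps, returns (obs, reward, done, info)."""

    def __init__(self):
        self.steps = 0

    def step(self, action):
        self.steps += 1
        return self.steps, 0.0, False, {}


class StutterStepWrapper:
    """After every real action, insert `repeats` no-op steps."""

    def __init__(self, env, repeats=2):
        self.env = env
        self.repeats = repeats

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        for _ in range(self.repeats):
            if done:
                break
            obs, r, done, info = self.env.step(NOOP_ACTION)
            reward += r
        return obs, reward, done, info


env = StutterStepWrapper(DummyEnv(), repeats=2)
obs, reward, done, info = env.step(5)
print(env.env.steps)  # 3 sim steps consumed per policy decision
```

Whether padding with no-ops actually helps depends on whether the snapping is caused by the policy stalling or by the mode itself, which this thread hasn't pinned down.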
I guess I'll implement a video recorder to observe the agent's performance during inference. I'm using a stochastic policy.
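A bare-bones recorder along those lines just buffers rendered frames per episode; the frame source and format are assumptions (in practice something like `env.render()`), and actually writing the buffer to a video file, e.g. with imageio, is left out:

```python
class FrameRecorder:
    """Buffers frames so each episode can be replayed or saved afterwards."""

    def __init__(self):
        self.episodes = []  # list of episodes, each a list of frames
        self.current = []

    def capture(self, frame):
        self.current.append(frame)

    def end_episode(self):
        self.episodes.append(self.current)
        self.current = []


rec = FrameRecorder()
for t in range(3):
    rec.capture(f"frame-{t}")  # in practice: a rendered RGB array
rec.end_episode()
print(len(rec.episodes[0]))  # 3
```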
Hi all,
Thanks for bringing this to our attention. There should be no differences between the two modes, but clearly that is not the case. We will look into this.
I noticed that when the communication between python and unity becomes slow (for example when the policy network runs slowly), it sometimes results in unexpected behavior, such as warping (the agent suddenly appears in a totally different position).
Has anyone managed to find a solution or workaround for this bug? I tried stutter stepping, but it seems to make things even worse: the walking animation plays, yet the agent keeps snapping back to the same place.
I have just noticed that in realtime mode doors open instantly, whereas with realtime mode disabled doors take multiple steps to actually open, often up to a second of in-game time.
Additionally, as mentioned by others, animations do not play properly when realtime mode is not enabled.
EDIT: workaround at the moment is totally ignoring realtime mode and rather using a custom window to handle interaction/visualisation.
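A custom window like the one mentioned in the edit above might look like this, assuming opencv-python is available. The pure channel-swap helper is shown separately because `cv2.imshow` expects BGR rather than RGB; the window function itself imports cv2 lazily:

```python
def rgb_to_bgr(frame):
    """Swap channel order on a nested-list RGB frame (cv2 expects BGR)."""
    return [[px[::-1] for px in row] for row in frame]


def show_frame(frame, wait_ms=1):
    """Display one observation in a custom window (requires opencv-python)."""
    import cv2  # imported lazily so rgb_to_bgr works without OpenCV installed

    cv2.imshow("agent-view", cv2.cvtColor(frame, cv2.COLOR_RGB2BGR))
    cv2.waitKey(wait_ms)


print(rgb_to_bgr([[(1, 2, 3)]]))  # [[(3, 2, 1)]]
```

For real observations you would pass the numpy frame straight to `show_frame`; the list-based helper is only there to illustrate the channel swap.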
I trained a model that gets a mean reward of 7 on seed 34 when realtime_mode is disabled. If I set realtime_mode to true to watch the agent play, the mean reward over multiple episodes drops to 1.7.
Did anybody else observe such a huge difference on version 1.3?
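One way to quantify the gap reported above is to run the same checkpoint under both settings and compare mean episode rewards. The environment and policy below are stand-ins; in practice you would construct the Obstacle Tower env once with realtime_mode=False and once with realtime_mode=True and call `evaluate` on each:

```python
def evaluate(env, policy, episodes=5):
    """Mean episode reward of `policy` on `env` over several episodes."""
    totals = []
    for _ in range(episodes):
        obs, done, total = env.reset(), False, 0.0
        while not done:
            obs, reward, done, _ = env.step(policy(obs))
            total += reward
        totals.append(total)
    return sum(totals) / len(totals)


class TwoStepEnv:
    """Toy stand-in env: two steps of reward 1.0, then done."""

    def reset(self):
        self.t = 0
        return 0

    def step(self, action):
        self.t += 1
        return self.t, 1.0, self.t >= 2, {}


mean_r = evaluate(TwoStepEnv(), policy=lambda obs: 0, episodes=3)
print(mean_r)  # 2.0
```

Running this on identical seeds in both modes would at least show whether the 7-vs-1.7 gap is reproducible independent of the visualization.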