Closed: rusu24edward closed this issue 3 years ago
Seems like there was an API change in the mlagents package: set_action_for_agent now takes an ML-Agents ActionTuple object instead of a plain action (np.array). This is different from the older mlagents version I used when I wrote the example script.
Let me see whether I can fix this on the RLlib side.
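For reference, here is a minimal sketch of what the change looks like on the caller's side, assuming a continuous-action behavior. The helper name send_action and the env, behavior_name, and agent_id parameters are placeholders for illustration, not RLlib's actual patch:

```python
import numpy as np
from mlagents_envs.base_env import ActionTuple

def send_action(env, behavior_name, agent_id, action: np.ndarray) -> None:
    """Send a single agent's action under the new (mlagents >= 0.24) API.

    The old API (mlagents < 0.24) accepted the raw array directly:
        env.set_action_for_agent(behavior_name, agent_id, action)

    The new API expects an ActionTuple. Here `action` is assumed to be a
    continuous action of shape (1, action_size); discrete branches would go
    into the `discrete` field of the ActionTuple instead.
    """
    action_tuple = ActionTuple(continuous=action)
    env.set_action_for_agent(behavior_name, agent_id, action_tuple)
```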
Yeah, this was an API change on Unity's end. I can reproduce it now and will provide a fix for RLlib. Thanks for reporting this, @rusu24edward!
Ah, great! Thanks for picking this up so quickly, Sven!
Should be merged today or very early next week.
What is the problem?
I am attempting to follow the local Unity example. I have followed the instructions, but I receive an error when I attempt to train with
python3 unity3d_env_local.py --env SoccerStrikersVsGoalie
and press the play button in my Unity editor.

Ray version and other system information (Python version, TensorFlow version, OS):
Python 3.6.9
RLlib 1.2.0
mlagents 0.24.0
Unity 2018.4.32f1 (this is the required version to open the ml-agents example projects)
tf 2.4.1
Ubuntu 18.04
Reproduction (REQUIRED)
To reproduce, follow the steps in the Unity example.
For reference, here's the full stacktrace: