Closed sAz-G closed 6 months ago
So it looks like the enjoy code expects the weight tensors to be named actor_encoder.neighbor_encoder.neighbor_mlp.0.weight (notice the neighbor_mlp part), while the model that you're trying to evaluate uses a different name there.
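A quick way to spot this kind of mismatch is to diff the parameter names the code expects against the names stored in the checkpoint. The sketch below uses plain dicts standing in for PyTorch state_dicts (with a real checkpoint you would get them from torch.load(...) and model.state_dict(); the exact checkpoint layout depends on the Sample Factory version), and the embedding_mlp name is purely hypothetical:

```python
# Sketch: diff the parameter names the current code expects against the
# names found in a saved checkpoint. Plain dicts stand in for
# torch state_dicts; the "embedding_mlp" key below is a hypothetical
# example of a renamed submodule, not the actual name in the repo.

expected_keys = {  # keys the enjoy code constructs
    "actor_encoder.neighbor_encoder.neighbor_mlp.0.weight": None,
    "actor_encoder.neighbor_encoder.neighbor_mlp.0.bias": None,
}
checkpoint_keys = {  # keys found in the saved model (hypothetical)
    "actor_encoder.neighbor_encoder.embedding_mlp.0.weight": None,
    "actor_encoder.neighbor_encoder.embedding_mlp.0.bias": None,
}

missing = sorted(set(expected_keys) - set(checkpoint_keys))
unexpected = sorted(set(checkpoint_keys) - set(expected_keys))
print("missing from checkpoint:", missing)
print("unexpected in checkpoint:", unexpected)
```

If both lists are empty, the checkpoint matches the model definition; otherwise the names that appear only on one side point at the renamed (or resized) modules.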
Could it be that the model was produced by a different version of the code? Or some variables got renamed?
Also, I think this question is better asked on https://github.com/Zhehui-Huang/quad-swarm-rl/ or we can also try to summon @Zhehui-Huang here :)
This error means the wrong model was loaded. You can use debug mode to check whether the model is being loaded from the correct path. Also, please make sure the code used for training is the same as the code used by the enjoy script.
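To check which checkpoint file would actually be picked up, you can list the checkpoint directory for the experiment yourself. This is a self-contained sketch: it builds a temporary fake train_dir, and the experiment/checkpoint_p0/*.pth layout is an assumption based on Sample Factory's usual convention, which may differ in your version:

```python
# Sketch: verify which checkpoint file would be loaded for an experiment.
# A temporary fake train_dir is created so the example is runnable; the
# directory layout (experiment/checkpoint_p0/checkpoint_*.pth) is an
# assumed Sample Factory convention, not a guaranteed one.
import tempfile
from pathlib import Path

train_dir = Path(tempfile.mkdtemp())
ckpt_dir = train_dir / "mean_embed_16_8" / "checkpoint_p0"
ckpt_dir.mkdir(parents=True)
for step in (1000, 5000, 2000):  # dummy checkpoints at a few steps
    (ckpt_dir / f"checkpoint_{step:09d}.pth").touch()

# Zero-padded step numbers sort lexicographically, so max() gives the
# most recent checkpoint - the one the enjoy script would load.
latest = max(ckpt_dir.glob("checkpoint_*.pth"))
print("would load:", latest.name)
```

If the file printed here is not the one you expect (for example, it predates the config change), that would explain the key mismatch.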
I could not figure out why this error happened. Maybe I changed the configuration by mistake after the training started (I was running multiple training sessions). In any case, I ran several more training sessions afterwards and was able to load the resulting models in the simulation.
Thank you for the update!
When I run the command

python -m swarm_rl.enjoy --algo=APPO --env=quadrotor_multi --replay_buffer_sample_prob=0 --quads_use_numba=False --train_dir=/home/saz/quad-swarm-rl/train_dir --experiment=mean_embed_16_8 --quads_view_mode side --quads_render=True

in quadswarms, I get the following error.

For the training I changed the size of the hidden layers to 16 for the self encoder and 8 for the neighbor encoder, following the mean-embedding model proposed in the paper "Decentralized Control of Quadrotor Swarms with End-to-end Deep Reinforcement Learning".
The command for running the training is a modified version of train_local.sh.
How can I avoid the error?
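If the only difference turns out to be a renamed submodule, one possible workaround is to remap the keys in the saved state_dict before loading it (the cleaner fix is still to retrain or check out the matching code version). The prefixes below are hypothetical placeholders; adapt them to the actual names in your traceback:

```python
# Sketch: rename mismatched keys in a saved state_dict. The old/new
# prefixes are hypothetical examples; with a real checkpoint you would
# load the dict via torch.load(path), remap, and save it back.
old_prefix = "actor_encoder.neighbor_encoder.embedding_mlp."
new_prefix = "actor_encoder.neighbor_encoder.neighbor_mlp."

state_dict = {  # stand-in for the checkpoint's state_dict
    "actor_encoder.neighbor_encoder.embedding_mlp.0.weight": "W",
    "actor_encoder.neighbor_encoder.embedding_mlp.0.bias": "b",
}

remapped = {
    (new_prefix + k[len(old_prefix):]) if k.startswith(old_prefix) else k: v
    for k, v in state_dict.items()
}
print(sorted(remapped))
```

Note that renaming only helps when the tensor shapes already match; if the hidden sizes differ (e.g. 16/8 vs. the defaults), the checkpoint cannot be loaded into the differently sized model at all.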