Closed Mming11 closed 9 months ago
Please take a look at this file:
We updated the visualization script so that it can read RLlib checkpoints directly (without converting the weights to numpy first).
Thank you. I'll try it this afternoon.
When I run new_vis.py, I get an error. ToT
It seems that your model was trained with PyTorch, while my script is meant for loading TensorFlow checkpoints.
A workaround here is to replace the policy_function with the RLlib trainer.
The basic logic is:

```python
from ray.rllib.agents.ppo import PPOTrainer

...
trainer = PPOTrainer({some config})
trainer.restore(CHECKPOINT_FOLDER_PATH)
...
action = trainer.compute_single_action(obs)
o, r, d, i = env.step(action)
...
```
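To make the control flow above concrete, here is a self-contained sketch of the restore-and-rollout loop. `DummyTrainer` and `DummyEnv` are hypothetical stubs standing in for the real RLlib `PPOTrainer` and a gym environment, so the loop runs without RLlib installed; with RLlib you would use `PPOTrainer(...)`, `trainer.restore(...)`, and your actual env instead:

```python
# Hypothetical stubs: DummyTrainer mimics PPOTrainer.restore /
# compute_single_action, DummyEnv mimics a gym env with reset / step.
class DummyTrainer:
    def restore(self, checkpoint_path):
        # Real code: trainer.restore(CHECKPOINT_FOLDER_PATH)
        self.checkpoint = checkpoint_path

    def compute_single_action(self, obs):
        # Real code: the trained policy maps obs -> action.
        return 0


class DummyEnv:
    def reset(self):
        self.steps = 0
        return 0.0  # initial observation

    def step(self, action):
        self.steps += 1
        done = self.steps >= 5  # fake episode ends after 5 steps
        return 0.0, 1.0, done, {}  # obs, reward, done, info


def rollout(trainer, env, max_steps=100):
    """Run one episode with the restored policy; return the total reward."""
    obs = env.reset()
    total_reward = 0.0
    for _ in range(max_steps):
        action = trainer.compute_single_action(obs)
        obs, reward, done, info = env.step(action)
        total_reward += reward
        if done:
            break
    return total_reward


trainer = DummyTrainer()
trainer.restore("CHECKPOINT_FOLDER_PATH")  # placeholder path
env = DummyEnv()
print(rollout(trainer, env))  # -> 5.0
```

The same `rollout` function works unchanged once the stubs are swapped for the real trainer and environment; only the construction and restore lines differ.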
I admit that it is not necessary to use a custom function like my script to load the trained agent. We should stick to the RLlib trainer since it is more convenient.
Once I have my checkpoint folder, how can I visualize it? When I run [vis_from_checkpoint.py] directly, something goes wrong. I know the checkpoint needs to be processed first, but I don't know how. Sorry for the silly question... Hope you can give me detailed hints or an example. Thanks a lot. ToT