Thanks for pointing this out. I've never used the evaluate script (which came with the original torch_rl package), so I didn't adapt it to the changes I made to the agent.
I'll look into it over the weekend. If you need it urgently, it might work if you just take out all the references to `memories` and `recurrent` from the `torch_rl/utils/agent.py` file.
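
For reference, a minimal sketch of what that stripped-down, non-recurrent action-selection path might look like. This is an illustration only, not the repo's actual `agent.py`; the class name, `preprocess_obss` callable, and the model's return values are assumptions:

```python
import torch

class SimpleAgent:
    """Hypothetical non-recurrent agent: no `memories`, no `recurrent` flag."""

    def __init__(self, acmodel, preprocess_obss, device="cpu"):
        self.acmodel = acmodel.to(device)
        self.acmodel.eval()
        self.preprocess_obss = preprocess_obss
        self.device = device

    def get_action(self, obs):
        # Preprocess a single observation and run the actor-critic model
        # without passing or updating any recurrent memory.
        preprocessed = self.preprocess_obss([obs], device=self.device)
        with torch.no_grad():
            dist, _value = self.acmodel(preprocessed)
        return dist.sample().item()
```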
Ah, I see. I thought this script was used to generate the middle plot of Figure 2 in the paper (success rate vs. nr. rooms). The right plot (return vs. frames) of Figure 2 can be reproduced using the `plots.py` script, correct? Thanks a lot for your help & time!
I've fixed `evaluate.py` (which just runs one agent and tests how well it does) and `visualize.py` (which renders one agent); at least they're working for me now. Please let me know if they're not working for you.
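
As an aside, the core of such an evaluation script is typically a loop like the one below. This is a simplified sketch under the assumption of a standard gym-style `reset`/`step` interface and an agent exposing a `get_action(obs)` method, not a quote of the repo's actual `evaluate.py`:

```python
import numpy as np

def evaluate(agent, env, num_episodes=100, max_steps=1000):
    """Run `agent` in `env` for several episodes and report the mean return."""
    returns = []
    for _ in range(num_episodes):
        obs = env.reset()
        episode_return, done = 0.0, False
        for _ in range(max_steps):
            action = agent.get_action(obs)
            obs, reward, done, _info = env.step(action)
            episode_return += reward
            if done:
                break
        returns.append(episode_return)
    return np.mean(returns), np.std(returns)
```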
I also added `detailed_multiroom_results.py`, which I used to generate the success rate vs. nr. rooms plot. Thanks for pointing out that it was missing; it wasn't part of the repo yet because I did it after I had finished the internship during which the project was done.
And yes, `plots.py` produces the right plot.
I'll close the issue for now but please let me know if you have any questions or something's still not working!
(Be sure to also pass `--fullObs` to the visualize and evaluate scripts if the agent was trained on the fully observable environment. For `detailed_multiroom_results.py` there's a corresponding flag in the script.)
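
The flag matters because the model's input layer expects the same observation layout it was trained on. A sketch of what such a flag typically toggles, assuming gym-minigrid's `FullyObsWrapper` is used; the argument parsing and environment id here are guesses for illustration, only `--fullObs` itself comes from the scripts:

```python
import argparse
import gym
import gym_minigrid  # noqa: F401  (registers the MiniGrid environments with gym)
from gym_minigrid.wrappers import FullyObsWrapper

parser = argparse.ArgumentParser()
parser.add_argument("--env", default="MiniGrid-MultiRoom-N6-v0")
parser.add_argument("--fullObs", action="store_true",
                    help="use full-grid observations, matching training")
args = parser.parse_args()

env = gym.make(args.env)
if args.fullObs:
    # Evaluation must wrap the environment the same way training did,
    # otherwise the observation shapes no longer match the saved model.
    env = FullyObsWrapper(env)
```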
Hi there, I've run into a problem running the `evaluate.py` and `visualize.py` scripts (using models trained on the gridworld).
Maybe it is a misunderstanding on my side about how these scripts are meant to be used, but they don't seem to be compatible with the kind of ACModel (IBAC-SNI/torch_rl/model.py) saved after training, as it doesn't have the required attributes (e.g. `recurrent`) or methods (e.g. `forward(..)`).
Steps to reproduce:
1. Train a gridworld model as stated in the readme:
2. Run the evaluation script on the result folder.

Error message:
Thanks for your help!