Closed: ksettaluri6 closed this issue 3 years ago
Hm, 0.6.3 is essentially deprecated. This may be a poor suggestion, but does it make sense to just nuke the repo and cp a new set of files over for 0.8.5?
Hi, I'm a bot from the Ray team :)
To help human contributors focus on more relevant issues, I will automatically add the stale label to issues that have had no activity for more than 4 months.
If there is no further activity within the next 14 days, the issue will be closed!
You can always ask for help on our discussion forum or Ray's public slack channel.
Hi again! This issue will be closed because there has been no further activity in the 14 days since the last message.
Please feel free to reopen or open a new issue if you'd still like it to be addressed.
Again, you can always ask for help on our discussion forum or Ray's public slack channel.
Thanks again for opening the issue!
I am trying to create a custom score metric that counts every time the agent reaches a positive sparse reward of 10.
I modified rllib/evaluation/episode.py and added a few functions (the sketch below shows the general shape). The reached/unreached count variables are counters that I want to reset every time a training iteration finishes.
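A minimal sketch of what is meant, assuming counters with hypothetical names like `reached_count` and `unreached_count`; this is illustrative, not the exact code added to episode.py:

```python
# Minimal sketch only: a stand-in for the counters added to
# rllib/evaluation/episode.py. Names like reached_count are hypothetical.
class SparseRewardCounter:
    SPARSE_REWARD = 10.0  # the positive sparse reward being counted

    def __init__(self):
        self.reached_count = 0
        self.unreached_count = 0

    def record(self, reward):
        # Count whether the agent hit the sparse reward of +10 or not.
        if reward >= self.SPARSE_REWARD:
            self.reached_count += 1
        else:
            self.unreached_count += 1

    def reset(self):
        # Intended to run every time a training iteration finishes.
        self.reached_count = 0
        self.unreached_count = 0
```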
I now want to tabulate this metric across all agents/episodes, and initially had something along the lines of the sketch below:
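An illustrative reconstruction of that first attempt (not the exact code), assuming the 0.6-era `on_train_result` signature in which `info` carries the agent/trainer and the per-iteration result dict:

```python
# Illustrative reconstruction of the attempted hook, not the original code.
# In 0.6-era RLlib, `info` for on_train_result holds the agent/trainer and the
# training-iteration result dict, but no episode object.
def on_train_result(info):
    result = info["result"]
    # Intended: pull reached_count off the episode, report it, then reset it,
    # but nothing like info["episode"] exists here:
    # result["custom_metrics"]["reached_count"] = episode.reached_count
    result.setdefault("custom_metrics", {})
```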
However, the on_train_result function does not have access to the episode class. Also, because I am on an older version of Ray (0.6.3), the on_postprocess_traj callback is not defined. I am unable to move to a newer version of Ray/RLlib due to version-control issues on the remote machine I am on.
My question is: where is the on_train_result function called, and is there a way I can modify it so that it gets access to the episode? Also, because I am running this on 8 CPUs, I find that there are 8 different agents: how can I access all of their episode data during training?
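For reference, a hedged sketch of the callbacks-based custom-metrics pattern this question is circling around. It assumes the 0.6-era `callbacks` config dict, `episode.custom_metrics`, `episode.last_reward_for()`, and `tune.function` are all available in this version (unverified against 0.6.3), and uses a placeholder env:

```python
import ray
from ray import tune


def on_episode_end(info):
    episode = info["episode"]
    # Assumes last_reward_for() returns the most recent reward for the default
    # agent; pass an agent id explicitly in a multi-agent setup.
    reached = 1 if episode.last_reward_for() >= 10 else 0
    episode.custom_metrics["reached"] = reached


def on_train_result(info):
    result = info["result"]
    # Custom metrics from all rollout workers are folded into the result dict,
    # e.g. as reached_mean / reached_min / reached_max.
    print(result.get("custom_metrics", {}))


if __name__ == "__main__":
    ray.init()
    tune.run_experiments({
        "sparse_reward_metric": {
            "run": "PPO",
            "env": "CartPole-v0",  # placeholder env, not the one from the question
            "config": {
                "num_workers": 8,
                "callbacks": {
                    "on_episode_end": tune.function(on_episode_end),
                    "on_train_result": tune.function(on_train_result),
                },
            },
        },
    })
```

Because each of the 8 rollout workers records the value per episode, RLlib can aggregate it into `result["custom_metrics"]` on the driver, so `on_train_result` never needs direct access to an episode object.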