tkelestemur closed this pull request 3 years ago.
/test
Successfully created a job for commit 784997e:
I think this PR cannot be accepted as-is because of the interface incompatibility. (As far as I can see, only the tests are affected, or do users call run_evaluation_episodes directly?)
https://github.com/pfnet/pfrl/search?q=run_evaluation_episodes
I made a PR to adjust the tests on top of these commits at https://github.com/tkelestemur/pfrl/pull/1
@ummavi could you give us feedback on this proposal?
/test
Successfully created a job for commit 700fa5e:
Thank you very much to both of you for this very useful PR! I apologize for taking so long to get to it.
This looks good to go as soon as the linter error is taken care of.
@tkelestemur, could you please remove the trailing space in the following line, or apply black to pfrl/experiments/evaluator.py to fix it automatically? https://github.com/pfnet/pfrl/blob/700fa5ec53cc68e8ad16c82f385cb96d32ffabf6/pfrl/experiments/evaluator.py#L302
Hello @ummavi, I applied black, so the trailing space issue should be fixed now.
/test
Successfully created a job for commit 87c52fd:
Thanks! There was a minor style issue I addressed in a PR to your fork.
/test
Successfully created a job for commit 0a41d39:
@tkelestemur, I fixed an unrelated issue that blocked this in #148. Please update your fork from master when you can.
@ummavi thanks for the fix. I updated my fork.
/test
Successfully created a job for commit b840cd9:
Thank you very much for all the work you put into this!
Currently, eval_performance does not return statistics for episode lengths, which is an important metric for selecting RL hyperparameters such as gamma.
This PR returns the same statistics (mean, median, max, min, stdev) for episode lengths as it already does for rewards.
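For readers unfamiliar with the change, here is a minimal sketch of the kind of aggregation this PR describes: episode lengths are summarized with the same statistics (mean, median, max, min, stdev) already computed for returns. The helper and variable names below are illustrative only, not pfrl's actual implementation.

```python
# Illustrative sketch (not pfrl's actual code): summarize per-episode
# returns and episode lengths with the same set of statistics.
import statistics


def summarize(values):
    """Return (mean, median, stdev, max, min) for a list of numbers."""
    stdev = statistics.stdev(values) if len(values) >= 2 else 0.0
    return (
        statistics.mean(values),
        statistics.median(values),
        stdev,
        max(values),
        min(values),
    )


# Hypothetical per-episode results collected during evaluation.
returns = [12.0, 15.5, 9.0, 14.2]        # cumulative reward per episode
episode_lengths = [200, 180, 220, 150]   # steps per episode

print("return stats:", summarize(returns))
print("length stats:", summarize(episode_lengths))
```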