HumanCompatibleAI / evaluating-rewards

Library to compare and evaluate reward functions
https://arxiv.org/abs/2006.13900
Apache License 2.0

Script to create double-blind version of source code #18

Closed. AdamGleave closed this 4 years ago.

AdamGleave commented 4 years ago

Required for many conference submissions.

Derivative of https://github.com/HumanCompatibleAI/adversarial-policies/blob/master/scripts/doubleblind.sh
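The linked doubleblind.sh is not reproduced here. As a rough illustration of what such a script does, a double-blind export typically copies the tracked files without Git history, redacts identifying strings, and packages the result for submission. Below is a minimal sketch, assuming a POSIX environment with GNU grep/sed; the output filename and the strings to redact ("HumanCompatibleAI", "AdamGleave", "ANONYMIZED") are placeholders, not taken from the actual script.

```bash
#!/usr/bin/env bash
# Hypothetical sketch of a double-blind export (not the actual doubleblind.sh):
# export the working tree without Git history, redact identifying strings,
# and package the result for anonymous conference submission.
set -euo pipefail

SRC="$(git rev-parse --show-toplevel)"          # root of the checkout
DEST="$(mktemp -d)/evaluating-rewards-anon"     # anonymized copy goes here
mkdir -p "${DEST}"

# Export tracked files only, dropping .git (and with it, author history).
git -C "${SRC}" archive --format=tar HEAD | tar -x -C "${DEST}"

# Redact author/organization strings in text files (placeholder patterns).
grep -rIl -e "HumanCompatibleAI" -e "AdamGleave" "${DEST}" \
  | xargs -r sed -i 's/HumanCompatibleAI\|AdamGleave/ANONYMIZED/g'

# Produce the archive to upload with the submission.
tar -czf evaluating-rewards-anon.tar.gz \
  -C "$(dirname "${DEST}")" "$(basename "${DEST}")"
echo "Wrote evaluating-rewards-anon.tar.gz"
```

The real script may differ, e.g. in which strings it redacts and which archive format it produces; those choices should follow the target venue's anonymization requirements.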

codecov[bot] commented 4 years ago

Codecov Report

Merging #18 into master will decrease coverage by 7.53%. The diff coverage is n/a.


```diff
@@            Coverage Diff            @@
##           master     #18      +/-   ##
=========================================
- Coverage   84.34%   76.80%   -7.54%
=========================================
  Files          45       45
  Lines        2906     2906
=========================================
- Hits         2451     2232     -219
- Misses        455      674     +219
```
| Impacted Files | Coverage Δ |
| --- | --- |
| src/evaluating_rewards/analysis/plot_pm_reward.py | 30.48% <0%> (-57.32%) :arrow_down: |
| ...luating_rewards/experiments/point_mass_analysis.py | 25% <0%> (-53.13%) :arrow_down: |
| src/evaluating_rewards/analysis/results.py | 23.07% <0%> (-46.16%) :arrow_down: |
| ...uating_rewards/analysis/plot_divergence_heatmap.py | 28% <0%> (-38%) :arrow_down: |
| src/evaluating_rewards/analysis/visualize.py | 68.62% <0%> (-13.24%) :arrow_down: |
| src/evaluating_rewards/envs/point_mass.py | 82.63% <0%> (-0.6%) :arrow_down: |

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data.
Powered by Codecov. Last update a3de44a...2bb67bc.