HumanCompatibleAI / evaluating-rewards

Library to compare and evaluate reward functions
https://arxiv.org/abs/2006.13900
Apache License 2.0

Convert environments to fixed horizon #28

Closed · AdamGleave closed this 4 years ago
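For context on the change itself: below is a minimal sketch, assuming a Gym-style API, of one common way to convert a variable-horizon environment to a fixed horizon by padding finished episodes with an absorbing terminal state. The wrapper name and details are illustrative, not necessarily what this PR implements.

```python
import gym


class FixedHorizonWrapper(gym.Wrapper):
    """Pads episodes out to a fixed horizon (illustrative sketch only).

    Once the wrapped env reports done, we keep returning the terminal
    observation with zero reward until `horizon` steps have elapsed, so
    every episode has exactly the same length.
    """

    def __init__(self, env, horizon):
        super().__init__(env)
        self.horizon = horizon
        self._t = 0
        self._done = False
        self._last_obs = None

    def reset(self, **kwargs):
        self._t = 0
        self._done = False
        self._last_obs = self.env.reset(**kwargs)
        return self._last_obs

    def step(self, action):
        self._t += 1
        if not self._done:
            obs, rew, self._done, info = self.env.step(action)
            self._last_obs = obs
        else:
            # Absorbing state: repeat the terminal observation, zero reward.
            obs, rew, info = self._last_obs, 0.0, {}
        # The episode ends exactly at the fixed horizon, never earlier.
        return obs, rew, self._t >= self.horizon, info
```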

AdamGleave commented 4 years ago
codecov[bot] commented 4 years ago

Codecov Report

Merging #28 into master will decrease coverage by 0.02%. The diff coverage is 76.92%.

Impacted file tree graph

@@            Coverage Diff             @@
##           master      #28      +/-   ##
==========================================
- Coverage   87.25%   87.23%   -0.02%     
==========================================
  Files          54       54              
  Lines        3608     3595      -13     
==========================================
- Hits         3148     3136      -12     
+ Misses        460      459       -1     
| Impacted Files | Coverage Δ |
|---|---|
| ..._rewards/analysis/dissimilarity_heatmaps/config.py | 66.23% <0.00%> (ø) |
| src/evaluating_rewards/experiments/env_rewards.py | 0.00% <ø> (ø) |
| tests/test_rewards.py | 100.00% <ø> (ø) |
| src/evaluating_rewards/envs/point_mass.py | 83.90% <75.00%> (-0.10%) ↓ |
| src/evaluating_rewards/envs/__init__.py | 100.00% <100.00%> (ø) |
| src/evaluating_rewards/envs/mujoco.py | 98.16% <100.00%> (ø) |
| ...ewards/analysis/dissimilarity_heatmaps/heatmaps.py | 90.00% <0.00%> (+1.25%) ↑ |

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data. Powered by Codecov. Last update 41bd8b8...c972465.

AdamGleave commented 4 years ago

Tests are failing because we're pointing at benchmark-environments master, which does not include my early registration; they pass on my local machine.
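(For reference, registering an environment with Gym so that it gets a fixed time limit looks roughly like the sketch below; the id, entry point, and horizon are hypothetical, and the actual registration lives in benchmark-environments.)

```python
from gym.envs.registration import register

# Hypothetical registration: id, entry_point, and horizon are illustrative.
register(
    id="evaluating_rewards/PointMassLine-v0",
    entry_point="evaluating_rewards.envs.point_mass:PointMassEnv",
    # Gym's TimeLimit wrapper caps episodes at this length; the horizon is
    # only truly fixed if the env itself never terminates early.
    max_episode_steps=100,
)
```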

When https://github.com/HumanCompatibleAI/benchmark-environments/pull/12 gets merged, update requirements.txt to point at the new SHA and re-run CI. It may make sense to wait for the other PRs to be merged too.
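(For reference, pinning a Git dependency to a specific commit in requirements.txt uses pip's VCS syntax; the SHA below is a placeholder for whatever commit exists after the merge, and the egg name is an assumption.)

```
# Placeholder pin: replace <merged-sha> with the actual post-merge commit.
git+https://github.com/HumanCompatibleAI/benchmark-environments@<merged-sha>#egg=benchmark-environments
```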