HumanCompatibleAI / evaluating-rewards

Library to compare and evaluate reward functions
https://arxiv.org/abs/2006.13900
Apache License 2.0

Use benchmark_environments test code #16

Closed. AdamGleave closed this pull request 4 years ago.

AdamGleave commented 4 years ago

Following https://github.com/HumanCompatibleAI/benchmark-environments/pull/6 and https://github.com/HumanCompatibleAI/imitation/pull/170, the environment smoke-test code has been migrated.

Tests will fail until the PRs above are merged.
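
For context, shared smoke tests of the kind being adopted here typically reset each registered environment, step it with random actions, and check observations and rewards against the declared spaces. The sketch below illustrates that pattern with plain Gym calls only; the environment ID is a placeholder, and it does not reproduce the actual helpers in benchmark-environments, whose module layout is not shown in this thread.

```python
import gym
import pytest

# Placeholder environment ID for illustration; the real tests enumerate
# the IDs registered by evaluating_rewards.envs.
ENV_IDS = ["evaluating_rewards/PointMassLine-v0"]


@pytest.mark.parametrize("env_id", ENV_IDS)
def test_env_smoke(env_id, num_steps=10):
    """Reset the environment and take random actions, checking basic invariants."""
    env = gym.make(env_id)
    try:
        obs = env.reset()
        assert env.observation_space.contains(obs)
        for _ in range(num_steps):
            action = env.action_space.sample()
            obs, reward, done, _info = env.step(action)
            assert env.observation_space.contains(obs)
            float(reward)  # reward should be a scalar
            if done:
                obs = env.reset()
    finally:
        env.close()
```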

codecov[bot] commented 4 years ago

Codecov Report

Merging #16 into master will increase coverage by 0.01%. The diff coverage is 100%.


```diff
@@            Coverage Diff             @@
##           master      #16      +/-   ##
==========================================
+ Coverage   84.53%   84.54%   +0.01%
==========================================
  Files          45       45
  Lines        2890     2893       +3
==========================================
+ Hits         2443     2446       +3
  Misses        447      447
```
| Impacted Files | Coverage Δ |
|---|---|
| `src/evaluating_rewards/envs/__init__.py` | 100% <100%> (ø) ⬆️ |
| `tests/test_envs.py` | 100% <100%> (ø) ⬆️ |
| `tests/common.py` | 100% <100%> (ø) ⬆️ |

Continue to review the full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data. Last update 59c2f25...0e6f6d6.