HumanCompatibleAI / evaluating-rewards

Library to compare and evaluate reward functions
https://arxiv.org/abs/2006.13900
Apache License 2.0

Update notebook, use separate version file #37

Closed: AdamGleave closed this 4 years ago

AdamGleave commented 4 years ago

Update the notebook to install the package from Git.

Remove the Colab link, since Colab currently doesn't support Python 3.7.

Use a separate version file to be more DRY (sketched below).
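
For context, a minimal sketch of the single-source version pattern the description refers to. The concrete file contents are assumptions; only the paths src/evaluating_rewards/version.py and src/evaluating_rewards/__init__.py are confirmed by the coverage report below.

# src/evaluating_rewards/version.py (hypothetical contents)
# Single place where the package version is declared.
VERSION = "0.0.0"  # placeholder; the real version string is not shown on this page

# src/evaluating_rewards/__init__.py (hypothetical excerpt)
# Re-export the constant so evaluating_rewards.__version__ works at runtime
# without duplicating the string.
from evaluating_rewards.version import VERSION as __version__

# setup.py (hypothetical excerpt) can read the same file, so the version is
# declared exactly once and packaging metadata stays in sync:
#   exec(open("src/evaluating_rewards/version.py").read())
#   setup(name="evaluating_rewards", version=VERSION, ...)

With the package installable directly from the repository, the notebook could then use something like pip install git+https://github.com/HumanCompatibleAI/evaluating-rewards.git (illustrative command based on the repository name above, not quoted from the PR).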

codecov[bot] commented 4 years ago

Codecov Report

Merging #37 into master will decrease coverage by 0.90%. The diff coverage is 100.00%.


@@            Coverage Diff             @@
##           master      #37      +/-   ##
==========================================
- Coverage   87.56%   86.66%   -0.91%     
==========================================
  Files          58       59       +1     
  Lines        4182     4183       +1     
==========================================
- Hits         3662     3625      -37     
- Misses        520      558      +38     
Impacted Files                                        Coverage Δ
src/evaluating_rewards/__init__.py                    100.00% <100.00%> (ø)
src/evaluating_rewards/version.py                     100.00% <100.00%> (ø)
src/evaluating_rewards/comparisons.py                  80.48% <0.00%> (-13.83%) ↓
tests/test_tabular.py                                  86.59% <0.00%> (-13.41%) ↓
src/evaluating_rewards/scripts/npec_comparison.py      75.34% <0.00%> (-4.11%) ↓
tests/test_rewards.py                                  98.96% <0.00%> (-1.04%) ↓
src/evaluating_rewards/rewards.py                      95.60% <0.00%> (-0.74%) ↓

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

Powered by Codecov. Last update bff3a80...c589207.