HumanCompatibleAI / evaluating-rewards

Library to compare and evaluate reward functions
https://arxiv.org/abs/2006.13900
Apache License 2.0
61 stars · 7 forks

Interpretability in realistic environments #14

Open AdamGleave opened 4 years ago

AdamGleave commented 4 years ago

Apply sparsification techniques to higher-dimensional, more realistic environments than the simple evaluating_rewards/PointMassLine-v0 used so far.
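As a rough illustration of what "sparsification" can mean here, the sketch below fits an L1-regularized linear surrogate to a reward signal over a higher-dimensional observation space (eight features, as in LunarLander's state vector) and recovers which features the reward actually depends on. This is a hypothetical, self-contained example using plain NumPy and ISTA (proximal gradient descent); it does not use the evaluating_rewards API, and the environment dimensions and regularization strength are assumptions.

```python
import numpy as np

# Hypothetical sketch: explain a reward function over an 8-dim state
# (comparable to LunarLander) with a sparse linear surrogate.
rng = np.random.default_rng(0)
n_samples, n_features = 2000, 8
X = rng.normal(size=(n_samples, n_features))  # sampled observations

# Assumed ground truth: reward depends on only two of the eight features.
true_w = np.zeros(n_features)
true_w[0], true_w[3] = 1.5, -2.0
y = X @ true_w + 0.01 * rng.normal(size=n_samples)  # noisy reward signal

def lasso_ista(X, y, lam, n_iters=500):
    """L1-sparsified least squares via ISTA (proximal gradient)."""
    w = np.zeros(X.shape[1])
    step = 1.0 / np.linalg.norm(X, 2) ** 2  # 1 / Lipschitz constant of the gradient
    for _ in range(n_iters):
        grad = X.T @ (X @ w - y)             # gradient of 0.5 * ||Xw - y||^2
        w = w - step * grad
        # Soft-thresholding: the proximal operator of the L1 penalty.
        w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)
    return w

w = lasso_ista(X, y, lam=50.0)
support = np.flatnonzero(np.abs(w) > 0.1)
print(support)  # the surviving (non-zero) features
```

With the assumed setup, the irrelevant coefficients are driven exactly to zero, leaving only the two features the reward truly uses; the open question in this issue is whether such sparse structure is still recoverable for realistic, non-linear rewards.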

codecov[bot] commented 4 years ago

Codecov Report

Merging #14 into master will decrease coverage by 1.64%. The diff coverage is n/a.


@@            Coverage Diff             @@
##           master      #14      +/-   ##
==========================================
- Coverage   84.34%   82.70%   -1.65%     
==========================================
  Files          45       48       +3     
  Lines        2906     3035     +129     
==========================================
+ Hits         2451     2510      +59     
- Misses        455      525      +70     
Impacted Files                                   Coverage Δ
src/evaluating_rewards/envs/core.py              90.00%  <0.00%> (ø)
src/evaluating_rewards/envs/lunar_lander.py      100.00% <0.00%> (ø)
src/evaluating_rewards/interpretability.py       0.00%   <0.00%> (ø)

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data. Last update d1a4ae1...ad16879.