HumanCompatibleAI / adversarial-policies

Find best-response to a fixed policy in multi-agent RL
MIT License

Integrating features #4

Closed: kantneel closed this pull request 5 years ago

kantneel commented 5 years ago

This PR integrates:

- Conditional annealers
- Adversarial epsilon-ball noise agents
- Using different RL algorithms
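
For orientation, the touched files in the Codecov table below (notably src/modelfree/scheduling.py and src/modelfree/shaping_wrappers.py) suggest where these features live. The following is a minimal, self-contained sketch of what a conditional annealer and an epsilon-ball noise agent could look like; the class names LinearAnnealer, ConditionalAnnealer, and EpsilonBallNoiseAgent and their signatures are illustrative assumptions, not the PR's actual API.

```python
import numpy as np


class LinearAnnealer:
    """Linearly interpolates a scalar from start_val to end_val over a fraction of training."""

    def __init__(self, start_val, end_val, end_frac):
        self.start_val = start_val
        self.end_val = end_val
        self.end_frac = end_frac

    def get_value(self, frac_done):
        # Clamp progress so the value stays at end_val once the annealing window ends.
        progress = min(frac_done / self.end_frac, 1.0) if self.end_frac > 0 else 1.0
        return self.start_val + progress * (self.end_val - self.start_val)


class ConditionalAnnealer:
    """Advances an annealing schedule only while a user-supplied condition holds."""

    def __init__(self, annealer, condition_fn):
        self.annealer = annealer
        self.condition_fn = condition_fn
        self.effective_frac = 0.0  # progress credited toward the schedule
        self.last_frac = 0.0       # last training fraction we were queried at

    def get_value(self, frac_done):
        if self.condition_fn():
            # Credit only the progress made since the last query.
            self.effective_frac += frac_done - self.last_frac
        self.last_frac = frac_done
        return self.annealer.get_value(self.effective_frac)


class EpsilonBallNoiseAgent:
    """Wraps a fixed policy and perturbs its actions with uniform noise in an L-infinity epsilon-ball."""

    def __init__(self, policy_fn, epsilon, action_low, action_high, rng=None):
        self.policy_fn = policy_fn
        self.epsilon = epsilon
        self.action_low = action_low
        self.action_high = action_high
        self.rng = rng or np.random.default_rng()

    def act(self, observation):
        action = np.asarray(self.policy_fn(observation), dtype=np.float64)
        noise = self.rng.uniform(-self.epsilon, self.epsilon, size=action.shape)
        # Keep the perturbed action inside the environment's action bounds.
        return np.clip(action + noise, self.action_low, self.action_high)
```

A ConditionalAnnealer like this could, for instance, hold a reward-shaping coefficient constant until the agent's win rate crosses a threshold, then resume annealing it toward zero.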

codecov[bot] commented 5 years ago

Codecov Report

Merging #4 into master will increase coverage by 1.58%. The diff coverage is 87.15%.


@@            Coverage Diff             @@
##           master       #4      +/-   ##
==========================================
+ Coverage   72.41%   73.99%   +1.58%     
==========================================
  Files          29       29              
  Lines        1863     2023     +160     
==========================================
+ Hits         1349     1497     +148     
- Misses        514      526      +12
Flag          Coverage Δ
#aprl         26.19% <4.12%> (-1.77%) ↓
#modelfree    56.59% <87.15%> (+3.08%) ↑

Impacted Files                       Coverage Δ
src/modelfree/logger.py              100% <ø> (ø) ↑
src/modelfree/train.py               92.01% <80%> (-1.07%) ↓
src/modelfree/scheduling.py          85.18% <82.27%> (-8.3%) ↓
src/modelfree/score_agent.py         90.9% <85.71%> (+0.13%) ↑
src/modelfree/shaping_wrappers.py    94.59% <90.56%> (-4.27%) ↓
src/aprl/envs/multi_agent.py         79.07% <95.91%> (+10.13%) ↑
src/modelfree/utils.py               94.3% <0%> (+0.81%) ↑


Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data. Last update 5f10cfd...170c1c7.