HumanCompatibleAI / evaluating-rewards

Library to compare and evaluate reward functions
https://arxiv.org/abs/2006.13900
Apache License 2.0

Improve CircleCI Parallelism #1

Closed AdamGleave closed 4 years ago

AdamGleave commented 4 years ago

Shard by test name.

This is a regression from before in that it no longer uses test times, but in practice it is a significant improvement because it splits at the granularity of test names rather than class names. Since most of our tests live in a single file (with no classes), this results in a 2-3x speedup.
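As an illustration of the idea (not code from this PR), splitting by test name can be done deterministically: each parallel node hashes every collected test name and keeps only the tests that land in its bucket. The `shard` helper below is hypothetical, purely to sketch the technique:

```python
import hashlib

def shard(test_names, node_index, node_total):
    """Deterministically assign tests to parallel nodes by hashing names.

    Every node runs the same computation over the full test list, so the
    nodes pick disjoint subsets without any coordination between them.
    """
    def bucket(name):
        # Stable hash (unlike Python's randomized hash()) so all nodes agree.
        return int(hashlib.md5(name.encode()).hexdigest(), 16) % node_total
    return [t for t in test_names if bucket(t) == node_index]
```

Because the assignment ignores how long each test takes, runtimes can be unbalanced across nodes; splitting at test (rather than class) granularity simply makes the pieces small enough that the imbalance matters less.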

We might be able to have CircleCI split on test names based on timing, but it would require changing pytest's JUnit output format. See https://github.com/ding2/ding2/pull/1205 for an example (not in Python).
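For context on why timing-based splitting would be the better scheme: given per-test durations (e.g. parsed from JUnit XML), tests can be packed onto nodes longest-first so node runtimes stay balanced. A minimal sketch of that greedy packing, not CircleCI's actual implementation:

```python
import heapq

def split_by_timings(tests, num_nodes):
    """Greedy longest-processing-time-first packing.

    tests: iterable of (test_name, duration_seconds) pairs.
    Returns {node_index: [test names]} with roughly balanced total duration.
    """
    # Min-heap of (current load, node index, assigned test names);
    # node index breaks ties so the lists are never compared.
    heap = [(0.0, i, []) for i in range(num_nodes)]
    heapq.heapify(heap)
    for name, secs in sorted(tests, key=lambda t: -t[1]):
        load, i, assigned = heapq.heappop(heap)
        assigned.append(name)
        heapq.heappush(heap, (load + secs, i, assigned))
    return {i: names for _, i, names in heap}
```

Name-based splitting, by contrast, can put all the slow tests on one node, which is the trade-off the PR accepts in exchange for finer granularity.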

codecov[bot] commented 4 years ago

Codecov Report

Merging #1 into master will not change coverage. The diff coverage is n/a.


@@           Coverage Diff           @@
##           master       #1   +/-   ##
=======================================
  Coverage   65.93%   65.93%           
=======================================
  Files          37       37           
  Lines        2287     2287           
=======================================
  Hits         1508     1508           
  Misses        779      779
Impacted Files            Coverage Δ
tests/test_synthetic.py   100% <ø> (ø) ⬆️

Last update 858ff62...efdb837.