isms opened this issue 6 years ago
Would it be useful to add an integration such as Codecov to continuously monitor whether a test actually adds coverage, or whether a new feature reduces it?
Good thought; we have used and enjoyed codecov.io.
Maybe we can start with just a coverage report in the test pipeline, and staff can look into setting up the full integration if time permits.
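A minimal sketch of what "just a report in the test pipeline" could look like, assuming pytest and the pytest-cov plugin are installed; the package name `api` is a placeholder, not the actual module of the prediction service:

```python
# Hypothetical pipeline entry point: run the test suite and print a coverage
# report in the build log, without any external Codecov integration.
import sys

import pytest

if __name__ == "__main__":
    # --cov-report=term-missing lists uncovered line numbers in the console,
    # so reviewers can eyeball coverage changes from the CI output alone.
    sys.exit(pytest.main(["--cov=api", "--cov-report=term-missing"]))
```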
Go through and add unit tests for the Django/DRF code and plain Python functions in the prediction service. (This is correctness testing, separate from ML evaluation; see the sketch below.)
Tests should not unduly slow down the build.
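As a starting point, here is a minimal sketch of a DRF correctness test. The URL name `prediction-list` and the expectation that an empty payload fails validation are assumptions for illustration, not the actual API of the prediction service:

```python
# Hypothetical unit test for a prediction endpoint using DRF's test client.
from django.urls import reverse
from rest_framework import status
from rest_framework.test import APITestCase


class PredictionEndpointTests(APITestCase):
    def test_rejects_payload_with_missing_fields(self):
        # An empty request body should fail serializer validation (400),
        # not raise an unhandled server error (500).
        url = reverse("prediction-list")  # assumed URL name
        response = self.client.post(url, data={}, format="json")
        self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
```

Tests like this avoid loading model artifacts or hitting external services, which helps keep the build fast.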
Points will be awarded continuously through the end of the competition -- this issue will not close.