Closed by fernando-az-infertrade 3 years ago
Excellent work.
My only (minor) suggestions would be:
i) maybe split the negative/fail tests off from the success cases into their own test. Correct error handling is a lower-priority check and can fail more often (be 'brittle') during legitimate refactoring (e.g. where the error is caught earlier or raised with a different but still valid exception type), whereas a failing success case typically means you have made a serious break and the functional code is problematic.
ii) if you do split the success tests off from the fail cases, you could use pytest's `@pytest.mark.parametrize` decorator to replace the `for algorithm in algorithms` loops. The tests then pass or fail individually per algorithm rather than stopping at the first bad algorithm (which can hide other failures). It also allows the tests to be parallelized, although the setup cost of each individual test may mean overall performance is not improved. A rough sketch of both suggestions follows.
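For illustration only, a minimal sketch of the two suggestions combined; `ALGORITHMS`, the test names, and the assertions are placeholders standing in for the repository's actual algorithm registry and checks:

```python
import pytest

# Placeholder list standing in for the real collection of algorithms under test.
ALGORITHMS = ["algorithm_a", "algorithm_b", "algorithm_c"]


@pytest.mark.parametrize("algorithm", ALGORITHMS)
def test_algorithm_success(algorithm):
    # Each algorithm is reported as its own pass/fail instead of the whole
    # loop stopping at the first failing algorithm.
    assert isinstance(algorithm, str) and len(algorithm) > 0


@pytest.mark.parametrize("algorithm", ALGORITHMS)
def test_algorithm_rejects_bad_input(algorithm):
    # Failure/error-handling cases live in a separate test, so refactoring the
    # error path does not break the success tests.
    with pytest.raises(TypeError):
        len(None)  # placeholder for calling the algorithm with invalid input
```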
Done.
@fernando-az-infertrade - do you want to do a v0.0.33 release?
Sure.
The Api.return_representations method in infertrade/api.py (https://github.com/ta-oliver/infertrade/blob/main/infertrade/api.py#L204) exists but doesn't appear to be called anywhere in the whole repository. There isn't even a unit test for it. Not sure if it should be kept or deleted, but I've added a unit test for it.
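For reference, a minimal sketch of what such a unit test could check; this is not the actual test added here, and the real `return_representations` signature at that line may require arguments not shown:

```python
# Minimal sketch only: confirms the method is exposed and callable without
# assuming anything about its parameters or return value.
from infertrade.api import Api


def test_return_representations_is_exposed():
    assert hasattr(Api, "return_representations")
    assert callable(getattr(Api, "return_representations"))
```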