What is the purpose of this PR?
This PR closes issues #13, #12, and #10, and makes the inference pipeline more robust.
How did you implement your changes?
Added the URL of the CUDA-compiled version of PyTorch to pyproject.toml to address #12.
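One common way to do this is a direct wheel-URL dependency; the snippet below is a hypothetical pyproject.toml excerpt, since the PR does not show the actual URL or the Python/CUDA versions used:

```toml
# Hypothetical excerpt: pin torch to a CUDA wheel via a PEP 508 direct
# reference. The wheel URL, CUDA version (cu118), and Python tag (cp310)
# are illustrative assumptions, not taken from this PR.
[project]
dependencies = [
    "torch @ https://download.pytorch.org/whl/cu118/torch-2.1.0%2Bcu118-cp310-cp310-linux_x86_64.whl",
]
```

Alternatively, some setups add the PyTorch index as an extra package source instead of pinning a single wheel; which approach the PR took is not stated.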
Changed the test-time augmentation loop in utils.py to address #13.
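The PR does not show the new loop, but a typical test-time augmentation loop of this shape averages predictions over a small set of invertible augmentations; the following is a minimal sketch, assuming flip-based augmentations and a model that maps an HxW array to an HxW prediction:

```python
import numpy as np

def predict_tta(model, image):
    """Average predictions over flip-based test-time augmentations.

    `model` is any callable mapping an HxW array to an HxW prediction;
    the augmentation set (identity plus the two axis flips) is
    illustrative only, not the exact set used in utils.py.
    """
    preds = []
    for flip_axis in (None, 0, 1):
        aug = image if flip_axis is None else np.flip(image, axis=flip_axis)
        pred = model(aug)
        # Undo the augmentation so all predictions are aligned before averaging.
        if flip_axis is not None:
            pred = np.flip(pred, axis=flip_axis)
        preds.append(pred)
    return np.mean(preds, axis=0)
```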
Hopefully, I also addressed #14 by changing how the normalization is calculated. My guess is that some FoVs contain no foreground, which produces a NaN in the normalization value dict. I now exclude NaNs from the normalization value calculation, which should fix this.
To address #10, I added some .DS_Store files to the data generated in the tests, and added a sentence and a line of code to the inference notebook telling users to filter them out.
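The added line of code presumably looks something like this sketch (the function name is hypothetical; the point is to skip macOS's hidden .DS_Store files when listing input data):

```python
from pathlib import Path

def list_image_files(folder):
    """List input files, skipping .DS_Store and other hidden files
    that macOS Finder drops into data directories."""
    return sorted(
        p for p in Path(folder).iterdir()
        if p.is_file() and not p.name.startswith(".")
    )
```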
Switched the macOS CI runner from latest to 13, because the job previously threw errors: MPS (i.e. the Apple Silicon GPU backend) is not available on the CI runners.
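In a GitHub Actions workflow this is a one-line change to the runner image; the workflow file name and matrix layout below are assumptions, since the PR does not show the diff:

```yaml
# Hypothetical excerpt from .github/workflows/test.yml: pin the macOS job
# to an Intel runner image, since MPS is unavailable on hosted runners.
jobs:
  test:
    strategy:
      matrix:
        os: [ubuntu-latest, macos-13]  # was macos-latest
    runs-on: ${{ matrix.os }}
```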
Remaining issues
This PR probably does not fully fix #14, so that issue is left open.