Closed: ihnorton closed this pull request 6 years ago.
Merging #65 into master will increase coverage by 0.74%. The diff coverage is 100%.
```diff
@@            Coverage Diff             @@
##           master      #65      +/-   ##
==========================================
+ Coverage   86.64%   87.39%   +0.74%
==========================================
  Files           6        6
  Lines         352      357       +5
  Branches      113      114       +1
==========================================
+ Hits          305      312       +7
+ Misses         23       22       -1
+ Partials      24       23        -1
```
Impacted Files | Coverage Δ | |
---|---|---|
nrrd/writer.py | 87.17% <100%> (ø) | :arrow_up: |
nrrd/reader.py | 81.98% <100%> (+1.85%) | :arrow_up: |
Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data.
Last update caf8506...4183a59.
Also, there is an issue with Travis failing on Python 2.7; that needs to be resolved as well.
Addressed.
I would make these parameters part of the `read` function instead.
I made the compression level a parameter of `write` and added the docstring.
But the field error override is a special case that violates the NRRD spec, so it probably shouldn't be advertised.
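As used in the timing details below, the override in this PR is a module-level flag rather than a documented parameter:

```python
import nrrd

# Opt in to downgrading the duplicate-field error to a warning.
nrrd.reader._NRRD_ALLOW_DUPLICATE_FIELD = True
```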
Right now it overwrites and saves the last field given. Is this the best way to handle it?
Yes
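A minimal sketch of that behavior (not the PR's actual code; names here are illustrative): later occurrences of a field overwrite earlier ones, and the opt-in flag downgrades the error to a warning.

```python
import warnings

_NRRD_ALLOW_DUPLICATE_FIELD = False  # module-level opt-in, as in this PR

def _store_field(header, field, value):
    # Duplicate fields violate the NRRD spec, so error by default.
    if field in header:
        if not _NRRD_ALLOW_DUPLICATE_FIELD:
            raise ValueError('Duplicate header field: %r' % field)
        warnings.warn('Duplicate header field: %r' % field)
    # The last occurrence wins either way.
    header[field] = value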
Also, I was thinking it might be worthwhile to write a test of the compression for gzip and bzip2. My thought is to set the compression level to something other than 9 for both and then check that the file size matches an expected value.
Done. For now I commented out the assert about the file size difference for bz2, because for the binary ball data the output is the same size at all levels.
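A sketch of such a test, assuming the new `write` parameter is named `compression_level` (the name is not confirmed in this thread) and using a synthetic volume in place of real data:

```python
import os

import numpy as np
import nrrd

# A compressible synthetic volume; purely random data would compress to
# nearly the same size at every level and defeat the comparison.
data = np.tile(np.arange(256, dtype=np.uint8), 4096).reshape(64, 128, 128)

sizes = {}
for level in (1, 9):
    filename = 'test_gzip_level_%d.nrrd' % level
    nrrd.write(filename, data, {'encoding': 'gzip'}, compression_level=level)
    sizes[level] = os.path.getsize(filename)

# Level 9 should never produce a larger file than level 1; as noted above
# for bz2, a strict inequality can fail on data that compresses equally
# well at all levels.
assert sizes[9] <= sizes[1]
```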
LGTM 👍
I guess I wasn't fully thinking of the application for the duplicate field section. I agree that it should not be a function parameter in `read` or `read_header`.
The tests look good!
Thanks!
Allow overriding the duplicate-field error with a warning instead (duplicate fields occasionally occur with buggy writers).
Allow changing the compression level; this can significantly improve write performance at minimal space cost. As an illustrative example:
Timing test details:
```python
import nrrd
nrrd.reader._NRRD_ALLOW_DUPLICATE_FIELD = True
```

```python
img, hdr = nrrd.read("/path/to/dwi.nhdr")
%timeit -n1 -r1 nrrd.write("test.nrrd", img, hdr)
```
193 ms ± 0 ns per loop (mean ± std. dev. of 1 run, 1 loop each)

```python
hdr['encoding'] = 'gz'
%timeit -n1 -r1 nrrd.write("test_zl_9.nrrd", img, hdr)
```
20.2 s ± 0 ns per loop (mean ± std. dev. of 1 run, 1 loop each)

```python
nrrd.writer._ZLIB_LEVEL = -1
hdr['encoding'] = 'gz'
%timeit -n1 -r1 nrrd.write("test_zl_default.nrrd", img, hdr)
```
9.94 s ± 0 ns per loop (mean ± std. dev. of 1 run, 1 loop each)

```python
nrrd.writer._ZLIB_LEVEL = 1
hdr['encoding'] = 'gz'
%timeit -n1 -r1 nrrd.write("test_zl_1.nrrd", img, hdr)
```
2.68 s ± 0 ns per loop (mean ± std. dev. of 1 run, 1 loop each)

Output files below (test.nrrd is the original, and zl_default is roughly level 6 per the docs):
So, level 9 is ~10x slower than level 1, but saves only 1 MB. This is brain MRI data.
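For context, the same level-vs-speed trade-off can be reproduced with Python's standard zlib alone; this is a standalone sketch, independent of nrrd, with a synthetic buffer standing in for the MRI volume:

```python
import time
import zlib

import numpy as np

# Compressible stand-in for image data (16 MiB).
data = np.tile(np.arange(4096, dtype=np.uint16), 2048).tobytes()

for level in (1, 6, 9):
    start = time.perf_counter()
    compressed = zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    print('level %d: %.3f s -> %d bytes' % (level, elapsed, len(compressed)))
```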