Closed: @geosciz closed this pull request 7 years ago.
Merging #30 into master will increase coverage by 3.38%. The diff coverage is 88.23%.
```diff
@@            Coverage Diff             @@
##           master      #30      +/-   ##
==========================================
+ Coverage   56.05%   59.44%   +3.38%
==========================================
  Files           3        3
  Lines         289      323      +34
  Branches       46       52       +6
==========================================
+ Hits          162      192      +30
- Misses        111      115       +4
  Partials       16       16
```
Impacted Files | Coverage Δ |
---|---|
`floater/utils.py` | 33.11% <88.23%> (+16.01%) :arrow_up: |
Powered by Codecov. Last update 67c9697...698e10b.
I repeat my overall comment here:
This is a good start, but there is a ways to go before this is ready to merge. You should add tests for `floater_convert`.
For the tests, you will probably want to add a new gzipped file with some sample .csv data. (It should be a very small subset; just a few KB of data will do.)
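A tiny gzipped CSV fixture like the one suggested above could be built with the standard library alone. This is only a sketch: the column names and values are made up, and a real test would write to a file under the test directory rather than an in-memory buffer.

```python
import csv
import gzip
import io

# Hypothetical sample of float-trajectory data: a header plus two rows.
rows = [
    ("npart", "time", "x", "y"),
    (1, 0.0, 10.5, -3.2),
    (2, 0.0, 11.1, -2.9),
]

# Write the rows as gzipped CSV (here to an in-memory buffer; a test
# fixture would use a small file shipped with the test suite).
buf = io.BytesIO()
with gzip.open(buf, "wt", newline="") as f:
    csv.writer(f).writerows(rows)

# Read it back the way a test might, to confirm the round trip.
buf.seek(0)
with gzip.open(buf, "rt", newline="") as f:
    data = list(csv.reader(f))

print(len(data))  # 3: header + two data rows
```

Because the fixture is only a few rows, the test stays fast and the repository stays small.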
@geosciz do you understand how to run the test suite locally?
`py.test -v floater`
Hi @rabernat,
I don't quite understand how to run the test suite locally. Previously, I just pushed new commits and ran the code on Travis CI to test it.
Is there any easy way to migrate this `csv_to_netcdf` branch from @geosciz's fork to the main @rabernat fork (without having to restart this review from scratch)? Having it on @rabernat's fork would make it easy for me to test it and make commits.
> I don't quite understand how to run the test suite locally. Previously, I just pushed new commits and ran the code on Travis CI to test it.
Did you try the command I typed above? Run `py.test -v floater` from the command line in the floater top directory.
> Is there any easy way to migrate this `csv_to_netcdf` branch from @geosciz's fork to the main @rabernat fork (without having to restart this review from scratch)? Having it on @rabernat's fork would make it easy for me to test it and make commits.
That's not necessary. You can check out any pull request locally following these instructions: https://help.github.com/articles/checking-out-pull-requests-locally/
Hi @rabernat, I just tried it. A test suite is running.
One of the tests is really slow because it has to write a huge file.
I recommend selecting just your test with the `-k` flag while you are developing it. Please use the pytest docs to understand how to do this.
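As a sketch of how `-k` selection works: pytest matches the expression against test names, so a descriptively named test can be run on its own. The module and test names below are hypothetical, and the body is a placeholder.

```python
# Hypothetical test module, e.g. floater/test/test_convert.py.
# With a name like this, `py.test -v -k csv_to_netcdf floater` would run
# only tests whose names contain the substring "csv_to_netcdf",
# skipping the slow tests in the rest of the suite.
def test_csv_to_netcdf_header():
    rows = [["npart", "time"], ["1", "0.0"]]  # placeholder sample data
    assert rows[0][0] == "npart"

# The assertion also runs when the function is invoked directly:
test_csv_to_netcdf_header()
```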
@rabernat Sure. Thanks for your help!
FYI, @geosciz, the most recent bug is exactly why we need tests for the code. Tests would catch that error.
Please move forward with adding tests to this PR. Otherwise we will continue to waste time on bugs.
And the tests passed! :)
@geosciz, can you explain what happens when floats disappear? We know that floats die over the course of the simulation, so that by day 90, not all of the nparts are there. This is fine for each individual file. But what happens when we re-open those files using `open_mfdataset`? Does it automatically fill in the missing nparts with NaN?
An alternative could be to "reindex" the datasets with the full `npart` range before calling `to_netcdf`.
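The reindexing idea can be sketched with pandas (xarray's `Dataset.reindex` behaves analogously along a dimension); the `npart` values and data below are made up for illustration.

```python
import pandas as pd

# Hypothetical day-90 output in which floats 2 and 4 have died, so only
# npart values 1, 3, and 5 appear in the file.
surviving = pd.Series([10.5, 11.1, 9.8], index=[1, 3, 5], name="x")
surviving.index.name = "npart"

# Reindex onto the full npart range; the dead floats come back as NaN,
# so every file ends up with the same npart axis before writing.
full_range = range(1, 6)
filled = surviving.reindex(full_range)

print(len(filled))                 # 5: the full npart range
print(int(filled.isna().sum()))    # 2: the missing floats, now NaN
```

Doing this before `to_netcdf` would guarantee a uniform `npart` coordinate across all output files, regardless of how the multi-file open handles missing indices.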
Hi @rabernat, when I look at the `npart` on day 90, its length is still 37136797. I used `np.isnan(npart)` to check the array, and it returned no NaNs.
Ok, I guess this is ready to go! Great work @geosciz!
Hi @rabernat and @nathanieltarshish,
Please take a look at this pull request for converting MITgcm output CSV files to NetCDF files. Feel free to give me feedback on it.
Best regards,
@geosciz