I'll check that soon
Thank you -- needless to say, there is no rush since the work-around (manually setting the chunksize) is so straightforward. Jamie
Hi @JamiePringle, we did considerable work on the chunking for parcels 2.1.5 (which is the current head and also available via conda-forge). My first advice is therefore to pull the new version and re-run your experiments with that, please.
A closer look at your specific example tells me you want to load the data into another package, TRACMASS. If you do all the data and memory tracking there, you can set field_chunksize=False - then the whole 3D field is loaded in one go and no chunking is done. Alternatively, you could access each individual 2D layer (i.e. with a depth chunksize of 1). In that case there is also no chunking, because chunking/deferred loading only applies to field data with more than two dimensions (hence: either a depth dimension larger than 1, or use of the temporal dimension in parcels).
Your specific setup had not been considered until now. A work-around is to treat this as a 4D field: define time, depth, lat and lon in your field dimensions, and then set field_chunksize=(1, 1, 3059, 4320).
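A rough sketch of those two configurations, for concreteness (the file names, variable names and dimension names below are placeholders, not something I know from your setup):

```python
from parcels import FieldSet

# Placeholder file/variable/dimension mappings for a Mercator-style setup;
# substitute your actual file names and NetCDF variable names.
filenames = {'U': 'mercator_U_*.nc', 'V': 'mercator_V_*.nc'}
variables = {'U': 'vozocrtx', 'V': 'vomecrty'}
dimensions = {'time': 'time_counter', 'depth': 'depthu',
              'lat': 'gphif', 'lon': 'glamf'}

# Option 1: no chunking/deferred loading at all - each field is read in one go.
fieldset_nochunk = FieldSet.from_nemo(filenames, variables, dimensions,
                                      field_chunksize=False)

# Option 2: treat the data as a 4D (time, depth, lat, lon) field and chunk it
# explicitly to match the on-disk chunking of the files.
fieldset_4d = FieldSet.from_nemo(filenames, variables, dimensions,
                                 field_chunksize=(1, 1, 3059, 4320))
```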
It would be good to try out those configurations with the new version, parcels 2.1.5. Please get back to us with your feedback then.
Cheers, Christian
Dear Christian
We have been using the work-around you mentioned since we reported the bug; explicitly setting the chunk size has worked well.
Unfortunately, it will take me a little while to test the new build. Because of Coronavirus, I dispersed my lab computers to my graduate students, and so the computer with the model data is sheltering in place with my graduate student who is doing the particle tracking for his thesis (for now in Tracmass, the FORTRAN version). In a week or so, I should be able to walk a hard drive over to his place and get the data, and then I will test.
Thank you for your efforts! Jamie
Dear @JamiePringle, have you by now had the chance to fetch the hard drive and test the new version? Otherwise we will close the issue for now and await your report later. Just be aware to use the new version (2.1.5 or later) when re-running your local simulations. Cheers, Christian
Dear Christian -- this had slipped my mind. I will re-test this afternoon or tomorrow.
Jamie
I checked this in 2.2.0 and the issue is no longer present. Thanks!
Closed and thank you!
Dear Erik, Philippe and others --
I have found a bug, I think, in the chunking code when using field_chunksize='auto' in FieldSet.from_nemo() for netCDF files that already have certain chunking patterns defined. I have been looking at OceanParcels version 2.1.4.
I am reading a Mercator 1/12 degree global model run. The netCDF files have been chunked with chunksize=(1, 3059, 4320); the last two dimensions are the same size as the grid, which was done to optimize reading them into TRACMASS. When I try to load this data into a FieldSet with field_chunksize='auto' (or with nothing, since this is the default), the code dies in the chunk_data() method of Field. I have traced the problem down to the chunking code in the data_access() method of NetcdfFileBuffer().
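As a sanity check on the on-disk chunking, a snippet along these lines (with a placeholder filename) prints the chunk sizes per variable using the netCDF4-python package:

```python
import netCDF4

# Placeholder filename; substitute one of the Mercator files in question.
with netCDF4.Dataset('mercator_U.nc') as nc:
    for name, var in nc.variables.items():
        # chunking() returns the per-dimension chunk sizes, or 'contiguous'
        print(name, var.shape, var.chunking())
```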
In the data_access() method, there is this code:

    if self.field_chunksize == 'auto' and data.shape[-2:] == data.chunksize[-2:] and not self.chunking_finalized:
        self.chunking_finalized = True
My runs take this code path, since the last two dimensions of the chunking match the dimensions of the data, but the first dimension of the chunking does not match the first dimension of the data. If I explicitly specify a field_chunksize (whether the netCDF file's own chunk size or any other value), the code does not take this path and particles are successfully tracked.
I have not made a pull request, because my attempts to fix this have broken other things, and there is an easy work-around of specifying a chunksize manually. (field_chunksize=False also works, but is sub-optimal for all the reasons you explain in the release notes :-).)
My apologies for not putting in a pull request, but I find the logic of the chunking code somewhat opaque after a day's inspection. I hope finding this bug was useful.
If you download the files and the code at https://unh.box.com/s/1ohb10r3b2ejzbv11fb2k1mg4ep2fvpv, it will recreate the error; if you uncomment the FieldSet.from_nemo() call with the explicit chunksize, it should work.
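For anyone else hitting this, the explicit-chunksize work-around is a call of roughly this form (the file/variable mapping here is a placeholder, not my actual script; the chunk tuple matches the files' on-disk chunking):

```python
from parcels import FieldSet

filenames = {'U': 'mercator_U_*.nc', 'V': 'mercator_V_*.nc'}   # placeholders
variables = {'U': 'vozocrtx', 'V': 'vomecrty'}                 # placeholders
dimensions = {'time': 'time_counter', 'depth': 'depthu',
              'lat': 'gphif', 'lon': 'glamf'}                   # placeholders

# field_chunksize='auto' (the default) dies in Field.chunk_data() for these
# files; an explicit chunksize, e.g. the files' own (1, 3059, 4320), avoids
# the failing code path.
fieldset = FieldSet.from_nemo(filenames, variables, dimensions,
                              field_chunksize=(1, 3059, 4320))
```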
Jamie