COSIMA / cosima-recipes

Example recipes for analyzing model output using the cosima-cookbook infrastructure
https://cosima-recipes.readthedocs.io
Apache License 2.0

Investigate issues associated with .zarr format of new Parcels releases #384

Open hrsdawson opened 2 weeks ago

hrsdawson commented 2 weeks ago

Newer versions of Parcels output trajectory data in zarr format, rather than netCDF. In some (all?) cases, this can create very large numbers of files, clogging NCI projects on Gadi.

To do:

anton-seaice commented 2 weeks ago

The recipe at the moment puts the output in scratch:

dir = ! echo /scratch/$PROJECT/$USER/particle_tracking

and the Parcels docs recommend zarr (https://docs.oceanparcels.org/en/latest/examples/tutorial_output.html#Reading-the-output-file), so maybe we can just add a note about this (i.e. why we are using scratch, and a warning about the large number of files).
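As an aside, the same path can be built without shell magic (a minimal sketch; assumes the PROJECT and USER environment variables set in a standard Gadi session):

```python
import os

# Equivalent to the shell-magic line above, using the environment
# variables set in a standard Gadi session
output_dir = f"/scratch/{os.environ['PROJECT']}/{os.environ['USER']}/particle_tracking"
```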

adele-morrison commented 2 weeks ago

I think what Hannah was getting at is that we need to provide an example of how to postprocess the zarr files to reduce the file numbers before they are transferred to gdata. We recently had an example of a relatively small particle tracking project (only thousands of particles) that resulted in >4 million files stored on gdata. In that case, each particle had a separate file for each of lon, lat, depth, time, etc. at EVERY time/position!

I’m not sure if that’s the default for Parcels now, because our notebook example uses an old parcels version that saved in netCDF. So as Hannah says above, the first step is to check how many files the new default produces and, if that's problematic, what options there are for reducing the file count.
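For that check, counting the files in a store is straightforward (a minimal sketch; the store path is hypothetical):

```python
from pathlib import Path

# Hypothetical location of a Parcels zarr output store; a directory-style
# zarr store holds one file per chunk per variable, plus metadata files
store = Path("/scratch/x00/abc123/particle_tracking/output.zarr")
n_files = sum(1 for p in store.rglob("*") if p.is_file())
print(f"{store} contains {n_files} files")
```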

anton-seaice commented 2 weeks ago

Oh I see! That's definitely problematic.

> I’m not sure if that’s the default for Parcels now, because our notebook example uses an old parcels version that saved in netCDF.

Our example is up to date: we moved to Parcels 3 at the end of last year, when 'conda-analysis' moved over to Parcels 3, so the example is already writing zarr.

hrsdawson commented 2 weeks ago

Okay, well in that case maybe we just need to:

1. Do what @anton-seaice suggested and add a more explicit warning, maybe in/before cell 4(?). Although the recipe uses scratch, it doesn't explain why, and it even says "change to any directory you would like" - oops, probably my bad when this example was first created.
2. Provide a link to the Parcels example for consolidating files, and step through in the recipe how to do this before moving trajectory data to gdata (see the sketch below)?
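One possible consolidation step along those lines (a minimal sketch, not necessarily the approach the Parcels docs recommend; paths are hypothetical):

```python
import xarray as xr

# Read the many-file zarr store and rewrite it as a single netCDF file
# before copying to gdata (paths are hypothetical)
ds = xr.open_zarr("/scratch/x00/abc123/particle_tracking/output.zarr")
ds.to_netcdf("/scratch/x00/abc123/particle_tracking/output.nc")
```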

hrsdawson commented 2 weeks ago

@anton-seaice is there anything else you think would be worth updating?

anton-seaice commented 2 weeks ago

Sounds good! It's also possible that changing the

> outputdt – Interval which dictates the update frequency of file output

argument in the ParticleFile instance (https://docs.oceanparcels.org/en/latest/reference/particlefile.html#module-parcels.particlefile) would reduce the number of files produced, but this would need some experimentation.
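Something like this, for instance (a minimal sketch; pset stands in for an already-constructed ParticleSet, and as noted, the effect on file count would need testing):

```python
from datetime import timedelta

# pset is assumed to be an existing parcels.ParticleSet (construction omitted);
# a coarser outputdt means fewer output timesteps are written
output_file = pset.ParticleFile(name="output.zarr", outputdt=timedelta(hours=24))
```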

> 2. Provide a link to the Parcels example for consolidating files, and step through in the recipe how to do this before moving trajectory data to gdata?

This article makes a good point: storing tracks (i.e. lines) is more efficient in a vector format (e.g. GeoJSON, KML) than in a raster-style format like netCDF. I don't know how much we want to mess with that, but whatever we do, compressing the output will most likely save a lot of space.
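To illustrate the vector option (a minimal sketch; assumes the usual Parcels output layout with trajectory/obs dimensions, and the paths are hypothetical):

```python
import numpy as np
import xarray as xr
import geopandas as gpd
from shapely.geometry import LineString

# Build one LineString per particle from the lon/lat arrays
ds = xr.open_zarr("/scratch/x00/abc123/particle_tracking/output.zarr")
lines = []
for i in range(ds.sizes["trajectory"]):
    lon = ds.lon.isel(trajectory=i).values
    lat = ds.lat.isel(trajectory=i).values
    valid = ~np.isnan(lon)  # deleted particles leave NaN padding
    if valid.sum() >= 2:  # a line needs at least two points
        lines.append(LineString(zip(lon[valid], lat[valid])))

gpd.GeoDataFrame(geometry=lines, crs="EPSG:4326").to_file("tracks.geojson", driver="GeoJSON")
```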

Thomas-Moore-Creative commented 2 weeks ago

Aloha, better understanding how we can/could use zarr on Gadi is indeed an important issue right now.

I need to tackle it here: https://github.com/Thomas-Moore-Creative/Climatology-generator-demo/issues/12 and intend to employ Zarr ZipStore. There are apparently some limitations and important details, but I can't speak to them fully until I try it myself.
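A rough idea of what that could look like (a minimal sketch; untested on Gadi, and note that entries in a zarr ZipStore cannot be rewritten once written):

```python
import xarray as xr
import zarr

# Repack a directory-style zarr store into a single zip file
# (hypothetical paths; ZipStore entries cannot be overwritten)
ds = xr.open_zarr("output.zarr")
with zarr.ZipStore("output.zarr.zip", mode="w") as store:
    ds.to_zarr(store)
```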

A few further throwaway comments for consideration: