AndresSepulveda opened this issue 3 years ago
I am not sure why it did not work, but there may be other parameters that also need adjusting when truncating a file.
However, there is an undocumented feature for importing a subset of times and particles from a large file:
o = opendrift.open(<file>, times=np.arange(0, 100), elements=np.arange(0, 50))
See also this possibility: https://opendrift.github.io/gallery/example_huge_output.html
Hi, I am doing this to open a file and make a plot/animation:
Python 3.9.2 | packaged by conda-forge | (default, Feb 21 2021, 05:02:46) [GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
However, I get this error:
14:47:41 DEBUG opendrift.models.basemodel: Adding 17 config items from basemodel
14:47:41 DEBUG opendrift.models.basemodel: Adding 4 config items from basemodel
14:47:41 DEBUG opendrift.models.basemodel: Adding 34 config items from basemodel
14:47:42 INFO opendrift.models.basemodel: OpenDriftSimulation initialised (version 1.5.6 / v1.5.6-69-gb550620)
14:47:42 DEBUG opendrift.models.basemodel: Adding 13 config items from oceandrift
14:47:42 DEBUG opendrift.models.basemodel: Overwriting config item seed:z
14:47:42 DEBUG opendrift.export.io_netcdf: Importing from sml.nc
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/matlab/opendrift/opendrift/__init__.py", line 76, in open
    o.io_import_file(filename, times=times, elements=elements)
  File "/home/matlab/opendrift/opendrift/export/io_netcdf.py", line 262, in import_file
    filetime = infile.variables['time'][times]
  File "src/netCDF4/_netCDF4.pyx", line 4396, in netCDF4._netCDF4.Variable.__getitem__
  File "/home/matlab/miniconda3/envs/opendrift/lib/python3.9/site-packages/netCDF4/utils.py", line 267, in _StartCountStride
    raise IndexError(msg)
IndexError: integer index exceeds dimension size
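For what it is worth, this IndexError means one of the requested index arrays reaches past the corresponding dimension of the file being opened. A minimal sketch (plain numpy; the dimension sizes of 100 times and 1000 trajectories are assumed from the ncks history below) of clipping the requested indices to what the file actually contains:

```python
import numpy as np

def clip_indices(requested, dim_size):
    """Drop any requested indices that fall outside a dimension of size dim_size."""
    requested = np.asarray(requested)
    return requested[requested < dim_size]

# sml.nc was cut to 100 time steps and 1000 trajectories (see the ncks history),
# so valid 0-based indices are 0..99 and 0..999 respectively
times = clip_indices(np.arange(0, 200), 100)     # clipped down to 0..99
elements = clip_indices(np.arange(0, 50), 1000)  # already in range, unchanged
```

If the clipped arrays are shorter than what was requested, the truncation did not leave as many records as expected.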
This sml.nc file was itself cut (with ncks) from a very large file:
:history = "Thu Apr 8 22:16:10 2021: ncks -d trajectory,1,1000 loco_ancud_20000101_to_20010330_sml.nc sml.nc\nThu Apr 8 13:48:19 2021: ncks -d time,1,100 loco_ancud_20000101_to_20010330.nc loco_ancud_20000101_to_20010330_sml.nc\nCreated 2021-04-06 10:31:47.899994" ;
which is too large to be opened directly:
numpy.core._exceptions.MemoryError: Unable to allocate 351. GiB for an array with shape (1296000, 3234) and data type [('ID', '<i4'), ('status', '<i4'), ('moving', '<i4'), ('age_seconds', '<f4'), ('origin_marker', '<i2'), ('lon', '<f4'), ('lat', '<f4'), ('z', '<f4'), ('wind_drift_factor', '<f4'), ('terminal_velocity', '<f4'), ('x_sea_water_velocity', '<f4'), ('y_sea_water_velocity', '<f4'), ('x_wind', '<f4'), ('y_wind', '<f4'), ('upward_sea_water_velocity', '<f4'), ('ocean_vertical_diffusivity', '<f4'), ('sea_surface_wave_significant_height', '<f4'), ('surface_downward_x_stress', '<f4'), ('surface_downward_y_stress', '<f4'), ('turbulent_kinetic_energy', '<f4'), ('turbulent_generic_length_scale', '<f4'), ('sea_floor_depth_below_sea_level', '<f4'), ('land_binary_mask', '<f4')]
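The 351 GiB figure is consistent with trying to load the full file at once: the structured dtype in the error is 90 bytes per record, and 1296000 × 3234 records of 90 bytes each comes to roughly 351 GiB. A quick check:

```python
import numpy as np

# the structured dtype from the MemoryError above
dt = np.dtype([('ID', '<i4'), ('status', '<i4'), ('moving', '<i4'),
               ('age_seconds', '<f4'), ('origin_marker', '<i2'),
               ('lon', '<f4'), ('lat', '<f4'), ('z', '<f4'),
               ('wind_drift_factor', '<f4'), ('terminal_velocity', '<f4'),
               ('x_sea_water_velocity', '<f4'), ('y_sea_water_velocity', '<f4'),
               ('x_wind', '<f4'), ('y_wind', '<f4'),
               ('upward_sea_water_velocity', '<f4'),
               ('ocean_vertical_diffusivity', '<f4'),
               ('sea_surface_wave_significant_height', '<f4'),
               ('surface_downward_x_stress', '<f4'),
               ('surface_downward_y_stress', '<f4'),
               ('turbulent_kinetic_energy', '<f4'),
               ('turbulent_generic_length_scale', '<f4'),
               ('sea_floor_depth_below_sea_level', '<f4'),
               ('land_binary_mask', '<f4')])

gib = 1296000 * 3234 * dt.itemsize / 2**30
print(dt.itemsize, round(gib))  # 90 bytes per record, ~351 GiB
```

The shape (1296000, 3234) matches the full file's dimensions (before the ncks truncation), which is why opening it directly fails.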
Why didn't subsetting the times and trajectories work?
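One thing worth double-checking: ncks -d dim,start,end keeps indices start through end inclusive, so -d time,1,100 leaves 100 time steps (original indices 1..100, re-indexed 0..99 in sml.nc) and -d trajectory,1,1000 leaves 1000 trajectories. A small numpy illustration of that re-indexing:

```python
import numpy as np

# stand-in for the original file's time axis
full_time = np.arange(1296000)

# what `ncks -d time,1,100` keeps: original indices 1..100, INCLUSIVE
truncated = full_time[1:100 + 1]

# the truncated file then has 100 time steps, addressed 0..99,
# so times=np.arange(0, 100) should be a valid request against sml.nc
print(len(truncated))  # 100
```

(This only illustrates the inclusive-range convention; it does not by itself explain the IndexError.)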
Regards,
Andrés