I don't have access to files in your repositories. You can put it in the shared repository on SciServer (OceanSpy_tutorials), or export it in markdown format and paste it here in the GitHub issue. In JupyterLab it's under File - Export Notebook As; in Jupyter Notebook I think it's under Download as. You can see a preview of your GitHub issue (top-left of the comment box, close to your profile picture).
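The same export can also be run from a notebook cell with nbconvert (the notebook filename below is just a placeholder):

# Produce a markdown copy of the notebook that can be pasted into a GitHub issue
# (the filename is an example, substitute your own notebook).
!jupyter nbconvert --to markdown my_notebook.ipynb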
Hope this works:
%%time
import oceanspy as ospy

%%time
od = ospy.open_oceandataset.EGshelfIIseas2km_ASR()

%%time
time_range = '2008-08-29T15:00:00'
XRange = od.dataset['X'].values
YRange = od.dataset['Y'].values
ZRange = od.dataset['Z'].where(od.dataset['Z'].values>-3000).values
ZRange = None
od_snapshot = od.cutout(varList=['Temp','S','U','V','W', 'fCoriG', 'rA','rAz', 'dxC','dxF','dyC','dyF','dxG','dyG','dyU','dxV', 'HFacC','HFacW','HFacS'], timeRange=time_range, XRange=XRange, YRange=YRange, ZRange=ZRange)
od_snapshot = od_snapshot.merge_potential_density_anomaly()
od_snapshot = od_snapshot.merge_eddy_kinetic_energy()
od_snapshot = od_snapshot.merge_Ertel_potential_vorticity()
od_snapshot = od_snapshot.merge_Okubo_Weiss_parameter()
Is your goal to extract the whole domain, at depths shallower than 3000 m, on August 29?
If yes, this would be the way to do it:
od = ospy.open_oceandataset.EGshelfIIseas2km_ASR()
od_snapshot = od.cutout(varList=['Temp','S','U','V','W',
'fCoriG',
'rA','rAz',
'dxC','dxF','dyC','dyF','dxG','dyG','dyU','dxV',
'HFacC','HFacW','HFacS'],
timeRange='2008-08-29T15:00:00',
ZRange=[0, -3000])
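A guess at why the explicit-depths version above failed: od.dataset['Z'].where(...) without drop=True keeps NaN wherever the condition is False, so the resulting ZRange array contains NaNs. If cutout accepts an array of depths the way it accepts the X and Y arrays, dropping those NaNs first might also work (untested sketch):

# Drop the NaNs that .where() introduces before passing explicit depths to cutout
# (hypothetical alternative to the [0, -3000] bounds above, not tested).
ZRange = od.dataset['Z'].where(od.dataset['Z'] > -3000, drop=True).values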
Many thanks. That helps!
I'm confused by cutout! I want to give examples of several ways to make sub-samples by defining different (X, Y, Z, Time) ranges, but I run into errors if a subset of Z levels is cut out.
See: https://apps.sciserver.org/dockervm39/4297345d-355d-11e9-8a88-5254001d4703/lab/tree/Storage/Thomas.Haine/persistent/OceanSpy_dev/OceanSpy_twnh2_to_Mattia.ipynb
It gives: ValueError: zero-size array to reduction operation minimum which has no identity
It works if I keep all Z levels.
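For reference, that is the error NumPy raises when a min reduction is applied to an empty array, which suggests the Z selection ends up empty somewhere along the way; a minimal reproduction outside OceanSpy:

import numpy as np

# An empty array has no minimum, so this raises:
# ValueError: zero-size array to reduction operation minimum which has no identity
np.empty(0).min()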
I want to run the notebook interactively on a small subset, then queue it for a big subset. I'm trying to include all the functionality in compute. [And learn Python...]
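The small-subset/big-subset switch might look something like this (the flag name and the small X/Y windows are made-up illustrative values, not from the notebook):

# Toggle between a small interactive test region and the full domain
# (flag name and the small X/Y windows are illustrative only).
small_run = True

if small_run:
    XRange = [-30, -25]   # small X window for quick interactive runs
    YRange = [66, 68]     # small Y window for quick interactive runs
else:
    XRange = None         # None keeps the full axis
    YRange = None

od_snapshot = od.cutout(varList=['Temp', 'S', 'U', 'V', 'W'],
                        timeRange='2008-08-29T15:00:00',
                        XRange=XRange, YRange=YRange,
                        ZRange=[0, -3000])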