willaguiar opened this pull request 4 months ago
The Panan isobath is on @schmidt-christina's disk. Perhaps we can move it to ik11 to avoid permission issues?
We definitely don't want our examples to depend on data that lives in a user's personal home or g/data directories.
navidcy commented on 2024-07-07T04:22:42Z ----------------------------------------------------------------
Line #1. def save_SWMT(expt, session, start_time, end_time, outpath,model='mom6', lat_north = -59, n = None):
space after commas
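For reference, the same signature with PEP 8 spacing applied (spaces after commas, no spaces around keyword defaults) might read as follows; the body is elided, so this is only a formatting illustration:

```python
# Signature from the notebook with PEP 8 spacing applied.
# The body is omitted; this only illustrates the spacing fix.
def save_SWMT(expt, session, start_time, end_time, outpath,
              model='mom6', lat_north=-59, n=None):
    ...
```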
navidcy commented on 2024-07-07T04:22:42Z ----------------------------------------------------------------
Line #215. ds[SST.cf['latitude'].name].attrs= {'standard_name':'latitude'}
space after colons
navidcy commented on 2024-07-07T04:22:43Z ----------------------------------------------------------------
Line #228. resolution = str(0.01/(pme_net.cf['longitude'].size/3600))[2:]
what is this here doing? reads strange to me
This is an attempt to automatically get the resolution of the model, assuming a global square-cell grid. This is necessary so we can identify the correct name of the isobath file for panan (which comes in 3 resolutions). So this line calculates the resolution and converts it to a string to use it in...
contour_dir = "/home/142/cs6673/work/mom6_comparison/Antarctic_slope_contours/Antarctic_slope_contour_1000m_MOM6_" + resolution + "deg.nc"
Happy to improve if you have a suggestion
adele-morrison commented on 2024-07-07T06:49:58Z ----------------------------------------------------------------
Maybe a dictionary with experiment names mapped to resolution is best? This wouldn't work for a regional model, right?
willaguiar commented on 2024-07-07T07:01:10Z ----------------------------------------------------------------
True, it wouldn't if the regional model is not circumpolar. How would we map the dict to the experiment name? Something like...
if expt == 'panant-01-zstar-ACCESSyr2': resolution = '01'
? Or maybe only extracting the 01 string from the name for experiments starting with 'panant'?
schmidt-christina commented on 2024-07-08T00:54:08Z ----------------------------------------------------------------
I extract the resolution from the experiment name, but that assumes we keep the naming convention:
resolution = expt.split('-')[1]
I haven't changed that one
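A sketch combining the two ideas discussed above: an explicit dictionary for known experiments, falling back to parsing the name. The example experiment name and the 'panant-&lt;resolution&gt;-...' convention are taken from this thread; the function and dictionary names are hypothetical:

```python
# Known experiment -> resolution mappings; extend as needed.
# The entry below is the example name from this discussion.
EXPT_RESOLUTION = {
    'panant-01-zstar-ACCESSyr2': '01',
}

def get_resolution(expt):
    """Return the resolution string ('01', '005', ...) for an experiment.

    Prefers the explicit dictionary; otherwise falls back to parsing the
    name, which assumes the 'panant-<resolution>-...' naming convention
    is kept.
    """
    if expt in EXPT_RESOLUTION:
        return EXPT_RESOLUTION[expt]
    return expt.split('-')[1]
```

The returned string could then be interpolated into the isobath filename the same way the notebook currently builds contour_dir.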
navidcy commented on 2024-07-07T04:22:44Z ----------------------------------------------------------------
what are all these 0.3.0?
willaguiar commented on 2024-07-07T06:41:09Z ----------------------------------------------------------------
This is a print that only appears when I run ...
cc.querying.getvar
...through the VSCode ssh system. It doesn't appear when I run it on ARE, as is usually done. Not sure why though (but other than the 0.3.0 print itself, there is no other difference). The new edits/submission will be run on ARE, so this will not appear.
navidcy commented on 2024-07-07T04:22:44Z ----------------------------------------------------------------
Line #1. def shelf_mask_isobath(var,model_dict):
space after comma
navidcy commented on 2024-07-07T04:22:45Z ----------------------------------------------------------------
space after commas
@willaguiar could you rename the PR to something more descriptive? E.g., "Convert the ..... example to be model agnostic"? Don't use SWMT... Personally, I'm not 100% sure I know what that is. Is it another acronym, similar to CDW, AABW, NADW, etc.?
navidcy commented on 2024-07-07T04:29:28Z ----------------------------------------------------------------
diagnostics(see this
-> diagnostics (see this
navidcy commented on 2024-07-07T04:29:29Z ----------------------------------------------------------------
Line #19. import cf_xarray as cfxr
this is under "plotting"! Let's remove the ## plotting comment, or put the import cf_xarray as cfxr line before that comment.
navidcy commented on 2024-07-07T04:29:30Z ----------------------------------------------------------------
Rephrase to "We suggest..."
Always use plural because we want the users to feel that the community is guiding them, not a particular individual.
Seems that the PR closes #341, right @willaguiar? In that case, can you edit your first comment in the PR to write "closes #341" somewhere... this way the PR will be linked with the issue and the issue will be automatically closed upon merging this PR.
Also, the phrase "Solves issue #277" is not recognized by GitHub to automatically link the issue. You should rephrase to "Closes #277" or "Resolves #277".
schmidt-christina commented on 2024-07-08T07:59:23Z ----------------------------------------------------------------
Line #132. pressure = xr.DataArray(p_from_z(depth_tile, lat_t), coords = [yt_ocean, xt_ocean], dims = [SST.cf['latitude'].name, SST.cf['longitude'].name], name = 'pressure', attrs = {'units':'dbar'})
Here, and whenever you use the gsw functions: There is no need to explicitly convert it into xr.DataArray anymore as the result is already a DataArray.
pressure = p_from_z(depth_tile, lat_t)
pressure.attrs = {'units': 'dbar'}
Some accidental newlines (e.g. th
e water mass transformation in each model simulation)
I admit this has always been an example that feels impenetrable for me to grasp... There is too much hardcoded code, and in a dense manner that makes it hard for me. I tried to clean it up a bit. It's not there yet; somebody should run it to check it actually runs. It takes too long for my patience so I gave up on running it -- sorry... :(
Thanks all for your efforts here!
I also find it a bit hard to follow, mostly because all the magic is happening within 1 function in a really long cell. Would maybe splitting the code into components/functions make it more understandable? For example, one function to load, one function to calculate the thermal component, one for the haline, one for isopycnal binning and so on?
I think that splitting code like this would make it way more digestible for the non-initiated in SWMT. I know it is probably a lot of work to do it... so apologies for the suggestion.
It takes too long for my patience
If the slow part is the density binning, we could convert to using xhistogram or a faster binning method.
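As a toy illustration of what the binning step does (a pure-Python stand-in; the real code would use xhistogram or a vectorised numpy histogram on full arrays, and all names here are hypothetical):

```python
import bisect

def bin_by_density(sigma, values, bin_edges):
    """Sum `values` into density classes defined by `bin_edges`.

    A pure-Python stand-in for the xhistogram-style weighted histogram
    used in density binning; real code would operate on whole arrays.
    Values whose density falls outside the edges are dropped.
    """
    totals = [0.0] * (len(bin_edges) - 1)
    for s, v in zip(sigma, values):
        i = bisect.bisect_right(bin_edges, s) - 1
        if 0 <= i < len(totals):
            totals[i] += v
    return totals
```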
I would like to see something in the spirit of what @julia-neme suggests!
But also that could happen in a future PR...
That could be interesting, but perhaps separating the SWMT function into thermal and haline components might require overlapping computation. E.g., the saline contraction (used for the salt transformation), the thermal expansion (used for the heat transformation), and the density binning all require SST, SSS, and pressure, so we would need to load them three times, once for each function.
I particularly like the function the way it is, because it saves the full transformation and its components (heat + salt, heat, salt) already binned into density, so one can easily loop this function along time (I often did) using just the memory required for each time slice.
Happy to try to change the function though, if people disagree.
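For what it's worth, a decomposition could load the shared fields once and pass them to each component, avoiding the triple loading mentioned above. A rough sketch with placeholder arithmetic standing in for the real xarray/gsw computations (all names and coefficient values are illustrative, not the notebook's actual implementation):

```python
def load_surface_fields(expt):
    # Stand-in for the cc.querying.getvar calls; returns the fields the
    # thermal, haline, and binning steps all share, loaded only once.
    return {'sst': 1.0, 'sss': 35.0, 'heat_flux': 100.0, 'salt_flux': 0.01}

def thermal_component(fields, alpha=2e-4, cp=3992.0):
    # Placeholder formula: transformation driven by surface heat flux.
    return -(alpha / cp) * fields['heat_flux']

def haline_component(fields, beta=7.4e-4):
    # Placeholder formula: transformation driven by surface salt flux.
    return beta * fields['sss'] * fields['salt_flux']

def compute_swmt(expt):
    fields = load_surface_fields(expt)  # shared inputs loaded once
    heat = thermal_component(fields)
    salt = haline_component(fields)
    return {'heat': heat, 'salt': salt, 'total': heat + salt}
```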
Ok, I did an updated version of the code where I solved most of the suggestions. However, there have been some changes in the organization of the recipes GitHub, so I'm not sure my push went to the right place, and I have not been able to do a pull request. The new code is here for review though. (Sorry for the ignorance on GH.)
Nice Wilton!! Unrelated to this notebook, but have you looked at the Wiki to see how to review/push/etc.? If so, have you found it useful? Looking for guinea pigs to tell me if there is stuff missing or not working.
Closes #402
First time looking at it now! And indeed I found it useful (especially the histogram). I classify it as VERY helpful (it made me realize I needed to first update my fork before creating the pull request). Still not sure it went the right way though.
Where are we with this? @willaguiar, let us know if you don't have capacity to do the suggested changes.
(I admit I didn't look closely at what changes were suggested... just saw there were a few review comments and I'm wondering about the status of the PR.)
I think for the last review most suggestions were met, except the splitting of the SWMT function into heat and salt components, which I think might make the code prolix (happy to try if there is wide disagreement).
So I'm not sure if something else is required to be done... @schmidt-christina @julia-neme?
Look, regarding the splitting: if it's something you want to embark on, do it. But if you half-do it then it might not be useful.
Another option is to merge the PR and open an issue documenting a potential improvement for the future (e.g., splitting the SWMT function into heat and salt components, etc.).
julia-neme commented on 2024-09-11T01:27:27Z ----------------------------------------------------------------
This is not running for me; the computation fails due to the following line in save_SWMT:
T = cc.querying.getvar(expt, model_vars[model]['temperature'], session, frequency='1 monthly', chunks=model_vars[model]['chunks'])
I think using start_time and end_time solves this, but maybe there is a reason behind opening all times within the experiment?
Sorry for another request of changes, Wilton! I'm happy to add the start and end times, but I'm not sure whether all dates were needed!
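One way to make the time window explicit is to thread start_time/end_time through a small wrapper. This is only a sketch: it assumes cc.querying.getvar accepts start_time and end_time keywords, as the suggestion above implies, and the wrapper and argument names are hypothetical. The loader is passed in as an argument so the sketch stays self-contained:

```python
def load_monthly_var(getvar, expt, varname, session, chunks,
                     start_time=None, end_time=None):
    """Load a monthly-mean variable restricted to a time window.

    `getvar` is passed in (e.g. cc.querying.getvar) rather than imported;
    start_time/end_time limit how much data is opened, instead of
    opening all times in the experiment.
    """
    return getvar(expt, varname, session,
                  frequency='1 monthly',
                  start_time=start_time, end_time=end_time,
                  chunks=chunks)
```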
Closes #277 Closes #341
This is an attempt to make the surface water mass transformation example partially model agnostic. I used cf-xarray to deal with operations along dimensions. To deal with the different diagnostics required by each model, I added an embedded dictionary to the save_SWMT function. The function also spits out the specific dictionary for that model so it can be used later for shelf masking.
Shelf masking requires the input isobath file. Currently we have these isobath files only for OM2 and Panans. The Panan isobath is on @schmidt-christina's disk. Perhaps we can move it to ik11 to avoid permission issues? For mom5 I also changed the diag to use pme_net instead of pme_river, according to issue #341.