The Supplementary Information of Solodoch et al., 2022 says: "The model is forced with the 1958-2018 JRA55-do version 1.4.0 atmospheric reanalysis, the updated version of (Tsujino et al., 2018), for three consecutive 61 year cycles. We analyze the 3rd cycle." So the experiment should be "01deg_jra55v140_iaf_cycle3". In this experiment, the passive tracers are available at monthly frequency from 1958/01/31 to 2018/12/31 (the full 61 years) and at daily frequency only for the first month, 1958/01/01 to 1958/01/31. However, the velocity field of this experiment is only available as monthly means. Meanwhile, I noticed that "01deg_jra55v140_ryf_9091" also has 4 passive tracers at monthly frequency from 2126/01/31 to 2141/12/31. In this RYF experiment we have daily velocity fields from 1950 to 2180, which cover the same period, and we even have 3-hourly velocity fields from 2141/01/01 to 2142/01/01, which overlap the last year of the 4 passive tracers.
Based on the above, I recommend using the 01deg_jra55v140_ryf_9091 experiment from 2126/01/31 to 2141/12/31 (or a shorter period) for our test. However, before we start, I think it might be better to ask the authors of Solodoch et al., 2022 why they did not choose this RYF version. @PaulSpence @vtamsitt @LennartBach what do you think?
Fischer et al. (2022) tested the time-step and mixed-layer parameterization issues in the Southern Ocean (and other oceans) with OceanParcels. They show that a 60 s time step captures shorter oscillation frequencies better than a 1-hour time step when the velocity field is updated every 5 days. They also show that the Markov_0 kernel (found in kernels.py) performs better for mixed-layer physics when compared against microplastics observations.
I do not think the 60 s time step is an easy choice because it will cost much more compute than the 90 min or 60 min time steps we are used to. However, I think it is worth trying to see what happens. Meanwhile, I recommend setting up a series of time-step tests to compare, for example 60 s, 5 min, 15 min, 45 min and 90 min. To isolate the time step from the mixed-layer parameterization, I think this test series should not include any extra mixed-layer parameterization.
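For concreteness, a minimal sketch of such a time-step series with OceanParcels (v2-style API) is below. The file paths, MOM5 variable/dimension names, release positions and run length are placeholders rather than our actual configuration, and the output format depends on the Parcels version:

```python
from datetime import timedelta
import numpy as np
from parcels import FieldSet, ParticleSet, JITParticle, AdvectionRK4_3D

# Placeholder ACCESS-OM2-01 output files and variable/dimension names (assumptions).
filenames = {'U': 'ocean_daily_u.nc', 'V': 'ocean_daily_v.nc', 'W': 'ocean_daily_w.nc'}
variables = {'U': 'u', 'V': 'v', 'W': 'wt'}
dimensions = {'lon': 'xu_ocean', 'lat': 'yu_ocean', 'depth': 'st_ocean', 'time': 'time'}
fieldset = FieldSet.from_netcdf(filenames, variables, dimensions)

# Placeholder release positions on the Weddell Sea shelf.
n = 1000
lon0, lat0, z0 = np.full(n, -40.0), np.full(n, -74.0), np.full(n, 0.5)

# Time-step series: 60 s, 5 min, 15 min, 45 min, 90 min; advection only, no extra mixing kernel.
for dt_s in (60, 300, 900, 2700, 5400):
    pset = ParticleSet(fieldset=fieldset, pclass=JITParticle, lon=lon0, lat=lat0, depth=z0)
    out = pset.ParticleFile(name=f'dt_test_{dt_s}s.zarr', outputdt=timedelta(days=5))
    pset.execute(AdvectionRK4_3D, runtime=timedelta(days=365),
                 dt=timedelta(seconds=dt_s), output_file=out)
```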
For the mixed-layer parameterization, I think we could try the Markov_0 kernel and the current mixed-layer shuffling kernel from HD for the 60 s, 15 min and 90 min tests.
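For reference, a much-simplified, constant-diffusivity random-walk kernel in the Markov-0 spirit could look like the sketch below. This is only a placeholder, not the Markov_0 kernel from kernels.py (which also handles a depth-dependent K and its gradient term); fieldset.Kz is assumed to have been registered with fieldset.add_constant('Kz', ...):

```python
import math
from parcels import ParcelsRandom

def Markov0Vertical(particle, fieldset, time):
    # Random vertical displacement with constant diffusivity fieldset.Kz; a uniform step of
    # half-width sqrt(6*Kz*dt) has variance 2*Kz*dt, matching a random-walk diffusion model.
    dz = ParcelsRandom.uniform(-1.0, 1.0) * math.sqrt(6.0 * fieldset.Kz * math.fabs(particle.dt))
    particle.depth += dz
    if particle.depth < 0.6:   # keep particles below the top cell centre (placeholder value)
        particle.depth = 0.6
```

It would be combined with advection as pset.Kernel(AdvectionRK4_3D) + pset.Kernel(Markov0Vertical), and the same could be repeated with the HD mixed-layer shuffling kernel for the 60 s, 15 min and 90 min cases.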
That is my initial idea; looking forward to your further suggestions and comments! @PaulSpence @vtamsitt @LennartBach
A few suggestions: i) Check that the tracers in 01deg_jra55v140_ryf_9091 are in AABW source regions, similar to the IAF Solodoch study. ii) Compare AABW trajectories from the 3-hourly vs daily velocity data to the RYF dye tracers. iii) If we get a reasonable result, then later we can play with the parcel dt and mixing schemes for sensitivities.
Update: the 01deg_jra55v140_ryf_9091 tracers are not available, since that run was only designed for model testing and has been rerun several times. AK and AH suggested we use the daily velocity field from 01deg_jra55v140_iaf, which covers 1987 to 2019 (this experiment also has monthly fields, covering 1958 to 2019). We still need the 4 dye tracers from 01deg_jra55v140_iaf_cycle3, which are available for 1958-1963 (daily) and 1958-2019 (monthly).
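If we go this way, loading the fields could look roughly like the sketch below using the COSIMA cookbook; the tracer variable name and the exact frequency strings are assumptions to be checked against the database:

```python
import cosima_cookbook as cc

session = cc.database.create_session()

# Daily 3-D velocities from the IAF run (available 1987-2019); the period here is just an example.
u = cc.querying.getvar('01deg_jra55v140_iaf', 'u', session,
                       frequency='1 daily', start_time='1987-01-01', end_time='1992-12-31')
v = cc.querying.getvar('01deg_jra55v140_iaf', 'v', session,
                       frequency='1 daily', start_time='1987-01-01', end_time='1992-12-31')

# Monthly dye tracer from cycle 3 (1958-2019); 'passive_weddell' is a placeholder variable name.
dye = cc.querying.getvar('01deg_jra55v140_iaf_cycle3', 'passive_weddell', session,
                         frequency='1 monthly')
```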
Description: The trajectories and the tracer show the most similar pattern at about the 200 m layer. Shallower than 200 m, the spreading of the particles is much wider and more vigorous than the tracer (I assume only the uppermost layer is regularly reset to zero; this is my current understanding of Solodoch et al., 2022). Deeper than 200 m, e.g. at the 1000 m layer and below, the spreading of the particles is much narrower and more conservative than the tracer.
Idea 1: calculate the probability density function (PDF) of the trajectories as in Van Sebille et al. (2013). Then we could multiply (or divide, same thing) this PDF field by a coefficient chosen so that the difference (PDF minus tracer concentration field) reaches its lowest value or variance (see the numpy sketch after idea 3).
Idea 2: sum the trajectory PDF and the tracer concentration over all grid boxes of the 3-D simulation domain at the same time, then normalize each sum to 100% (or 1). Both normalized fields are then dimensionless and show the pathway of bottom water sinking and spreading, so they are comparable. However, both ideas have a weakness: the dye is injected continuously, so the closer to the source, the higher the tracer concentration, whereas this factor does not exist in our single seeding (or multiple seedings, but only at the beginning).
To overcome this weakness, I have idea 3: do not use the probability density function of the trajectories but the particles' final positions, and compare these positions with Δ(tracer concentration) = concentration(t2) − concentration(t1). I think this is more comparable in its actual meaning: both show where the particles/dye have newly arrived at the current time.
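A minimal numpy sketch of ideas 1 and 2 (function and argument names are mine; both fields are assumed to be on the same 3-D grid):

```python
import numpy as np

def compare_pdf_to_tracer(pdf, tracer):
    """pdf: particle counts (or PDF) per grid box; tracer: concentration on the same grid."""
    # Idea 2: normalize each field to sum to 1 over the whole domain, making both dimensionless.
    pdf_norm = pdf / pdf.sum()
    tracer_norm = tracer / tracer.sum()

    # Idea 1: a single coefficient alpha minimizing ||alpha * pdf - tracer||^2
    # (ordinary least squares: alpha = <pdf, tracer> / <pdf, pdf>).
    alpha = (pdf * tracer).sum() / (pdf * pdf).sum()
    residual_variance = np.var(alpha * pdf - tracer)
    return pdf_norm, tracer_norm, alpha, residual_variance
```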
The picture has not been checked and I have some questions about its correctness. Obviously, these particles are too shallow for 2 years (the maximum depth they reach is less than 300 m).
For you reference, I show 3 example trajectory depth variation: z[1,:] array([0.54128075, 0.44135606, 0.24004458, 0.24004458, 1.7472634 , 8.461469 , 8.461469 , 7.9442143 , 7.340628 , 8.461469 , 7.9442143 , 7.340628 , 8.461469 , 7.9442143 , 7.340628 , 8.461469 , 7.9442143 , 7.340628 , 8.461469 , 7.9442143 , 7.340628 , 8.461469 , 7.9442143 , 7.340628 , 8.461469 , 7.9442143 , 7.340628 , 7.340628 , 7.3322783 , 6.3668513 , 3.9432046 , 3.6970658 , 3.0679257 , 3.737352 , 3.7330825 , 7.340628 , 7.3322783 , 6.3668513 , 3.9432046 , 3.6970658 , 3.0679257 , 3.737352 , 3.7330825 , 7.340628 , 7.3322783 , 6.3668513 , 3.9432046 , 3.6970658 , 3.0679257 , 3.737352 , 3.7330825 , 7.340628 , 7.3322783 , 6.3668513 , 3.9432046 , 3.6970658 , 3.0679257 , 3.737352 , 3.7330825 , 7.340628 , 7.3322783 , 6.3668513 , 3.9432046 , 3.6970658 , 3.0679257 , 3.737352 , 3.7330825 , 7.340628 , 7.3322783 , 6.3668513 , 3.9432046 , 3.6970658 , 3.0679257 , 3.737352 , 3.7330825 , 7.340628 , 7.3322783 , 6.3668513 , 3.9432046 , 3.6970658 , 3.0679257 , 3.737352 , 3.7330825 , 7.340628 , 7.3322783 , 6.3668513 , 3.9432046 , 3.6970658 , 3.0679257 , 3.737352 , 3.7330825 , 7.340628 , 7.3322783 , 6.3668513 , 3.9432046 , 3.6970658 , 3.0679257 , 3.737352 , 3.7330825 , 7.340628 , 7.3322783 , 6.3668513 , 3.9432046 , 3.6970658 , 3.0679257 , 3.737352 , 3.7330825 , 7.340628 , 7.3322783 , 6.3668513 , 3.9432046 , 3.6970658 , 3.0679257 , 3.737352 , 3.7330825 , 7.340628 , 7.3322783 , 6.3668513 , 3.9432046 , 3.6970658 , 3.0679257 , 3.737352 , 3.7330825 , 7.340628 , 7.3322783 , 6.3668513 , 3.9432046 , 3.6970658 , 3.0679257 , 3.737352 , 3.7330825 , 7.340628 , 7.3322783 , 6.3668513 , 3.9432046 , 3.6970658 , 3.0679257 , 3.737352 , 3.7330825 , 7.340628 , 7.3322783 , 6.3668513 , 3.9432046 , 3.6970658 , 3.0679257 , 3.737352 , 3.7330825 ], dtype=float32)
z[19876,:] array([0.54128075, 1.4577172 , 1.7232739 , 1.7232739 , 1.8019481 , 1.8157074 , 1.8157074 , 1.5055716 , 1.4573966 , 1.8157074 , 1.5055716 , 1.4573966 , 1.8157074 , 1.5055716 , 1.4573966 , 1.8157074 , 1.5055716 , 1.4573966 , 1.8157074 , 1.5055716 , 1.4573966 , 1.8157074 , 1.5055716 , 1.4573966 , 1.8157074 , 1.5055716 , 1.4573966 , 1.4573966 , 1.6932005 , 2.436898 , 2.691515 , 1.8595737 , 1.564193 , 1.7136767 , 1.7379292 , 1.4573966 , 1.6932005 , 2.436898 , 2.691515 , 1.8595737 , 1.564193 , 1.7136767 , 1.7379292 , 1.4573966 , 1.6932005 , 2.436898 , 2.691515 , 1.8595737 , 1.564193 , 1.7136767 , 1.7379292 , 1.4573966 , 1.6932005 , 2.436898 , 2.691515 , 1.8595737 , 1.564193 , 1.7136767 , 1.7379292 , 1.4573966 , 1.6932005 , 2.436898 , 2.691515 , 1.8595737 , 1.564193 , 1.7136767 , 1.7379292 , 1.4573966 , 1.6932005 , 2.436898 , 2.691515 , 1.8595737 , 1.564193 , 1.7136767 , 1.7379292 , 1.4573966 , 1.6932005 , 2.436898 , 2.691515 , 1.8595737 , 1.564193 , 1.7136767 , 1.7379292 , 1.4573966 , 1.6932005 , 2.436898 , 2.691515 , 1.8595737 , 1.564193 , 1.7136767 , 1.7379292 , 1.4573966 , 1.6932005 , 2.436898 , 2.691515 , 1.8595737 , 1.564193 , 1.7136767 , 1.7379292 , 1.4573966 , 1.6932005 , 2.436898 , 2.691515 , 1.8595737 , 1.564193 , 1.7136767 , 1.7379292 , 1.4573966 , 1.6932005 , 2.436898 , 2.691515 , 1.8595737 , 1.564193 , 1.7136767 , 1.7379292 , 1.4573966 , 1.6932005 , 2.436898 , 2.691515 , 1.8595737 , 1.564193 , 1.7136767 , 1.7379292 , 1.4573966 , 1.6932005 , 2.436898 , 2.691515 , 1.8595737 , 1.564193 , 1.7136767 , 1.7379292 , 1.4573966 , 1.6932005 , 2.436898 , 2.691515 , 1.8595737 , 1.564193 , 1.7136767 , 1.7379292 , 1.4573966 , 1.6932005 , 2.436898 , 2.691515 , 1.8595737 , 1.564193 , 1.7136767 , 1.7379292 ], dtype=float32)
z[20001,:] array([0.54128075, 4.9861674 , 4.792145 , 4.792145 , 5.418096 , 5.0981593 , 5.0981593 , 5.5168014 , 5.614249 , 5.0981593 , 5.5168014 , 5.614249 , 5.0981593 , 5.5168014 , 5.614249 , 5.0981593 , 5.5168014 , 5.614249 , 5.0981593 , 5.5168014 , 5.614249 , 5.0981593 , 5.5168014 , 5.614249 , 5.0981593 , 5.5168014 , 5.614249 , 5.614249 , 6.377794 , 7.376649 , 7.359643 , 6.0937138 , 5.1985803 , 4.551212 , 4.9639645 , 5.614249 , 6.377794 , 7.376649 , 7.359643 , 6.0937138 , 5.1985803 , 4.551212 , 4.9639645 , 5.614249 , 6.377794 , 7.376649 , 7.359643 , 6.0937138 , 5.1985803 , 4.551212 , 4.9639645 , 5.614249 , 6.377794 , 7.376649 , 7.359643 , 6.0937138 , 5.1985803 , 4.551212 , 4.9639645 , 5.614249 , 6.377794 , 7.376649 , 7.359643 , 6.0937138 , 5.1985803 , 4.551212 , 4.9639645 , 5.614249 , 6.377794 , 7.376649 , 7.359643 , 6.0937138 , 5.1985803 , 4.551212 , 4.9639645 , 5.614249 , 6.377794 , 7.376649 , 7.359643 , 6.0937138 ,
5.1985803, 4.551212, 4.9639645, 5.614249, 6.377794, 7.376649, 7.359643, 6.0937138, 5.1985803, 4.551212, 4.9639645, 5.614249, 6.377794, 7.376649, 7.359643, 6.0937138, 5.1985803, 4.551212, 4.9639645, 5.614249, 6.377794, 7.376649, 7.359643, 6.0937138, 5.1985803, 4.551212, 4.9639645, 5.614249, 6.377794, 7.376649, 7.359643, 6.0937138, 5.1985803, 4.551212, 4.9639645, 5.614249, 6.377794, 7.376649, 7.359643, 6.0937138, 5.1985803, 4.551212, 4.9639645, 5.614249, 6.377794, 7.376649, 7.359643, 6.0937138, 5.1985803, 4.551212, 4.9639645, 5.614249, 6.377794, 7.376649, 7.359643, 6.0937138, 5.1985803, 4.551212, 4.9639645, 5.614249, 6.377794, 7.376649, 7.359643, 6.0937138, 5.1985803, 4.551212, 4.9639645], dtype=float32)
I simplified the 75 ACCESS levels to 17 layers. For the Lagrangian trajectories, I plot the number of particles present in each 0.1 degree grid box, which is the numerator of the probability density function. For the passive tracer, I plot the ACCESS output directly, i.e. the concentration in the model's default unit. Plots for one time but different depths are packaged into one .pdf file.
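The per-box particle counts can be computed with a 2-D histogram per depth layer, e.g. as in the sketch below (positions and domain limits are placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
lon = rng.uniform(-60.0, 0.0, 20000)    # placeholder particle longitudes in one depth layer
lat = rng.uniform(-78.0, -60.0, 20000)  # placeholder particle latitudes in the same layer

lon_edges = np.arange(-60.0, 0.0 + 0.1, 0.1)   # 0.1 degree bins
lat_edges = np.arange(-78.0, -60.0 + 0.1, 0.1)
counts, _, _ = np.histogram2d(lon, lat, bins=[lon_edges, lat_edges])
# counts = number of particles per 0.1 degree box (the numerator of the PDF);
# dividing by the total number of particles gives the PDF itself.
```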
File naming rules:
D87 = daily velocity, start at 1987-01-16
M87 = monthly velocity, start at 1987-01-16
M58 = monthly velocity, start at 1958-01-16
MLS = mixed layer shuffling on (wmax = 0.01 m/s)

D_87_Time_len=0_yrs_6mths.pdf
D_87_Time_len=2_yrs_0mths.pdf
D_87_Time_len=5_yrs_0mths.pdf
D87_MLS_Time_len=0_yrs_6mths.pdf
M58_MLS_Time_len=2_yrs_0mths.pdf
M58_MLS_Time_len=5_yrs_0mths.pdf
M58_Time_len=0_yrs_6mths.pdf
M58_Time_len=2_yrs_0mths_Depth.pdf
M87_Time_len=2_yrs_0mths.pdf
Weddell_TRACER_time_from_58=0_yrs_6mths.pdf
Weddell_TRACER_time_from_58=2_yrs_0mths.pdf
Weddell_TRACER_time_from_58=5_yrs_0mths.pdf
Hi Yinghuan, just looking at this latest set of plots, and wondering why the daily 5-year particles (D_87_Time_len=5_yrs_0mths.pdf) seem not to have any particles outside the release area below the top few tens of metres? Is there an error in the plots?
Normalised PDF makes sense to me, as in idea 2. Is there a reason why you can't repeat the particle release every day so that there is always more input at the source (like the tracer is added continually at the source), so they are more comparable? Is this too slow computationally, or am I missing something else?
For idea 3 I'm not sure I quite understand how Δ(tracer concentration) is equivalent to the final particle positions? Can you explain this more? Or perhaps, if you think it's a good idea, you can try it out and then demonstrate how they are comparable, as I haven't been thinking about this problem in as much detail as you have.
> Hi Yinghuan, just looking at this latest set of plots, and wondering why the daily 5-year particles (D_87_Time_len=5_yrs_0mths.pdf) seem not to have any particles outside the release area below the top few tens of metres? Is there an error in the plots?
I think the whole experiment with the daily velocity field failed for some unknown reason. I am now trying to find the cause. In last week's meeting we speculated that it may be caused by a wrong velocity scaling or the time step.
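One quick check of the velocity-scaling hypothesis could look like this (path and variable name are placeholders):

```python
import numpy as np
import xarray as xr

u = xr.open_dataset('ocean_daily_u.nc')['u']   # placeholder path/variable for the daily zonal velocity
print(u.attrs.get('units'))                    # expect 'm/s'; 'cm/s' would explain a factor-100 error
print(float(np.abs(u.isel(time=0)).max()))     # maximum speed at one time should be O(1) m/s
```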
> Normalised PDF makes sense to me, as in idea 2. Is there a reason why you can't repeat the particle release every day so that there is always more input at the source (like the tracer is added continually at the source), so they are more comparable? Is this too slow computationally, or am I missing something else?
Actually, I can repeat the particle release every day. However, it would make the experiment more complex (if I seed particles at different times in separate parallel experiments) or computationally slower (if I release particles repeatedly within one experiment). Most importantly, I do not think repeated seeding is necessary: Solodoch et al., 2022 keep injecting dye because it is an easy way to highlight the AABW pathways, whereas our Lagrangian tool already gives us the whole trajectory, so I do not think it is worth spending the computational resources on this.
> For idea 3 I'm not sure I quite understand how Δ(tracer concentration) is equivalent to the final particle positions? Can you explain this more? Or perhaps, if you think it's a good idea, you can try it out and then demonstrate how they are comparable, as I haven't been thinking about this problem in as much detail as you have.
Yes, this is a more involved method. The aim is to get the extending/spreading part of the tracer over a specific time interval (e.g. 1960-01-01 to 1960-02-01), and the trajectories over the same interval. In such a comparison the concentration values do not matter and we only compare the shape of the spreading.
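A small sketch of the Δ(tracer concentration) field for idea 3, using xarray (path, variable name and dates are placeholders):

```python
import xarray as xr

dye = xr.open_dataset('ocean_month_tracers.nc')['passive_weddell']   # placeholder path/variable
c1 = dye.sel(time='1960-01').values.squeeze()    # concentration at t1
c2 = dye.sel(time='1960-02').values.squeeze()    # concentration at t2
delta = c2 - c1                                  # the spreading/extending part over the interval
# delta > 0 marks cells the tracer has newly reached or grown into, to be compared (in shape)
# with the particle positions occupied over the same interval.
```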
At the same time, I have another new idea: keep seeding particles regularly, but not actually in Parcels; instead, superpose our current particles. Specifically, we first have the trajectory for the full duration. Then we 'add' the trajectory of (full duration minus the last 5 days), then (full duration minus the last 10 days), and so on. If we assume the internal variability is negligible (we compared 1958 and 1987 starts and there is no obvious difference), this method gives a similar result while avoiding massive Parcels runs. I will try this and hope to show you a result at the Friday meeting. A small sketch of the superposition is below.
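The sketch, assuming trajectory arrays of shape (n_particles, n_times) with one column every 5 days (names are placeholders):

```python
import numpy as np

def superpose_releases(lon, lat, depth):
    """Overlay every 5-day snapshot of the single release. Assuming negligible internal
    variability, snapshot k of the single release has the same distribution as the final
    positions of a release made k*5 days before the end, so stacking all snapshots
    approximates the final distribution of a release repeated every 5 days."""
    return lon.ravel(), lat.ravel(), depth.ravel()   # stacked positions from all output times
```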
These 2 files are what I showed in the meeting:
Weddell_TRACER_time_from_58=30_yrs_0mths.pdf
Del_uppermost_M58_MLS_Time_len=25_yrs_0mths.pdf
This is the Passive tracer concentration map but with a 0-1 colour bar: Weddell_TRACER_time_from_58=30_yrs_0mths_narrow_cbar.pdf
However, please note that the number of model levels merged into each map is not always 1; from the uppermost map to the bottom map it is: levels = [1, 4, 3, 5, 5, 4, 3, 3, 3, 4, 3, 2, 3, 4, 8, 6, 14]
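For reference, a sketch of how those level counts map onto the 75 st_ocean levels (path/variable name are placeholders, and whether each map is a mean or a sum over its levels is an assumption here):

```python
import numpy as np
import xarray as xr

tracer = xr.open_dataset('ocean_month_tracers.nc')['passive_weddell']   # placeholder path/variable
levels = [1, 4, 3, 5, 5, 4, 3, 3, 3, 4, 3, 2, 3, 4, 8, 6, 14]           # model levels per plotted map
edges = np.concatenate(([0], np.cumsum(levels)))                        # slice boundaries into the 75 levels
layer_maps = [tracer.isel(st_ocean=slice(i0, i1)).mean('st_ocean')
              for i0, i1 in zip(edges[:-1], edges[1:])]                  # 17 maps, uppermost to bottom
```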
1. Solodoch et al., 2022 say: "Each AABW tracer is linearly restored to a value of one in the surface grid cell within its corresponding Antarctic shelf region and is destroyed at surface grid cells outside of that shelf region (with time scales of 1,000 s, and 1 day, respectively)". From this description we know:
a) Within its corresponding Antarctic shelf region, the tracer is held at a value of one in the surface grid cell (i.e. the uppermost layer of the ACCESS model). The 1,000 s linear restoring time scale can be ignored, since in our comparison experiment the advection time step is 5,400 s (1.5 h) and the particle sampling frequency is 432,000 s (5 d), both much longer than 1,000 s. As an approximation, we can treat the surface grid cells inside the shelf region as a constant value-one pool of tracer, and the surface grid cells outside the shelf region as zero.
b) The tracer outside the shelf region in the uppermost layer is set to zero every day. This is longer than the advection time step but shorter than our particle sampling frequency.
2. Based on this, the idealized Lagrangian comparison experiment should have:
a) a very high seeding frequency, to keep the particle density in the uppermost layer at a constant level (which we can treat as 'one');
b) a mechanism to terminate a particle trajectory immediately, or at high frequency, once it is in the uppermost layer and outside the shelf region.
3. In the passive tracer concentration fields of Solodoch et al., 2022, the surface-layer concentration is removed to show the AABW pathway from the shelf region to the abyssal ocean. Because the surface concentration outside the shelf region is destroyed, the only route for tracer (and particles) is along the shelf into the abyssal ocean, so seeding particles in different months should not affect the AABW pathway pattern. Besides, the interannual variability in this model is limited (monthly experiments started in 1958 and 1987 return similar patterns). Together, this means we can use trajectories seeded at one moment to approximate seedings at other times.
4. Based on all of the above, for the .pdf files above I applied the following algorithm to the 1958-seeded 25-year experiment (a numpy sketch of step (a) follows the limitations below):
a) Delete the rest of a particle's trajectory once it is outside the shelf region and in the uppermost layer.
b) Overlay the particle positions every five days over the whole experiment period. Logically this equals the final particle distribution obtained by seeding particles every 5 days throughout the experiment, and this 5-day re-release approximates the constant value-one passive tracer concentration near the surface.
c) Limitations:
i. Logically, the tracer concentration cannot be higher than one, its initial value at the source: since concentration always diffuses from higher to lower values, the source region always has the highest concentration. In the Lagrangian experiments, however, we can see higher values downstream than in the seeding region in the probability density function field. If we consider each particle as an incompressible (Boussinesq) seawater parcel, we will (or may?) sometimes find that the equivalent density/concentration from particles is higher than at the source. My current understanding is that this is because there is no repulsive force between particles and their trajectories can cross each other, whereas for a passive tracer two water parcels cannot 'cross' and push the concentration above one at any moment.
ii. The 5-day re-release described above assumes that, to maintain the concentration of one, the experiment of Solodoch et al., 2022 injects the same amount of dye at the same time step. This assumption may be incorrect and introduce a bias. Logically, this should not affect the comparison of the spreading pattern, but it does affect the comparison of values.
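A minimal numpy sketch of step (a), assuming trajectory arrays of shape (n_particles, n_times) and a boolean on_shelf array of the same shape (how the shelf region and the uppermost-layer depth are defined are placeholders):

```python
import numpy as np

def keep_until_surface_exit(depth, on_shelf, top_layer_depth=1.1):
    """Return a boolean mask of the positions to keep for each particle."""
    # 'Destroyed' condition from the tracer setup: in the uppermost layer AND outside the shelf.
    destroyed = (depth <= top_layer_depth) & (~on_shelf)
    # First output index at which each particle is destroyed (n_times if it never is).
    first = np.where(destroyed.any(axis=1), destroyed.argmax(axis=1), depth.shape[1])
    # Keep positions strictly before that index; the kept positions are then overlaid every
    # 5 days as in step (b) to emulate the constant surface source.
    return np.arange(depth.shape[1])[None, :] < first[:, None]
```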
This is based on the experiment with the monthly velocity field starting from 1958.
These are the configurations/settings for the sensitivity test of the performance of Lagrangian trajectories in highly diffusive regions (e.g. within the mixed layer/euphotic zone). The accompanying issue is https://github.com/Midway-X/Backward_AABW/issues/5