Closed: hwietfeldt closed this issue 4 months ago
What are the statuses of `_get_peaking_factors` and `_get_peaking_factors_no_tci`? It looks like the code for calculating `Te_PF` is already there, but `ne_PF` and `pressure_PF` haven't been done yet. Also, what's the difference between the "normal" and the `_no_tci` versions?
The code for calculating `Te_PF` is already there but is full of bugs on the `dev` branch (Python 3.10.13), returning NaNs. There doesn't seem to be a difference between `_get_peaking_factors` and `_get_peaking_factors_no_tci`, except that `_get_peaking_factors_no_tci` has fewer bugs.
I've fixed these bugs on my branch, so I'm now getting output for `Te_PF`, although I'm uncertain about a couple of interpolation steps. I also talked to @AlexSaperstein yesterday about using ECE instead of Thomson scattering to calculate Te peaking for much faster time resolution, which would be helpful for radiation events on C-Mod.
I guess what we can do now is to first compare your `Te_PF` outputs with the MATLAB routine/SQL database. Can you push your edits to a new branch?
Regarding using ECE, I think we can take a look at it first and see how the results compare with the current method using TS. After that we can decide whether to incorporate it as a new method or to add an argument to the current method. Also from my understanding ECE can only give you Te but not ne -- is that correct? If that's the case then I'm more in favor of adding it as a separate function. @gtrevisan
I pushed my changes onto the branch `hwietfeldt/Te_PF_fix`. There's still a bug where the `Te_PF` at the first EFIT time is a NaN. I believe this is due to the interpolation logic: upsampling z0, a, and kappa to the TS timescale, performing intermediate calculations, and then downsampling to the EFIT timescale using linear interpolation, which propagates a NaN to the first EFIT point.
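A minimal sketch of how that first-point NaN could arise (all timebases and values here are made up, and I'm assuming an `interp1d`-style call with `fill_value=np.nan`, not the exact disruption-py code):

```python
import numpy as np
from scipy.interpolate import interp1d

# Hypothetical timebases: the first EFIT time precedes the first TS time.
efit_time = np.array([0.00, 0.10, 0.20])   # s
z0 = np.array([0.01, 0.02, 0.01])          # magnetic-axis height on the EFIT timebase
ts_time = np.array([0.05, 0.12, 0.20])     # s, all inside the EFIT range

# Step 1: upsample z0 onto the TS timebase (all in range, so no NaNs yet).
z0_ts = interp1d(efit_time, z0, bounds_error=False, fill_value=np.nan)(ts_time)

# ... intermediate Te_PF calculations on the TS timebase would go here ...
te_pf_ts = z0_ts + 1.0   # stand-in for the real calculation

# Step 2: downsample back to the EFIT timebase. The first EFIT time (0.00 s)
# lies before the first TS time (0.05 s), so linear interpolation with
# fill_value=np.nan leaves a NaN at the very first EFIT point.
te_pf_efit = interp1d(ts_time, te_pf_ts, bounds_error=False, fill_value=np.nan)(efit_time)
print(te_pf_efit)   # first element is nan
```

If this is the mechanism, extrapolating (or masking the first point explicitly) would be the usual workaround.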
I also forgot to mention that I'm confused by the following lines of code in `_get_peaking_factors_no_tci()` on the `dev` branch:

```python
z_arr = np.linspace(z0[itimes[i]], TS_z_arr[-1], len(Ts_z_arr))
Te_arr = interp1(TS_z_arr, Te_arr, z_arr)
```
This seems to cut off all the points with z below z0[itimes[i]], i.e. the chords below the magnetic axis.
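A toy example of what I mean (made-up chord heights and temperatures, with `np.interp` standing in for `interp1`) shows the resampled grid starting at z0:

```python
import numpy as np

# Hypothetical TS chord heights (m) spanning both sides of the magnetic axis,
# with a made-up Te profile.
TS_z_arr = np.array([-0.20, -0.10, 0.00, 0.10, 0.20, 0.30])
Te_arr = np.array([1.0, 2.0, 3.0, 2.5, 1.5, 0.5])   # keV
z0 = 0.02   # magnetic-axis height at this time slice

# The dev-branch logic: the new grid runs from z0 up to the topmost chord.
z_arr = np.linspace(z0, TS_z_arr[-1], len(TS_z_arr))
Te_resampled = np.interp(z_arr, TS_z_arr, Te_arr)

# Every chord with z < z0 has been discarded from the resampled profile.
print(z_arr[0], TS_z_arr[0])   # 0.02 vs -0.2
```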
By the way, I found some references regarding TCI (two-color interferometer): https://www2.psfc.mit.edu/research/alcator/pubs/APS/aps2004/kirill.pdf https://library.psfc.mit.edu/catalog/reports/2010/13rr/13rr002/13rr002_full.pdf
This is the only line I found that references TCI, and it was commented out a long time ago. https://github.com/MIT-PSFC/disruption-py/blob/3137f1ade7efb960812fbaad2d9cbde2b9e4fee7/disruption_py/shots/parameter_methods/cmod/basic_parameter_methods.py#L1108
I'm trying to find the geometry of TS, and I found this. I guess the edge chords refer to those around the LCFS at the top, whereas the rest, including the three dots above the midplane, are the core chords.
One thing I'm not exactly sure about -- should we assume uniform and symmetric sampling when calculating any of the peaking factors? By oversampling around the LCFS, aren't we overemphasizing the contribution from the edge, and therefore lowering the denominator? I'd assume this doesn't matter as much when we analyze only a single machine; I just don't know if this would cause any problems when we try to compare data across multiple experiments. Intuitively it would make sense to use uniform sampling in flux coordinates, but that would require a lot more interpolation/fitting work.
To be fair, there are going to be many more things that could affect these analyses (e.g. machine geometry, boundary conditions, etc.), so this isn't the only thing that may make cross-machine comparison difficult.
I've been trying to run the updated `_get_peaking_factors_new` function this morning. I'll post an update on the debugging status later this afternoon.
Thanks for the geometry! Yeah, that's a good point about uniform sampling. It seems like there are two issues:
1) Can we compare these peaking factors across machines? I'd guess this will be tricky no matter what, as you mention.
2) Within C-Mod experiments, if some shots have missing channels, then the ratio of core to edge channels could change shot to shot, which would affect the denominator in peaking factor values. If the number of channels does vary significantly across run days or campaigns, then we might need to interpolate onto a fixed basis to compare values across shots.
If we're only interested in changes to Te_peaking within a shot for predicting or identifying disruptions, maybe there's no need to interpolate. If we do want to compare values across shots, maybe we could normalize in some way. Or maybe we could assume that the magnetic axis position and shaping are relatively uniform across shots and interpolate onto a fixed set of vertical coordinates, not worrying about interpolating in flux coordinate space.
In response to issue (2), the core-to-all and edge-to-all ratios should be computed by averaging over all the chords in both the numerator and the denominator, not summing. That way the number of chords shouldn't significantly affect the ratio.
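A toy illustration of that point (made-up chord values): sums scale directly with the number of working channels, while group means don't, as long as the surviving chords sample similar values:

```python
import numpy as np

# Hypothetical core-chord Te values (keV); one channel is missing on shot B.
core_a = np.array([3.1, 3.0, 2.9, 3.0])
core_b = np.array([3.1, 2.9, 3.0])   # same plasma, one dead channel

# Sums change by ~1/N when a channel drops out...
print(core_a.sum(), core_b.sum())    # 12.0 vs 9.0 -- a 25% change

# ...while the mean stays put as long as the surviving chords sample
# similar values, which is why averaging in both the numerator and the
# denominator of the peaking factor reduces sensitivity to channel count.
print(core_a.mean(), core_b.mean())  # 3.0 vs 3.0
```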
Since we already have the TS fit in `get_Ts_parameters()`, why not just take the peak-to-mean ratio of the Gaussian fit? Also, somewhat unrelated, but why is the Thomson Te fit performed in vertical coordinates and not flux coordinates?
You probably could, but I don't know if it'd actually be any simpler to implement. That method is also more sensitive to the Te profile remaining Gaussian, which it might often be to lowest order, but there could always be deviations from that.
As for why fit to z instead of psi, the chord position --> psi mapping is kinda complicated to calculate, and so has been avoided in the past
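For what it's worth, the peak-to-mean idea could be sketched like this (synthetic data and a hypothetical Gaussian model, not the actual `get_Ts_parameters()` fit):

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(z, a, z0, sigma):
    return a * np.exp(-((z - z0) ** 2) / (2 * sigma ** 2))

# Hypothetical TS chord data: a Gaussian Te profile plus a little noise.
rng = np.random.default_rng(0)
z = np.linspace(-0.25, 0.30, 12)              # chord heights (m)
te = gaussian(z, 3.0, 0.02, 0.12) + rng.normal(0, 0.02, z.size)

popt, _ = curve_fit(gaussian, z, te, p0=[te.max(), 0.0, 0.1])
a_fit, z0_fit, sigma_fit = popt

# Peak-to-mean ratio of the fitted profile over the chord span.
te_fit = gaussian(z, *popt)
peaking = a_fit / te_fit.mean()
print(peaking)
```

As noted above, this leans on the profile actually being Gaussian; a deviation from that shape would bias both the fitted peak and the mean.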
@yumouwei let's briefly talk again tomorrow. AFAIU:

- the `get_peaking_factors_no_tci` function should be removed, as it's the same as the main one,
- the `get_peaking_factors` function should be fixed in terms of temperature peaking,
- a `new` function could be made immediately comparable to the `get_peaking_factors` function.

Here's the shot with the mismatch error from `tests_against_sql.py`. The mismatch occurred at the very end of the ramp-down phase prior to a disruption. Note that all of the erroneous data points are located within the high-sampling-frequency period.
There are two issues here that we need to address:

1. Why does the disruption-py `Te_peaking` mismatch the SQL values over this period?
2. Is the `Te_peaking` behavior here realistic?

I don't have an answer for either of these points yet. For point 1, my intuition is that disruption-py is trying to interpolate Te (which has a much slower sampling rate compared to EFIT18 pre-disruption) over this period of time, while the actual change in Te happened much more abruptly. This could be similar to the issue with `Greenwald_fraction` we noticed last time.
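A toy sketch of that suspected mechanism (all times and values made up): linearly interpolating a slowly sampled Te onto a fast EFIT18 timebase smears an abrupt collapse across the whole sample gap:

```python
import numpy as np

# Hypothetical timebases: TS samples every 20 ms, EFIT18 every 1 ms.
ts_time = np.array([0.980, 1.000, 1.020])   # s
ts_te = np.array([2.0, 2.0, 0.2])           # keV; the real collapse is abrupt

efit_time = np.arange(1.000, 1.020, 0.001)
te_on_efit = np.interp(efit_time, ts_time, ts_te)

# Linear interpolation spreads the drop over the full 20 ms between TS
# samples, so intermediate points disagree with the (abrupt) reality.
print(te_on_efit[10])   # ~1.1 keV at 1.010 s, though the real Te may still be ~2.0
```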
In addition, this was the only shot that showed this mismatch among the 10 disruption-py testing shots. I didn't notice this in other similar ramp-down disruption shots.
I looked at shot 1150805015.
closed with:
The methods `get_peaking_factors()` and `_get_peaking_factors_no_tci()` have not been fully implemented. A Te peaking factor would be useful for radiative collapses. The bare-bones implementation already exists in `_get_peaking_factors_no_tci()`. I can flesh out this method for C-Mod based on Te peaking factors from Thomson scattering on DIII-D described in this paper: https://www.tandfonline.com/doi/full/10.1080/15361055.2020.1798589. Density and pressure peaking factors might also be worth implementing.