fsciortino / Aurora

Modern toolbox for impurity transport, neutrals and radiation modeling in magnetically-confined plasmas
https://aurora-fusion.readthedocs.io
MIT License

Issue #85 line shapes #88

Open cjperks7 opened 1 year ago

cjperks7 commented 1 year ago

What's New

Using Doppler Broadening

Using Instrumental Broadening

Minimum working code (simulating He-like Kr):

import aurora
import os

# Inputs
adf15_file = os.path.join(
    '/home/cjperks/tofu_sparc/atomic_data/ADAS_PEC_files',
    'Krypton',
    'fs#0.50A_8.50A#kr34.dat'
    )
Te_eV = Ti_eV = 3e3
ne_cm3 = 1e14

dbroad = {
    'Doppler':{
        'Ti_eV': Ti_eV, # [eV]
        'ion_A': 83.8, # [amu]; Kr
        },
    'Instrumental':{
        'width_A': 11e-3 * (2.05e-4 * 290/3.84), # [AA]; spec_rng * (rc_FWHM * D_cd / w_d)
        },
    }

out = aurora.get_local_spectrum(
    adf15_file,
    ne_cm3,
    Te_eV,
    ion_exc_rec_dens=[0,1,0],
    Ti_eV=Ti_eV,
    n0_cm3=0.0,
    dlam_A=0.0,
    plot_spec_tot=False,
    dbroad=dbroad,
    )

Using Natural Broadening

Minimum working code (simulating He-like Kr):

import aurora
import os

# Inputs
adf15_file = os.path.join(
    '/home/cjperks/tofu_sparc/atomic_data/ADAS_PEC_files',
    'Krypton',
    'fs#0.50A_8.50A#kr34.dat'
    )
Te_eV = Ti_eV = 3e3
ne_cm3 = 1e14

dbroad = {
    'Doppler':{
        'Ti_eV': Ti_eV, # [eV]
        'ion_A': 83.8, # [amu]; Kr
        },
    'Natural':{
        '0.9454': 1.529e15, # [1/s]; w-line so the largest Einstein coefficient
        },
    }

out = aurora.get_local_spectrum(
    adf15_file,
    ne_cm3,
    Te_eV,
    ion_exc_rec_dens=[0,1,0],
    Ti_eV=Ti_eV,
    n0_cm3=0.0,
    dlam_A=0.0,
    plot_spec_tot=False,
    dbroad=dbroad,
    )

Using Suprathermal Ion

Minimum working code (simulating He-like Kr):

import aurora
import os

# Inputs
adf15_file = os.path.join(
    '/home/cjperks/tofu_sparc/atomic_data/ADAS_PEC_files',
    'Krypton',
    'fs#0.50A_8.50A#kr34.dat'
    )
Te_eV = Ti_eV = 3e3
ne_cm3 = 1e14 

dbroad = {
    'Suprathermal_Ions':{
        'model': 'Bi-Maxwellian',
        'Ti_eV': Ti_eV, # slow population ion temp
        'ion_A':  83.8,
        'Ti_fast_eV': 100e3, # fast population ion temp
        'f_fast': 1/2, # fraction of ions in fast population
        }
    }
out = aurora.get_local_spectrum(
    adf15_file,
    ne_cm3,
    Te_eV,
    ion_exc_rec_dens=[0,1,0],
    Ti_eV=Ti_eV,
    n0_cm3=0.0,
    dlam_A=0.0,
    plot_spec_tot=False,
    dbroad=dbroad,
    )
cjperks7 commented 1 year ago

@fsciortino Let me know if you observe any bugs or ways to improve

cjperks7 commented 1 year ago

Checks are failing because they can't find numpy?

odstrcilt commented 1 year ago

@cjperks7 I have the same issue with the regression tests :( It is likely caused by a mismatch between the Python version used to install numpy and the actual Python interpreter. It might be possible to fix it by changing the meta.yaml file, but I have not succeeded yet. So if you have any ideas, please try them.

odstrcilt commented 1 year ago

One idea for your pull request: if the instrumental function is not Gaussian, it is usually described as a sum of Gaussians. Then you can use the same trick, because the convolution of Gaussians is again just a Gaussian.

The convolution of a Gaussian and a Lorentzian distribution is the Voigt distribution (https://en.wikipedia.org/wiki/Voigt_profile). This can be well approximated by the pseudo-Voigt distribution, which is just a weighted sum of Gaussian and Lorentzian profiles.

In conclusion, if this is used for synthetic diagnostics in Bayesian inference, where the computation time matters, all convolutions can be calculated analytically.
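These analytic shortcuts are easy to check numerically. A minimal sketch (not Aurora code; the grid and widths are made up) showing that the convolution of two Gaussians is a Gaussian whose variances add, and that a Gaussian convolved with a Lorentzian is the Voigt profile available as `scipy.special.voigt_profile`:

```python
import numpy as np
from scipy.signal import fftconvolve
from scipy.special import voigt_profile

def gaussian(x, sigma):
    # Unit-area Gaussian
    return np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

x = np.linspace(-5e-3, 5e-3, 2001)   # [AA] wavelength grid about line center
dx = x[1] - x[0]
sig1, sig2 = 0.5e-3, 0.8e-3          # [AA] e.g. Doppler and instrumental widths (made up)

# Numerical convolution of two Gaussians ...
num = fftconvolve(gaussian(x, sig1), gaussian(x, sig2), mode='same') * dx
# ... equals a single Gaussian whose variances add in quadrature
ana = gaussian(x, np.hypot(sig1, sig2))
rel_err = np.max(np.abs(num - ana)) / np.max(ana)   # tiny

# Gaussian (sigma) convolved with a Lorentzian (HWHM gamma) is the Voigt profile
gamma = 0.3e-3                        # [AA] Lorentzian HWHM (made up)
voigt = voigt_profile(x, np.hypot(sig1, sig2), gamma)
```

This is why a sum-of-Gaussians instrumental function is convenient: each term convolves analytically with the Doppler Gaussian.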

cjperks7 commented 1 year ago

One idea for your pull request. If the instrumental function is not Gaussian, it is usually described as the sum of Gaussians. Then you can use the same trick, because the convolution of Gaussian is again just a Gaussian.

@odstrcilt Thanks for the feedback. It should be easy to give the user the choice to model instrumental broadening with any number of Gaussians, per whatever they find best fits high-quality ray tracing.

To get to your point on Bayesian optimization, that wasn't actually my intention. I've documented my reasoning for these various extensions under my initial Issue description #85

I specifically want to use this for inferring ion temperature from high-resolution spectroscopy, as a trick you can use in your tomographic inversions very naturally handles the convolution of an arbitrary number of Gaussians representing your instrumental broadening alongside the typical Doppler profile, given you know their characteristic widths (ref: Section 3 in M.L. Reinke et al., RSI 83, 113504 (2012)).

You have touched on a sort of philosophic debate that's been going on in my head as I work on ImpRad. I've been mainly working with a high-resolution spectrometer (XICS), but with lines integrated over the broadening profile to ignore the specific details of how the photons are redistributed. I once had a conversation with Francesco where he said he was initially doing this, though later got pressed into doing Bayesian inference on the whole spectrum. I recall he said that it was a worthwhile endeavor, but now that I know better I feel as though I need to question the merit of that.

To me, it seems like doing full-spectrum Bayesian inference only compounds more uncertainties from 1) your ion temperature profile, 2) your knowledge of the instrumental broadening (and therefore the diagnostic geometry), and 3) the quality of your atomic data for a larger number of exotic transitions. Sure, you get constraints on other charge states via satellite lines, but I feel I can get by just fine using low-resolution VUV spectroscopy. So it seems like more effort than it's worth to me.

Hence why I didn't worry about speed and instead checked that my numerical method matches simple cases I can derive analytically.

What do you think? Do you or @fsciortino find utility in doing full spectrum Bayesian inferences? Do you personally use it?

odstrcilt commented 1 year ago

If you care only about the number of photons, you can sum the intensity of all pixels measuring your line; the line shape does not matter. The reason why we fit the peak rather than summing the pixels is mainly to get a better signal-to-noise ratio. Also, even if the fitting function is not perfect, it will be somewhere higher and somewhere lower, so on average it will still give you a very good estimate of the total integral. But if you also care about the velocity and temperature, proper fitting is essential. Matt Reinke used tomography of the first three moments of the peak. Although this is simple, robust, and fast, it is not the best possible solution in a maximum-posterior sense. Francesco and Norman developed a Bayesian algorithm to calculate the moments more accurately. But the ideal approach is to describe these profiles by splines (or Gaussian processes...), develop a synthetic diagnostic which calculates the spectra, and fit the measured spectra with the model. I was asked to develop a synthetic diagnostic for integrated data analysis of the ITER XICS spectrometers, so I will try to use this approach.
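For context, the moments idea can be sketched in a few lines (purely illustrative, with made-up numbers; this is not Reinke's or BSFC's actual implementation): the zeroth moment of the measured peak gives the brightness, the first the line shift (rotation velocity), and the second the width (ion temperature).

```python
import numpy as np

# Synthetic measured line: a Gaussian with known shift and width (made-up numbers)
lam = np.linspace(2.70, 2.76, 500)         # [AA] pixel wavelength grid
lam0, dlam, sigma = 2.73, 2e-4, 1.5e-3     # rest wavelength, Doppler shift, width [AA]
spec = 1e3 * np.exp(-(lam - lam0 - dlam)**2 / (2 * sigma**2))
dl = lam[1] - lam[0]

# First three moments of the measured peak
M0 = np.sum(spec) * dl                               # ~ brightness (line integral)
M1 = np.sum(spec * (lam - lam0)) * dl / M0           # ~ shift -> rotation velocity
M2 = np.sum(spec * (lam - lam0 - M1)**2) * dl / M0   # ~ variance -> ion temperature

# M1 recovers dlam, and sqrt(M2) recovers sigma
```

Inverting profiles of these three moments is what makes the approach fast compared with fitting the full spectrum.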

cjperks7 commented 1 year ago

But the ideal approach is to describe these profiles by splines (or Gaussian processes...), develop a synthetic diagnostic which calculates the spectra, and fit the measured spectra with the model. I was asked to develop a synthetic diagnostic for integrated data analysis of the ITER XICS spectrometers, so I will try to use this approach.

Interesting, we're having similar conversations about analyzing real data for SPARC. Admittedly I've elected to go with the moments method because 1) like you said, it's easy to do and 2) I'm used to it from making CMOD Ti profiles. It's been working fairly well for a bunch of performance scoping I've been doing with fake density/temp profiles. But we do expect to explore the Bayesian approach once we start dealing with real data. I have Francesco/Norman's BSFC code, but haven't really tried using it yet.

I'm still curious as to whether you still need to include instrumental broadening via this model in Aurora as part of your Bayesian analysis. I would imagine you could create a high-quality synthetic diagnostic and calculate a transfer matrix (local emissivity to counts) via volume-of-sight integration, hence automatically convoluting instrumental broadening, that you only have to compute once and can then save? Or are you just sticking with line-of-sight integration that you run during the analysis?

odstrcilt commented 1 year ago

It's been working fairly well for a bunch of performance scoping I've been doing with fake density/temp profiles.

It is possible that the moment approach will be better, but I would like to try the direct approach.

To make it fast, I plan to calculate a transfer matrix between the 2D pixel grid and local spectra on the outer midplane grid. The advantage is that this matrix does not depend on the plasma shape, so it can be precalculated before the inference. So the steps are:

1) From impurity density, calculate local emissivity (or skip this step and reconstruct the emissivity directly)
2) Map all profiles (emissivity, Ti, omega) from rho to Rmidplane
3) Evaluate local spectra, including only the Doppler broadening (and Zeeman splitting?). Stark and natural broadening should be completely negligible.
4) Either calculate the convolution with the instrumental broadening function analytically now, or numerically later
5) Multiply the spectra by the transfer matrix
6) Apply instrumental broadening if it was not done yet
7) In the ideal case, compare with the measured spectra. In the real case, apply wavelength, intensity, and dispersion calibrations.

Although the calibrations could already be included in the transfer matrix... The convolution with the instrumental function can also be in the transfer matrix. The transfer matrix should describe how a narrow line at a single R location contributes to a single pixel in the 2D array of the detector. This is a linear map; it can be described as a matrix.
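The transfer-matrix step above is just a matrix-vector product; a minimal sketch with hypothetical grid sizes (none of these names, shapes, or values come from Aurora or the ITER design):

```python
import numpy as np

rng = np.random.default_rng(0)
nR, nlam, npix = 20, 100, 50   # hypothetical midplane points, wavelengths, pixels

# Local spectra on the outer-midplane grid, flattened into one vector
local_spec = rng.random(nR * nlam)          # [ph/s/cm^3/AA], placeholder values

# Geometry-only transfer matrix: response of each detector pixel to a narrow
# line at each (R, lambda) node. It does not depend on the plasma profiles,
# so it can be precomputed once and saved.
T = rng.random((npix, nR * nlam)) * 1e-6    # [cm^3 * AA], placeholder values

counts = T @ local_spec   # predicted signal per pixel -- one matrix multiply
```

Because the map is linear, instrumental broadening and calibrations can be folded into `T` itself, as discussed above.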

What do you mean by "volume-of-sight integration, hence automatically convoluting instrumental broadening"? What dominates instrumental broadening in XICS? Can you measure the instrumental broadening with some calibration source? For example, the instrumental broadening in a visible-light CER system is dominated by the optics and slit width. It must be measured.

cjperks7 commented 1 year ago

By "volume-of-sight integration, hence automatically convoluting instrumental broadening", I mean pretty much what you described for calculating your transfer matrix. Particularly, I've done ray-tracing to account for finite beam width as well as the distribution of wavelengths a single pixel is sensitive to. The latter effect builds in instrumental broadening. My transfer matrix is basically in units [m^3*AA] to yield a photon flux per pixel.

For natural broadening, it can/cannot matter depending on 1) the impurity you're looking at, 2) the transition you're looking at, and 3) your ion temperature. For instance, in SPARC we're relying on three sets of von Hamos spectrometers: one to image Ne-like Xe (particularly the 3D line), one to image He-like Kr (particularly either the w or z line), and another focused on just tracking some metals (W, Fe, Ni, Cu). The Einstein coefficient (setting the Lorentzian width) scales ~Z^4, so it's already important to pay attention to natural broadening in a fusion power plant where you can't rely on the typical Ar.

A quick scoping comparing the FWHM of a Doppler Gaussian and a natural Lorentzian showed me that for low-temperature operation they can be comparable for the Ne-like Xe 3D line and the He-like Kr w line. It's completely negligible for the He-like Kr z line (but that's about half as bright as the w line...). Hence why I started this Issue: so I could include it and quantify the error in my reconstructions.
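As a back-of-envelope version of that comparison, using the standard FWHM formulas (Doppler: dlam/lam = sqrt(8 ln2 kTi / mc^2); natural: dlam = lam^2 A / (2 pi c)) and the He-like Kr w-line numbers quoted earlier in this thread (this is only a sketch, not the Aurora implementation):

```python
import numpy as np

c_cm = 2.998e10             # speed of light [cm/s]
lam0 = 0.9454               # [AA] He-like Kr w line (value quoted above)
A21 = 1.529e15              # [1/s] Einstein coefficient (value quoted above)
Ti_eV = 3e3                 # ion temperature [eV]
mc2_eV = 83.8 * 931.494e6   # Kr rest energy [eV]

# Doppler (Gaussian) FWHM: dlam/lam = sqrt(8 ln2 * Ti / (m c^2))
fwhm_dopp = lam0 * np.sqrt(8 * np.log(2) * Ti_eV / mc2_eV)    # [AA]

# Natural (Lorentzian) FWHM: dlam = lam^2 * A / (2 pi c), lam converted to cm
fwhm_nat = (lam0 * 1e-8)**2 * A21 / (2 * np.pi * c_cm) * 1e8  # [AA]

# At 3 keV the natural width is already a non-negligible fraction of the
# Doppler width, and the ratio grows as Ti drops (Doppler FWHM ~ sqrt(Ti))
ratio = fwhm_nat / fwhm_dopp
```

At lower Ti the Doppler FWHM shrinks as sqrt(Ti) while the natural width is fixed, which is why the two become comparable in low-temperature operation.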

(Screenshot 2023-09-21 at 4:52 PM)

For Zeeman splitting, I haven't really thought about it much. I asked John Rice once and he said you can completely ignore it for x-rays so haven't moved forward. If you have data to the contrary then I'd love to know.

For instrumental broadening in XICS, it's like you said, optics and slit width. But importantly also the rocking curve of the crystal, which you want wide to increase etendue, but not too wide. Particularly for SPARC, we've played some games over space-reservation conflicts, basically defocusing the von Hamos spectrometers, which has made our instrumental broadening non-uniform.

cjperks7 commented 1 year ago

Therefore my transfer matrix is really a discretized version of $\int_V dV \int_\lambda d\lambda$ for a local emissivity spectrum in units $[ph/s/cm^3/\AA]$

cjperks7 commented 1 year ago

And oh yeah, we plan to have at least one calibration source (money fights...). Probably then assume it applies the same to all the spectrometers. Another theoretical method we'll try is comparing the broadening of bright lines from different impurities (hence mass).

The details haven't been fleshed out at this stage. We're just post-PDR now.

odstrcilt commented 1 year ago

This is very interesting, I didn't know that the Einstein coefficients have such a steep Z dependence.

fsciortino commented 1 year ago

hey guys, interesting discussion -- sorry for not being so engaged. To reply to @cjperks7, I think that if you want to really do the highest-quality assessment of impurity transport coefficients, you need to avoid model inadequacies in all their possible forms. This is practically impossible, but one must try. Doing full-spectrum analysis helped me to understand what the systematic errors were and where my analysis was actually at fault. I found it educational, but also a lot of work. Depending on your objective, I would stay away from this level of detail... but it's not a trivial choice. It may even vary for different devices, where other lines may pop up and you may be underestimating them at first. For example, I remember that the K-alpha spectrum of Ar had some annoying Mo lines in the middle - I did extensive analysis to convince myself that it was all fine in my case. You cannot come to such conclusions if you don't look at the details. It's not as bad in the VUV, because there you have many more lines and full fitting is anyway a waste of time (you can only hope that no other lines are overlapping, based on a lot of experiments with different non-intrinsic species).

In ITER and SPARC, lots of W will be flying around. IIRC, the W spectrum for XICS is quite dangerous because of potential overlaps... I remember discussing this with John. Since @cjperks7 is working with him, I'm sure you discussed this extensively. Worth looping in @odstrcilt on these challenges, since he's working on the ITER XICS. The folks at WEST have seen some of these issues in experiment, I believe. W7-X also has some W experiments (W injections).

cjperks7 commented 1 year ago

@fsciortino that's a very good observation. Indeed, with the Ne-like Xe spectrum (useful for low-temp regions) we are finding that there's a line emitted from Al-like W nearly degenerate with the 3D line (the brightest) that we'd like to use for Ti measurements. The Te ranges over which these charge states are significantly populated only slightly overlap, at least, so it's part of the decision making on when to trust those channels over the He-like Kr (useful for high-temp regions) per the moments approach.

The He-like Kr spectrum has so far appeared clean of pollutants. The Li-like satellites are shifted around differently than in the K-alpha spectrum of Ar you mention, so they may/may not cause strife depending on whether we rely on the w line (which may/may not already have a lot of natural broadening error) or the z line (which is a little less bright, so signal-to-noise concerns). So that's another sort of moments-vs-Bayesian decision we have to make once we have real data. It's so convenient when you have fake data to turn off the troubling bits haha! Hence I've enjoyed the moments method for quickly scoping performance per diagnostic design.

I do hope to bring ImpRad to WEST one day (once they give me shot time....) to run new experiments for my ICRH-impurity pumpout project I have on CMOD data. ImpRad has been showing some very exciting results! So you are reminding me that I probably need to worry about full spectrum modeling there.... Especially because I want to focus on some of those polluting W lines.

@odstrcilt and yeah, I would find it very beneficial to keep a collaboration on this stuff open. I've had a few very interesting conversations with other people working on ITER XICS that I've reflected on and applied to my own work. A particular conversation I enjoyed was with Oleksandr Marchuk, where they pointed out an important population mechanism I had been missing in atomic data modeling we both were independently doing for a few Ne-like W lines. Which is helpful, because we both want to use that spectrum.

cjperks7 commented 8 months ago

To anyone interested, I swear I'll get to closing this out in a week or two, but adding to my to-do list

odstrcilt commented 7 months ago

Finally, I have time to work on the synthetic CIXS diagnostic for ITER. @cjperks7, would you recommend some literature for me to read?

So far, I have found these:

- The best publication about the CIXS diagnostic: Beiersdorfer, P., et al. "The ITER core imaging x-ray spectrometer." Journal of Physics B: Atomic, Molecular and Optical Physics 43.14 (2010): 144008.
- About the W atomic data: Beiersdorfer, Peter, Joel Clementson, and Ulyana I. Safronova. "Tungsten data for current and future uses in fusion and plasma science." Atoms 3.2 (2015): 260-272.
- About a microcalorimeter as an alternative to an x-ray spectrometer: Beiersdorfer, P., et al. "The ITER core imaging x-ray spectrometer: X-ray calorimeter performance." Review of Scientific Instruments 81.10 (2010).

Matt Reinke's RSI 2012 paper about the Alcator C-mod system was also very interesting.

cjperks7 commented 7 months ago

@odstrcilt yeah, sure I can. How about I follow up in an email since I imagine we could have quite a long conversation? Any other interested party with visibility on this PR speak now if you want to be CC-ed.

As a first go, what you list are the pretty big ones. I have some more on CIXS and microcalorimeters that I can share - more theory on the former, as for SPARC we were far into the design process before considering a microcalorimeter and ended up ruling it out based on available space.

For W atomic data, I should advertise I have EBIT experiments at LLNL coming up in the Spring to look at this for polluting a Ne-like Xe spectrometer that you'd probably want to take advantage of. I know ITER people (I think Novimir was PI) have other EBIT data I want as well.

Anyway, as a heads up, I'll be posting a report either this evening or tomorrow morning implementing your feedback and opening this PR back up for review after the last bits of debugging. I'll tag you and Francesco when I do.

cjperks7 commented 7 months ago

@odstrcilt @fsciortino Okay, this branch should be ready for another round of reviews. What's new:

Instrumental Broadening

Voigt profiles

import numpy as np
import aurora.line_broaden as lb

# Fake ADF15 transition wavelength mesh
wave_A = np.r_[2.72, 2.73]

# Fake plasma conditions
Ti_eV = 4e3
mi = 131.29
AA = 1e15

# Dictionary of broadening physics
dbrd_V = {
    'Voigt':{
        'Ti_eV': Ti_eV,   # [eV], ion temperature
        'ion_A': mi,      # [amu], ion mass
        #'key_options': 'wavelength',  # [AA] ADF15 transition wavelength
        #'2.72': AA,      # [1/s], fictitious Einstein coefficient
        #'2.73': AA,      # [1/s], fictitious Einstein coefficient
        'key_options': 'isel',  # ADF15 isel index
        '1': AA,          # [1/s], fictitious Einstein coefficient
        '2': AA,          # [1/s], fictitious Einstein coefficient
        },
    }

# Calculates profiles
out_pV = lb.get_line_broaden(
    dbroad=dbrd_V,
    wave_A=wave_A,
    use_scipy=False,
    use_pseudo=True,  # Use weighted-sum method
    )
out_V = lb.get_line_broaden(
    dbroad=dbrd_V,
    wave_A=wave_A,
    use_scipy=True,   # Use scipy built-in function
    use_pseudo=False,
    )


- Note that I added the option to relate the Einstein coefficient to the ADF15 transition by either its wavelength or its isel index
- Also note that if the issue with the weighted-sum method is not resolved, I may or may not delete it, but I'll definitely bury the control options to switch to it that you see above

2-photon emission

- Since I have collisional-radiative modeling going that calculates the population density for states that decay by 2-photon emission, I figured I might as well deal with that by including it in the spectrum modeling
- I have implemented two methods to do this: 1) using a published data table that ChiantiPy gives tools to interface with, and 2) using a published analytic formula like PyAtomDB does.
- Note that the published data table ChiantiPy uses gives distributions up to nuclear charge Z=92, but they arbitrarily cut the downloaded data table at Z=30. So I filled my own copy with the missing data up to Z=92
- Also note that the analytic fit used by PyAtomDB is independent of nuclear charge and does not differentiate between H-like and He-like. This is because as Z increases, the difference between the H-like and He-like curves disappears. But as Z increases the distribution (\Psi_0) becomes more strongly peaked at y=0.5
- ![Screenshot 2024-02-19 at 12:43 PM](https://github.com/fsciortino/Aurora/assets/71457911/39dfc675-38f5-4905-a4b4-ae13098b6c4d)
- Minimum working example:

# Modules
import numpy as np
import aurora.line_broaden as lb

# Fake ADF15 transition wavelength mesh
wave_A = np.r_[2.72, 2.73]

# Fake plasma conditions
Ti_eV = 4e3
mi = 131.29
AA = 1e15

# Dictionary of broadening physics
dbrd_V = {
    'Voigt':{
        'Ti_eV': Ti_eV,   # [eV], ion temperature
        'ion_A': mi,      # [amu], ion mass
        #'key_options': 'wavelength',  # Transition wavelength
        #'2.72': AA,      # [1/s], fictitious Einstein coefficient
        #'2.73': AA,      # [1/s], fictitious Einstein coefficient
        'key_options': 'isel',  # ADF15 isel indexing
        '1': AA,          # [1/s], fictitious Einstein coefficient
        '2': AA,          # [1/s], fictitious Einstein coefficient
        },
    '2-photon':{
        #'isel': [1],
        'wavelength': [2.72],  # minimum photon wavelength of the transition
        'Znuc': 30,            # nuclear charge
        'nele': 1,             # number of electrons (only H-like or He-like)
        },
    }

# Calculates profiles
out_V = lb.get_line_broaden(
    dbroad=dbrd_V,
    wave_A=wave_A,
    use_scipy=True,
    use_pseudo=False,
    )



- Note that, again, you can mark the ADF15 transition that is 2-photon by either its wavelength or its isel index
- Also note that it is assumed the physics of this transition is only 2-photon emission, as you would expect for the H-like sequence and for 1S0 in He-like. The He-like 3S1 state decays by magnetic dipole emission about 1e4 times more strongly than by 2-photon emission, consistently across nuclear charge, so it is ignored here; 3S1 also has a different trend with nuclear charge than is captured here. Because of this, though, the 2-photon distribution will override other physics options applied to the flagged transition
- A hanging thread I haven't resolved per my references is that I never see any thermal broadening included. I don't know why, or if there should be. Anyone else know? I know actual atomic physicists that I plan on asking.
     - Not a hanging thread anymore. There's a simple scale-separation argument. 2-photon emission happens via two single-photon emission events mediated by a virtual 2p state (hence 2E1 and such). That means each photon is independently isotropic, so each experiences a probability distribution of Doppler shifts on the order of \Delta\omega = k * v_th,i. The 2-photon emission curve is centered around \omega_0/2, so its FWHM is on the order of the photon frequency, \Delta\omega = k*c. So as long as you have non-relativistic ions, the Doppler broadening is effectively a delta function by comparison and can be ignored.
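A quick numeric check of that scale-separation argument (a sketch only, using the Xe mass and Ti from the fake plasma conditions above):

```python
import numpy as np

Ti_eV = 4e3                    # ion temperature [eV]
mc2_eV = 131.29 * 931.494e6    # Xe rest energy [eV] (mass used in the examples above)

# Fractional Doppler broadening of either emitted photon ~ v_th/c
dopp_frac = np.sqrt(2 * Ti_eV / mc2_eV)

# The 2-photon continuum itself has a fractional FWHM of order unity
# (it spans from 0 to omega_0, centered at omega_0/2), so a ~1e-4
# fractional Doppler smearing is effectively a delta function next to it
```

So for any non-relativistic ion population, thermal broadening of the 2-photon continuum is negligible, consistent with the argument above.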
odstrcilt commented 7 months ago

I have fixed the Pseudo-Voigt profile, now it agrees pretty well: (see attached screenshot)

cjperks7 commented 7 months ago

I have fixed the Pseudo-Voigt profile, now it agrees pretty well:

Oh sick, thanks! What was the issue? From what I understand reading your changes, I was supposed to evaluate the Gaussian/Lorentzian parts using the same, combined FWHM?

odstrcilt commented 7 months ago

Exactly, both need to use the same FWHM, called "f" on Wikipedia
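For reference, a sketch of that pseudo-Voigt construction, with the combined FWHM f and mixing parameter eta taken from the Wikipedia "Voigt profile" page linked earlier (the grid and widths here are made up; this is not the Aurora implementation):

```python
import numpy as np
from scipy.special import voigt_profile

def pseudo_voigt(x, fG, fL):
    # Combined FWHM f and mixing parameter eta (coefficients from the
    # pseudo-Voigt approximation on the Wikipedia "Voigt profile" page)
    f = (fG**5 + 2.69269*fG**4*fL + 2.42843*fG**3*fL**2
         + 4.47163*fG**2*fL**3 + 0.07842*fG*fL**4 + fL**5)**0.2
    eta = 1.36603*(fL/f) - 0.47719*(fL/f)**2 + 0.11116*(fL/f)**3
    # Both components are evaluated with the SAME combined FWHM f
    sig = f / (2*np.sqrt(2*np.log(2)))
    G = np.exp(-x**2/(2*sig**2)) / (sig*np.sqrt(2*np.pi))
    L = (f/2) / (np.pi*(x**2 + (f/2)**2))
    return eta*L + (1 - eta)*G

x = np.linspace(-5e-3, 5e-3, 2001)   # [AA] grid about line center
fG, fL = 1e-3, 4e-4                  # [AA] Gaussian and Lorentzian FWHMs (made up)

sig = fG / (2*np.sqrt(2*np.log(2)))
exact = voigt_profile(x, sig, fL/2)  # third argument is the Lorentzian HWHM
approx = pseudo_voigt(x, fG, fL)
rel_err = np.max(np.abs(exact - approx)) / np.max(exact)   # ~1% level
```

Using the individual FWHMs fG and fL for the two components (instead of the common f) is what made the earlier version disagree with the exact Voigt profile.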

cjperks7 commented 7 months ago

Okay, let me know if you see any more problems. If we're good, then before merging I'll make the pseudo-Voigt the default

odstrcilt commented 7 months ago

You don't have to make pseudo-Voigt the default. It is just an approximation, which is useful only if you want to evaluate the line shape really fast, for example for a fast synthetic diagnostic.