Open Didou09 opened 2 years ago
thanks @Didou09 for this nice overview! While I play with these methods, two first questions come to mind:
How can I define a 1D LOS simply based on start and end points? It seems that create_CamLOS1D
allows for LOS definitions using a pinhole and 3 angles. Is that what you would recommend one to use in this case?
Obvious follow-up: once I define LOS geometries and pre-computed the geometry matrix, how can I provide some measurements and run the tomography?
It isn't so clear to me what is being plotted in the last figure above (obtained by running mesh.plot_geometry_matrix()
). I'm looking more into it now, but it would be helpful if you could clarify here too, please.
@fsciortino , can we discuss now ?
In a nutshell:
# start_pts = (3, N) array of floats containing the (X, Y, Z) coordinates of points
# end_pts = (3, N) array of floats containing the (X, Y, Z) coordinates of points
# Derive the unit vectors for each LOS
vect = end_pts - start_pts
vect = vect / sqrt(np.sum(vect**2, axis=0))
# Provide start_pts and vect as a tuple under argument dgeom (dictionary of geometric paramaters) to CamLOS1D
# Give it a Name, Diag, Exp
# Don't forget to give the configuration too
conf = tf.load_config('AUG')
cam = tf.geom.CamLOS1D(dgeom=(start_pts, vect), config=conf, Name='Cam1', Diag='SXR', Exp='AUG')
@Didou09 I'm running into the problem that my LoS are defined as starting from a point below the divertor structure and therefore the mesh.add_mesh(crop_poly=conf, res=0.1, deg=1, key='mesh1')
crops the LoS in the wrong way -- see the image below.
I need to correct the cropping both for the incoming and outgoing part of each ray. I could try to crop this myself, but there's probably already something in ToFu to deal with this properly...?
There are at least 2 ways to do this:
1/ Once the camera is created, it holds the tokamak configuration as an attribute, you can modify that attribute to tell the camera that it should ignore one particular structure element when it computes the ray-tracing, and force the ray-tracing to be re-computed
Load the configuration and create a camera (either a pinhole camera using tf.geom.utils.create_CamLOS1D() or directly by specifying start_point and vect using tf.geom.CamLOS1D(), both will be united in a single routine in future versions):
In [1]: conf = tf.load_config('AUG')
In [2]: config.set_colors_random() # assign random colors to each element to spot easily the name of the one you're interested in
n [3]: config.plot()
Create the camera, observe that it is not as desired, then deactivate the PFC named D2cdomR
in the compuitation, and re-run the ray-tracing computation:
In [4]: cam = tf.geom.utils.create_CamLOS1D(pinhole=[1.48, 0, -1.15], orientation=[0,0,0], sensor_nb=10, sensor_size=0.05, focal=0.1, config=conf, Name='test', Diag='test', Exp='AUG')
In [5]: cam.plot()
In [6]: cam.config # config is now an attribute of cam, see what's inside, we want to set compute=False for D2cdomR
In [7]: cam.config.PFC.D2cdomR.set_compute(False)
In [7]: cam.config.PFC.D2cdomR.set_compute(False)
In [8]: cam.compute_dgeom() # re-compute dict of geometry parameters
In [9]: cam.plot() # double check
Then to compute a synthetic signal from an arbitrary emissivity function:
1/ define an emissivity function, it should take at least pts
(3, N) X, Y, Z point coordinates array as argument, and optionally t
(time vector).
t
is not provided, the function shall return a (N,) array (one value per pts)t
is provided, it shall return a (nt. N) array
2/ Feed it to cam.calc_signal()
, along with the desired spatial resolution res
(m) for LOS discretization, and optionally a time vector
3/ It will give the synthetic signal as a sig
array (nt, nchan), a units
str, and optionally plot the result in an interactive figureIn [57]: def emiss(pts, t=None):
...: R = np.hypot(pts[0, :], pts[1, :])
...: Z = pts[2, :]
...: e0 = np.exp(-(R-1.575)**2/0.1**2 - (Z+1.15)**2/0.01**2)
...: if t is not None:
...: e0 = e0[None, :] * (1. + 0.1*np.cos(t[:, None]))
...: return e0
...:
n [58]: sig, unit = cam.calc_signal(emiss, res=0.001, t=np.linspace(0,10,100))
(clic on axes to select time / channel)
@Didou09 thanks for these extra steps. This all works great! So, I suppose that the final steps to do a tomography are:
How can I do these final steps? Also, any advice on how to assess the quality of the tomographic reconstruction? (e.g. standard plots to look at a \chi^2
, or plots that compare residua, etc.)
Thanks!
Hi @fsciortino,
For your specific problem, I've made a few minor upgrade to tofu
to make your life a bit easier.
They are already available on the devel
branch and will be officially release in the next version (1.5.1
).
pip
/ conda
install and are in no hurry, wait for the release and upgrade when available (probably next week)In both cases, you can now:
'AUG-V0'
), boiling down to a single closed polygon representing the first wall.
This saves you the need to de-activate some structural elements in the ray-tracing computationdomain
and a polygon for cropping (you had to choose between these 2 options before PR #606)To load a simple version of AUG and create a camera:
In [1]: import tofu as tf
In [2]: conf = tf.load_config('AUG-V0')
In [3]: cam0 = tf.geom.utils.create_CamLOS1D(pinhole=[1.48, 0, -1.15], orientation=[0,0,0], sensor_nb=10, sensor_size=0.01, focal=0.02, config=conf, Name='test', Diag='test', Exp='AUG')
In [4]: cam2 = tf.geom.utils.create_CamLOS1D(pinhole=[1.46, 0, -1.10], orientation=[0,0,0], sensor_nb=10, sensor_size=0.01, focal=0.02, config=conf, Name='test', Diag='test', Exp='AUG')
In [5]: cam = cam0 + cam1 + cam2
In [6]: cam.plot()
Then create a mesh with the relevant limits and desired bsplines degree, then compte the geometry matrix using the concatenated camera and plot it, choosing a basis function and channel indew such that you can better understand the plots (and add the AUG geometry on top):
In [7]: mesh = tf.data.Mesh2D()
In [8]: mesh.add_mesh(domain=[[1.45, 1.65], [-1.22, -1.05]], crop_poly=conf, res=0.008, deg=0, key='mesh1')
In [9]: mesh.add_geometry_matrix(cam=cam, res=0.001)
/Home/DV226270/ToFu_All/tofu_git/tofu/tofu/data/_mesh_comp.py:193: UserWarning: ind is not 2d anymore!
warnings.warn("ind is not 2d anymore!")
Geom. matrix, chan 30 / 30
In [10]: dax = mesh.plot_geometry_matrix(indbf=50, indchan=15, cam=cam)
/Home/DV226270/ToFu_All/tofu_git/tofu/tofu/data/_mesh_comp.py:193: UserWarning: ind is not 2d anymore!
warnings.warn("ind is not 2d anymore!")
In [11]: conf.plot(proj='cross', lax=dax['crosstot']['ax'])
Out[11]: <AxesSubplot:xlabel='Z (m)', ylabel='R (m)'>
Then define a custom emissivity function, use it to compute synthetic signal:
In [12]: def emiss(pts, t=None):
...: R = np.hypot(pts[0, :], pts[1, :])
...: Z = pts[2, :]
...: e0 = np.exp(-(R-1.575)**2/0.1**2 - (Z+1.15)**2/0.01**2)
...: if t is not None:
...: e0 = e0[None, :] * (1. + 0.1*np.cos(t[:, None]))
...: return e0
In [13]: time = np.linspace(0, 10, 11)
In [14]: sig, unit = cam.calc_signal(emiss, res=0.001, t=time)
Now you can compute an inversion.
An inversion requires several intermediates (like the regularity operator...). As a preliminary to computing the inversion, we need to store the time vector we used and the synthetic data, it takes 2 extra lines, using a tofu-specific syntax:
In [15]: mesh.add_ref(key='time', data=time, group='time')
In [16]: mesh.add_data(key='synthetic data', data=sig.data, ref=('time', 'chan0'))
You can then check that mesh indeed contains the time vector and data array, which means we will now be able to access them by key:
You can now add an inversion, using this data, to the mesh instance:
In [16]: mesh.add_inversion(key_data='synthetic data', sigma=None, operator='D1N2', verb=1)
time step 1 / 11 chi2n = 5.417e-01 niter = 7
time step 2 / 11 chi2n = 5.031e-01 niter = 3
time step 3 / 11 chi2n = 4.270e-01 niter = 4
time step 4 / 11 chi2n = 3.842e-01 niter = 4
time step 5 / 11 chi2n = 4.090e-01 niter = 4
time step 6 / 11 chi2n = 4.821e-01 niter = 4
time step 7 / 11 chi2n = 5.384e-01 niter = 3
time step 8 / 11 chi2n = 5.209e-01 niter = 3
time step 9 / 11 chi2n = 4.480e-01 niter = 4
time step 10 / 11 chi2n = 3.899e-01 niter = 4
time step 11 / 11 chi2n = 3.952e-01 niter = 3
0.17695121094584465 s
Post-formatting results...
And plot the result (select the time step to plot using indt=...
):
In [79]: mesh.plot_inversion(indt=0)
On the right hand side of the plot:
D1N2
= minimize the squared norm of the gradient this table for a larger choice)conv_crit=1.e-4
, but you can change it)They are other parameters you can tweak, but these are probably the most important points.
@Didou09 I've made some progress using some synthetic data from SOLPS simulations, but the result seems wrong. In the figure below, I'm plotting the reconstruction on the left and the original signal on the right. I've overplotted the 2D simulation grid on top of the tomography result just for ease of comparison.
I've used bsplines of order 0 to start with. When trying to use order 1, the inversion gave me an assertion error that I couldn't decipher...
I set up the problem as time-independent and checked that the lines of sights as given to tofu are correctly representing the way I created synthetic line-integrated data.
Interpreting the figure that comes up when doing
dax = mesh.plot_geometry_matrix(indbf=50, indchan=15, cam=cam)
doesn't seem trivial. When running this command (with this choice of indbf
and indchan
), I get the following:
What exactly am I looking at here? Do you see anything wrong? Thanks for your help!
Hi @fsciortino , can you also include a plot of all your lines of sight ? (just do cam.plot()
and copy-paste the figure, si I get an idea of the global geometry)
Also, can you copy-paste the whole figure produced by plot_inversion()
?
plot_geometry_matrix()
shows that your grid probably too thin with respect to your geometrical coverage (i.e.: you have a few pixels which are blank in the middle of the mesh.
This is bad for the resolution.
Here is the result of cam.plot()
, zoomed into the lower divertor:
(FYI, we don't actually have all these views; this is just an idealized test)
... and the result of mesh.plot_inversion
:
Thanks, so apparently the inversion gives measurements that are a pretty good fit to the synthetic data (apart from a few isolated channels). If course you can guess that it is not satisfactory, this is an illustration of the infinity of solutions to inverse problems.
My guess to improve the results:
Also, which operator did you choose ?
OK, changing the mesh resolution to 5cm now gives this (with bsplines order of 1, which now worked fine -- I may have done something silly previously):
Does the order of magnitude of the data that I give to mesh.add_data
matter for convergence?
I'm using the D1N2 operator.
How did you compute the synthetic signal that you use as input for inversion ?
Just to clarify: do you compute the synthetic signal using the SOLPS simulation of the whole cross-section and then try to run an inversion on a small fraction only of that cross-section ? If this is what you are trying to do, it cannot work fine, because the solution you're looking for cannot account for the whole signal (i.e.: for the part of the signal that comes from other parts of the cross-section)
Essentially, yes. I've been computing the synthetic signals from SOLPS on the whole cross-section. I'm trying to reconstruct the lower divertor signal under the assumption that the signal measured in the upper SOL is negligible... but I will try again dropping this assumption and only using LOS that only go through the lower divertor. I'll get back to you with those results!
Yes, try that, it should give better results
Be conservative for a first test: only keep the LOS that do not get anything from outside of the region you're meshing. Then, if the result is satisfactory, You'll try to gradually add carefully chosen LOS
With this set of views
I'm now getting this
I'll check again my line-integration routine for synthetic data -- I wouldn't expect it to be a problem, but since I don't seem to get good results, it seems worth another check.
On the second figure, there are 3 LOS that show a clear error around channel indices ~17
, ~55
, ~101
.
explore the geometry of these LOS to understand why: are they edge channels ?
More generally, explore the channels with the largest errors to understand how to better match your grid and basis functions with your geometry, and to identify issues.
We can arrange a discussion on Monday if you want.
Hi @fsciortino, hopefully this brief tutorial will help you get started fast:
In tofu, a tokamak geometry is called a Configuration (
Config
class), to account for the fact that a tokamak with the same name can exist in various configurations (with a limiter, then with a divertor, then with a new antenna...).To create arbitrary cameras on AUG, start by loading the AUG geometry using the
tf.load_config()
routine.You can get a list of the available tokamak geometries and their different versions by typing using the routine without giving arguments, which will raise an error printing the list of choices to pick from:
Pick a low-version (simpler) if you want faster computations, a high-version if you want details. The routine returns a Config instance, just type it and press enter to get a list of its content (in terms of structures, nb of points in each (R, Z) polygon, number of toroidal occurences (0 => axisymmetric)...
You can add random colors to cheer it up:
You then easily create a
Mesh2D
instance based on this geometry, first create the instance (a generic data holder class that can handle several meshes), then add a rectangular mesh using the configuration as a cropping limit, with resolution 10 cm, and also add bsplines of degree 1 to it:Again, type the variable and pressing enter will tell what's inside, and there is a plotting method (that will return a dict of axes, this is currently being refactored for harmonization)
You can also plot any individual 2D bspline providing its index in the grid:
Now you can add two 1d pinhole cameras as two set of lines of sight using the geometry utility:
Feel free to plot in astatic or interactive figure each camera, or to concatenate them into a single total camera: The
plot_touch()
method provides a left-clickable interactive figure to let you select and LOS and get it highlighted in the cross-section and horizontal plot. By default, the plot shows the length of the LOS, and the color of each LOS is inherited from the structural element on which it ends. You can also plot the incidence angle of the LOS on sais structural element if you want, usingquant='angles'
You can then decide to compute the geometry matrix of the mesh you created using the total camera you created too (add the geometry matrix to the generic data handler mesh), with a spatial resolution of mm for the integrals. Then plot to explore the content of the geometry matrix