Open alex-koe opened 3 years ago
Sorry for bothering you again. With the reduced particles mentioned in ComputationalRadiationPhysics/particle_reduction#21 I now want to put them into a gdf file. I used
python openPMD-converter-GDF/openPMD_to_gdf.py -openPMD_input simData_run006_filtered_for_GPT_60000.h5 -gdf out2.gdf
. It produces a file. However, is there a chance that there is the same issue with the particle momentum as in Issue 21?
I'm not sure, but I will check whether the momentum is correct.
I think I reproduced the bug. It's not the same one as in the reduction. I will fix it today. But to be sure: does GPT store "macroweighted" momentum (multiplied by the weights) or not? According to the GPT documentation, it stores the momentum of a single real particle.
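To illustrate the distinction being asked about, here is a minimal sketch with made-up numbers: "macroweighted" momentum is the momentum summed over all real particles represented by a macro-particle, so recovering the single-particle momentum GPT expects means dividing by the weight.

```python
import numpy as np

# Hypothetical values for illustration: macroweighted momentum (kg*m/s)
# and the number of real particles per macro-particle.
p_macro = np.array([1.2e-17, 3.4e-17, 5.6e-17])  # macroweighted momentum
weights = np.full(3, 1.5e5)                      # real particles per macro-particle

# Per-real-particle momentum, as GPT expects it
p_single = p_macro / weights
```

If the converter forgot this division (or applied it twice), the momenta in the GDF file would be off by a factor of the weight.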
@KseniaBastrakova I am not sure if I understood the way GPT reads the data. Typically, the input data used by GPT looks like this (converted to ASCII):
Bx By Bz x y z m q nmacro rmacro
1.486e-17 6.334e-15 8.284e-17 6.669e-05 1.278e-03 6.659e-05 9.109e-31 -1.602e-19 1.534e+05 1.000e+00
-1.927e-16 7.137e-15 -4.044e-17 6.683e-05 1.278e-03 6.658e-05 9.109e-31 -1.602e-19 1.534e+05 1.000e+00
6.449e-17 6.493e-15 1.076e-16 6.744e-05 1.278e-03 6.653e-05 9.109e-31 -1.602e-19 1.534e+05 1.000e+00
where Bx stands for the normalized velocity component β_x = v_x/c, By for β_y, and so on. That all betas are on the order of 1e-15 and thus close to zero surprises me for relativistic electrons. As one can see in the right columns, the particles have the mass and charge of an electron but each represents about 1e5 real particles. So I would assume you are right that GPT stores single-particle quantities, with nmacro carrying the weighting.
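A quick plausibility check for the surprisingly small betas, as a sketch with hypothetical numbers: converting normalized momentum u = p/(mc) to β and γ shows that relativistic electrons must have |β| close to 1, not ~1e-15.

```python
import numpy as np

# Hypothetical normalized momentum components ux, uy, uz for a
# relativistic electron (u = p / (m*c), so |u| >> 1 when relativistic)
u = np.array([0.1, 50.0, 0.2])

gamma = np.sqrt(1.0 + np.sum(u**2))  # Lorentz factor
beta = u / gamma                     # normalized velocity, beta = v/c

# Betas on the order of 1e-15, as in the GPT input above, would imply
# momenta far too small for relativistic particles.
print(gamma, np.linalg.norm(beta))
```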
After a discussion with @BeyondEspresso I wrote a small script as suggested. The script reads reduced h5 files, e.g. files that contain about 5000 macro-particles. It then outputs an ASCII file compatible with GPT. The file contains all three positions, beta and gamma, as well as the macro-particle weighting (nmacro).
###### init ####
import sys
import numpy as np
import h5py
import scipy.constants as const
import matplotlib.pyplot as plt
##### local imports
#### simple helpers for getting fields and particles from h5 files
from get_fields_and_particles import *
from settings import *  # expected to provide `species`, used below

timestep = int(sys.argv[1])  # read the time step from the first command-line argument
filename = "reduced_data{}ts.h5".format(timestep)
#species = 'en_all'  # uncomment to override the species imported from settings
simulation_box_size = [1.36089598e-04, 8.93087984e-05, 1.36089598e-04]
outfilename = "reduced-{}ts.txt".format(timestep)
## some formatting plot options
fs = 15
plt.rcParams.update({'font.size': fs})
plt.rcParams['xtick.direction'] = 'in'
plt.rcParams['ytick.direction'] = 'in'
plt.rcParams['axes.facecolor'] = 'white'
plt.rcParams['figure.facecolor'] = 'white'
f = h5py.File(filename, "r")
ux = load_momentum("x", timestep, f, species=species)
uy = load_momentum("y", timestep, f, species=species)
uz = load_momentum("z", timestep, f, species=species)
x = load_position("x", timestep, f, species=species) - simulation_box_size[0]*0.5e6
y = load_position("y", timestep, f, species=species)
z = load_position("z", timestep, f, species=species) - simulation_box_size[2]*0.5e6
w = load_weighting(timestep, f, species=species)
## compile data for GPT
G = np.sqrt(1.0 + ux**2.0 + uy**2.0 + uz**2.0)
Bx = ux/G
By = uy/G
Bz = uz/G
y -= np.average(y) # set beam center to y=0
### the data array is ordered for GPT, i.e. the beam moves along Y[PIC] == Z[GPT];
### for this reason, y and z (and By, Bz) are swapped
data_array = np.array([1e-6*x,1e-6*z,1e-6*y, Bx, Bz, By, G, w])
header = "x y z Bx By Bz G nmacro"
labels = "x", "y", "z", "Bx", "By", "Bz", "G", "nmacro"
### plotting data for overview
fig = plt.figure(constrained_layout=False, figsize=(10,10))
spec = fig.add_gridspec(ncols=3, nrows=3)
spec.update(wspace=0.2, hspace=0.20) # set the spacing between axes.
for i, ii in enumerate(data_array):
    fig.add_subplot(spec[i//3, i%3])
    plt.hist(ii, bins=50, alpha=0.8)
    plt.ylabel("#")
    plt.text(np.mean(ii), 0, "RMS {:.1e}".format(np.std(ii)), horizontalalignment='center')
    plt.title(labels[i])
#plt.show()
plt.savefig(outfilename+"-overview.png")
plt.figure()  # start a new figure for the longitudinal phase space
plt.hist2d(y, G*0.511, bins=100)
plt.xlabel("y (PIC) [µm]")
plt.ylabel("Energy [MeV]")
plt.savefig(outfilename+"-longitudinal-ps.png")
### Saving data as TXT
np.savetxt(fname=outfilename, X=data_array.T, header=header, comments='')
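As a quick round-trip check of the np.savetxt layout used above, a tiny made-up data_array (8 quantities per particle, matching the header) can be written and read back to confirm the column layout:

```python
import numpy as np

# Minimal round-trip check with dummy data: 8 quantities, 3 macro-particles,
# matching the "x y z Bx By Bz G nmacro" header used in the script above.
header = "x y z Bx By Bz G nmacro"
data_array = np.arange(24.0).reshape(8, 3)
np.savetxt("check.txt", data_array.T, header=header, comments='')

back = np.loadtxt("check.txt", skiprows=1)
print(back.shape)  # one row per particle, one column per quantity
```

Note that data_array is transposed on saving, so each row of the text file is one particle; this is what asci2gdf (and GPT) expect.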
The txt file can be converted to a GDF input file for GPT with the following command:
asci2gdf -o pic-distribution-${Q}nC.gdf reduced-${TIMESTEP}.txt nmacro $(echo "scale=4;${Q} / ${Q_TXT_FILE}" | bc)
The bc command computes the ratio of the desired charge Q to the charge Q_TXT_FILE contained in the txt file, so the nmacro weights are rescaled accordingly for charge scans.
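The scaling factor itself is simply a ratio; a sketch with hypothetical charges mirroring the shell variables Q and Q_TXT_FILE above:

```python
# Hypothetical charges: the factor applied to the nmacro column is the
# ratio of the desired bunch charge to the charge represented by the file.
Q = 0.5            # nC, desired bunch charge
Q_txt_file = 0.25  # nC, charge contained in the txt file

scale = Q / Q_txt_file
print(scale)  # → 2.0
```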