The Temperature Response needs to have the same dimensions as the data.
Are you able to provide the full code?
Thanks @PaulJWright, I've reshaped the arrays to be:

```
dn_in2d:    (501, 501, 6)
edn_in2d:   (501, 501, 6)
trmatrix:   (101, 6)
tresp_logt: (101,)
temps:      (42,)
```
and now the code seems to work, but it's very slow and will take a long time. Is that normal?
```python
# Now do the DEM maps calculation
dem2d, edem2d, elogt2d, chisq2d, dn_reg2d = dn2dem_pos(dn_in2d, edn_in2d, trmatrix, tresp_logt, temps)
```

```
  0%|          | 3.00/2.51k [02:04<21:56:52, 31.5s/x10^2 DEM]
```
I think that's probably a normal time expectation, but we usually generate the DEM over 18 or so temperature bins evenly spaced in logT.
Do you mean this line should be like this:

```python
temps = np.logspace(5.7, 7.6, num=18)
```

Would that make the code run faster? I haven't done DEM analysis before; I'm still learning about it :)
Maybe that is fine. You need to be careful, as ~7.6 is pretty unconstrained by the AIA response functions. I am not sure what Iain used in his paper, but start there unless you have high-temperature constraints. I don't think this would make it faster, though.

I would personally not use the data at such a high resolution, but you will need to be careful about downsampling because of the units of the data and response functions, and how the errors are combined.
Regarding your initial query: the temperature response matrix used in calculating the DEMs is constructed internally. To do this it needs the raw temperature response matrices for each filter and a temperature "axis" for those data. In your case you have a temperature response that spans a logT range of 5.0 in 0.05 increments (which is your shape-(101,) tresp_logt) in 6 channels, so you provide a shape-(101, 6) trmatrix. You then provided 42 temperature bins for the output temperature axis, so demregpy resamples the original matrix onto your 42 temperature bins, producing a shape-(42, 6) matrix which is used to do the inversion; the shapes are sketched below.
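As a minimal sketch of those shapes (only the spacing and sizes come from this thread; the logT limits of 4.0-9.0 are an assumption for illustration):

```python
import numpy as np

# Raw response: 101 logT samples at 0.05 spacing (spanning 5.0 in logT)
# for 6 AIA channels. The 4.0-9.0 limits are illustrative assumptions.
tresp_logt = np.linspace(4.0, 9.0, 101)    # shape (101,)
trmatrix = np.zeros((tresp_logt.size, 6))  # shape (101, 6)

# Output temperature axis: 42 values in kelvin.
temps = np.logspace(5.7, 7.6, num=42)      # shape (42,)

# demregpy resamples trmatrix onto this output grid internally,
# producing the (42, 6) matrix used for the inversion.
```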
3000 DEM/s is pretty fast, but to speed it up:
Fewer temperature bins is faster. On the resolution of that trmatrix: as a general rule I use 0.1 spacing in logT space, and for AIA the effective temperature range is only about logT 5.6 to 7.2, so 7.6 is not covered. That range can be covered by ~16 bins at 0.1 spacing or 32 at 0.05. I think performance is about linear in the number of bins, as all the matrices grow proportionally.
I do warn you, though, that providing temperature space that is not constrained by the data does allow channels with a high-temperature response (e.g. AIA 94) to force the DEM at these high temperatures to creep up and produce strangely shaped DEMs. The DN counts in reality come from cooler material picked up by the same channel, which is constrained by the overlapping temperature responses in the middle of your temperature range; a grid restricted to the constrained range is sketched below.
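For instance, a sketch of such a grid, using the ~16 bins at 0.1 logT spacing mentioned above:

```python
import numpy as np

# 17 edges from logT = 5.6 to 7.2 at 0.1 spacing -> 16 temperature bins,
# staying inside the range that the AIA responses actually constrain.
temps = 10 ** np.linspace(5.6, 7.2, 17)  # kelvin
```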
Since the operation is independent for each pixel, the speed is directly proportional to the number of pixels: a full 16-megapixel AIA map is 16 times slower than a 1024x1024 downsampled or superpixeled map.
Likewise, if you have a region of interest, cutting down to a full-resolution 400x400 pixel submap is extremely fast: at about 3000 DEM/s (which is what you are getting) it will only take half a minute, whereas 15 minutes on an average computer is about right for a 4k x 4k AIA map. A rough sketch of both approaches follows.
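Both options, sketched with sunpy (the filename, pixel ranges, and 4x4 superpixel size are assumptions; note that Map.superpixel sums pixels by default, so per-pixel DN/s data needs func=np.mean or a renormalization, per the units caveat above):

```python
import numpy as np
import astropy.units as u
import sunpy.map

m = sunpy.map.Map("aia_171.fits")  # assumed filename

# Option 1: 4x4 downsample, e.g. 4096x4096 -> 1024x1024. superpixel()
# sums by default; average instead to keep per-pixel DN/s values.
m_small = m.superpixel([4, 4] * u.pix, func=np.mean)

# Option 2: a full-resolution 400x400 pixel submap of a region of interest.
m_sub = m.submap([1000, 1000] * u.pix, top_right=[1399, 1399] * u.pix)
```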
Great, thank you very much @alasdairwilson and @PaulJWright for these helpful tips! I will run some tests and see how it goes. One final thing I'd like to ask about: to use the DEM method on AIA data, one should do the following steps first before carrying on:
```python
from aiapy.calibrate import register, update_pointing
import aiapy.psf

m_updated = update_pointing(aia_map)
m_registered = register(m_updated)
m_normalized = m_registered / m_registered.exposure_time
psf = aiapy.psf.psf(m_normalized.wavelength)
aia_map_deconvolved = aiapy.psf.deconvolve(m_normalized, psf=psf)
```
Then proceed with the DEM analysis. Is that the right sequence of steps? Would it be okay if I dropped the last step of deconvolution?
Update pointing and register are a waste of time in my opinion, but someone else might have differing views on that.
I think it is important to correct for degradation, as the different channels are degrading at different rates. So I do:

```python
from aiapy.calibrate import correct_degradation

maps = [correct_degradation(m) / m.exposure_time for m in maps]
```

or something similar.
Hello, I'm trying to use real AIA data in the notebook `example_demregpy_aiapxl.ipynb` to replicate the DEM maps of Hannah & Kontar 2013 A&A, but I'm stuck. Here's what I did:
Get the data in the correct format for the DEM code, i.e. an array of data and uncertainty in DN/px/s:

```python
dn_in = []
for i in range(len(cor_data)):
    dn_in.append(cor_data[i] / durs[i])
```
I get this error:
```
File /net/maedoc.ap.dias.ie/maedoc/home_cr/mnedal/repos/demreg/python/dn2dem_pos.py:173, in dn2dem_pos(dn_in, edn_in, tresp, tresp_logt, temps, reg_tweak, max_iter, gloci, rgt_fact, dem_norm0, nmu, warn, emd_int, emd_ret, l_emd, non_pos, rscl)
    169 #check the tresp has no elements <0
    170 #replace any it finds with the mimimum tresp from the same filter
    171 for i in np.arange(0,nf):
    172     #keep good TR data
--> 173     truse[tresp[:,i] > 0]=tresp[tresp[:,i] > 0]
    174     #set bad data to the minimum
    175     truse[tresp[:,i] <= 0,i]=np.min(tresp[tresp[:,i] > 0],axis=0)[i]

ValueError: shape mismatch: value array of shape (101,6) could not be broadcast to indexing result of shape (101,501)
```
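As resolved earlier in the thread, this shape mismatch goes away once the inputs are stacked with the channel axis last (and trmatrix shaped (n_logT, n_channels)). A minimal sketch with hypothetical stand-in data:

```python
import numpy as np

# Hypothetical stand-ins: six (501, 501) images (one per AIA channel),
# their uncertainties, and the matching exposure times.
cor_data = [np.ones((501, 501)) for _ in range(6)]
errors = [np.ones((501, 501)) for _ in range(6)]
durs = [2.9] * 6

# Stack so the channel axis is last, as dn2dem_pos expects.
dn_in2d = np.stack([d / t for d, t in zip(cor_data, durs)], axis=-1)  # (501, 501, 6)
edn_in2d = np.stack([e / t for e, t in zip(errors, durs)], axis=-1)   # (501, 501, 6)
```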