tellefs opened this issue 3 years ago
To add to this, one should be able to use a more informed prior than the DE results if available.
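For instance, since the MultiNest prior is expressed as a transformation of unit-cube samples, an informed prior can be encoded by replacing that transformation. A hypothetical sketch (not OMpy's actual prior; the parameter indices, central value, and width are made up):

```python
# Hypothetical "informed" prior transformation in the (Py)MultiNest style:
# alpha gets a Gaussian prior centered on an externally known estimate.
import numpy as np
from scipy.stats import norm

alpha_estimate, alpha_sigma = 1.75, 0.3   # assumed values from prior knowledge

def prior_transform(cube):
    """Map a sample from the unit hypercube to physical parameter space."""
    params = np.asarray(cube, dtype=float).copy()
    params[0] = 10 ** (4 * cube[0] - 2)                         # A: log-uniform in [1e-2, 1e2]
    params[1] = norm.ppf(cube[1], alpha_estimate, alpha_sigma)  # alpha: Gaussian prior
    return params
```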
Thanks for the comment. I started to explain to Tellef how I would approach it and asked him to raise/document the issue on GitHub for easier access.
The current layout of the Normalizer classes makes it a bit more complicated than necessary, as one has to redefine the whole `optimize` method instead of being able to change just the prior. It's easy enough with copy and paste, but could be more elegant. [I knew this when coding, but was too lazy back then to change it :/ ]
Changing inner functions is rather obscure, so either one has to copy/paste the whole `optimize` method to a child class and (just) change the prior there, or we refactor the class such that the prior is more easily available.
@vetlewi: Do you have a child class where you inherited from one of the Normalizers? If I make changes, I don't want to break existing code of yours. [It's anyhow questionable whether I manage this before my defense in March.]
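Roughly, the current layout looks like this (a simplified, hypothetical illustration, not the actual OMpy code): the prior transformation is nested inside `optimize`, so a child class that only wants a different prior still has to duplicate the whole method:

```python
# Simplified illustration of the problem (not the real OMpy implementation).
class NormalizerSimultanLike:
    def optimize(self):
        def prior(cube):
            # unit-cube -> parameter transformation, nested inside optimize()
            A = 10 ** (4 * cube[0] - 2)   # log-uniform in [1e-2, 1e2]
            alpha = 10 * cube[1]          # uniform in [0, 10]
            return [A, alpha]
        # ... set up the likelihood and run the sampler with `prior` here ...
        return prior([0.5, 0.5])          # placeholder instead of a real sampler run


class ChildNormalizer(NormalizerSimultanLike):
    def optimize(self):
        def prior(cube):
            A = 10 ** (4 * cube[0] - 2)
            alpha = 1.0 + cube[1]         # the only line that actually changes
            return [A, alpha]
        # ... yet everything else has to be copy/pasted as well ...
        return prior([0.5, 0.5])
```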
Btw, here is a bit more of the problem. MultiNest finds several modes, of which the most likely mode (very small alpha) is "bad". It is so obviously "bad" that I proposed to Tellef to just use a harder constraint on the prior than my default.
Edit:
- This easily explains why a simple analysis of the median + 1 sigma yields unreasonable results.
- Retrieved the plot with something like `python /path/to/PyMultinest/multinest_marginals.py path/to/results/multinest/sim_norm_0_`

Posterior distribution of the normalization parameters:
My code re-implements the normalizer (mostly copy-paste :p). Won't be breaking any of my code :)
Ok, great. I'm looking forward to seeing your results, you've done a lot of good additions. Please consider writing them up as a small follow-up paper (I think they call it a "new version announcement"?) :+1: ! Tell me if I can help you with that...
I'll see about the priorities once I'm done. Otherwise, of course @tellefs is welcome to write up a PR with the refactored code.
At the moment I don't know enough about how MultiNest functions to make something satisfying. Following a suggestion from Fabio, I am currently trying to create a "ConstrainedNormalizerSimultan" class, where I will import the "NormalizerSimultan" class but change "optimize" to be adjusted for my case.
Not sure if this is something I should create a PR for?
You're right, that's not something that would/should be pulled back into the main repo here, but it could exist on your own fork. If you wanted to, you could propose a change to the code where the `prior` function would be a method of its own, not nested into `optimize`. In that case, users would only need to change the `prior` method, not the whole `optimize` method (cleaner code).
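A sketch of what that could look like (again simplified and hypothetical, not a drop-in patch for OMpy): with `prior` promoted to a method, `optimize` stays in the parent class and only the prior is overridden:

```python
# Hypothetical refactored layout: the prior is an ordinary method used by optimize().
class NormalizerSimultanLike:
    def prior(self, cube):
        """Default unit-cube -> parameter transformation."""
        A = 10 ** (4 * cube[0] - 2)
        alpha = 10 * cube[1]
        return [A, alpha]

    def optimize(self):
        # ... run the sampler with self.prior as the prior transformation ...
        return self.prior([0.5, 0.5])     # placeholder instead of a real sampler run


class ConstrainedNormalizerSimultan(NormalizerSimultanLike):
    def prior(self, cube):
        """Only the prior changes; optimize() is inherited unchanged."""
        A = 10 ** (4 * cube[0] - 2)
        alpha = 0.5 + 2.5 * cube[1]       # e.g. a tighter uniform range [0.5, 3.0]
        return [A, alpha]
```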
Here's a graph similar to the one above, but where I would conclude that the sampling worked and only found one mode (note that this was for another dataset, so the values will be different):
For future reference, here are the Ensemble Normalized plots I got. As one can see, the orange median looks "good", but the errors are huge due to the "wrong" alpha guesses.
As mentioned above, I worked around it by making a child class of the parent class "NormalizerSimultan" and changing the "prior" function inside the "optimize" function. I made the alpha "cube" lie between 1 and 2. These values were picked by looking at the first plot Fabio posted, where the alpha distribution looks Gaussian-like at around 1.75.
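Concretely, that restriction amounts to mapping the unit-cube coordinate for alpha uniformly onto [1, 2]. A minimal sketch of the transformation (just the idea, not OMpy's actual parameter ordering):

```python
# Uniform prior on [1, 2] for alpha: how the corresponding unit-cube coordinate
# is transformed inside the prior function (sketch only).
def alpha_prior(u, alpha_min=1.0, alpha_max=2.0):
    return alpha_min + (alpha_max - alpha_min) * u
```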
@tellefs & @fzeiser Have you considered re-normalizing the NLD/gSF with some negative alpha value before performing the normalization procedure? This would ensure that the 'physical' solution has a larger alpha value, which might be more in line with the current prior; maybe that makes the "false" mode go away. One could also try a positive alpha, which might push the mode down to a negative value outside the prior range. Example:
alpha_shift = -0.3  # or +0.3, see above; if nld/gsf units are 'keV', use ±0.3/1e3 instead
for nld, gsf in zip(extractor.nld, extractor.gsf):
    nld.transform(alpha=alpha_shift, inplace=True)
    gsf.transform(alpha=alpha_shift, inplace=True)
I had pre-normalized the NLD and gSF with a positive alpha=1.5, which did not solve the problem. Started a run with alpha=-1.5 instead. This is the Simultan normalization I get with alpha=-1.5; it seems like the problem is still there.
This is a bit funky, because I would have assumed that an alpha=-1.0 or -1.5 transformation before running the normalizer should cut off the lower mode. Are you sure you didn't have some "hidden states" in the notebook? What does the marginals plot look like?
-- In the end, I think it is easier to understand if you just change the prior, but now that we observe the problem, it'd be interesting to see...
Marginals plot included above. Also, I restarted the kernel before running, so there should be no hidden states in the notebook?
- what is the DE best fit?
- what is your prior there?
For these results I did not change the prior.
DE results printed below:
2021-02-05 11:58:08,314 - ompy.normalizer_nld - INFO - DE results:
┌───────────────────┬───────────────────┬─────────────────────┬─────────────────────┐
│ A                 │ α [MeV⁻¹]         │ T [MeV]             │ Eshift [MeV]        │
╞═══════════════════╪═══════════════════╪═════════════════════╪═════════════════════╡
│ 5.822245965248761 │ 2.145463321959971 │ 0.39931582019337847 │ -0.7148065580675619 │
└───────────────────┴───────────────────┴─────────────────────┴─────────────────────┘
2021-02-05 11:58:08,443 - ompy.normalizer_gsf - INFO - Normalizing #0
2021-02-05 11:58:08,453 - ompy.normalizer_simultan - INFO - DE results/initial guess:
┌───────────────────┬───────────────────┬─────────────────────┬─────────────────────┬───────────────────┐
│ A                 │ α [MeV⁻¹]         │ T [MeV]             │ Eshift [MeV]        │ B                 │
╞═══════════════════╪═══════════════════╪═════════════════════╪═════════════════════╪═══════════════════╡
│ 5.822245965248761 │ 2.145463321959971 │ 0.39931582019337847 │ -0.7148065580675619 │ 8.991709742171407 │
└───────────────────┴───────────────────┴─────────────────────┴─────────────────────┴───────────────────┘
With the DE results, one can constrain the normalization parameters by e.g. `om.NormalizerNLD.bounds['T'] = ...`
It would be preferable to be able to do this with the MultiNest normalization parameters too.
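For reference, constraining the DE stage this way could look roughly as follows (a sketch only: `normalizer_nld` is assumed to be an already configured NormalizerNLD instance, and the numeric range is a placeholder chosen near the DE result above):

```python
# Sketch: tighten the differential-evolution search space on an existing
# NormalizerNLD instance (set up elsewhere); other parameters analogously.
normalizer_nld.bounds['T'] = (0.3, 0.5)   # MeV, placeholder range around the DE result
```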