Closed IvanNikolic21 closed 1 week ago
I'm currently running optimization on the new full run that also has constrained prior variation, so that I can look at that immediately. I'm checking how increasing the number of bins redwards changes the results. Note that this doesn't include the FWHM correction or the adjustment of EWs, so the same optimization won't carry over to the other issue.
Scott's rule for the Gaussian KDE bandwidth didn't work:
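For reference, Scott's rule (as implemented in scipy's `gaussian_kde`) picks a single bandwidth factor from the sample size alone; a minimal sketch with synthetic data standing in for the real posterior draws:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
samples = rng.normal(size=500)  # synthetic stand-in for the real samples

# Scott's rule sets the bandwidth factor to n ** (-1 / (d + 4)):
# one global shrink of the data covariance, no per-feature tuning,
# which tends to oversmooth multimodal or skewed distributions.
kde = gaussian_kde(samples, bw_method="scott")
print("Scott factor:", kde.factor)  # equals 500 ** (-1/5) here
```

That single global factor is the likely culprit when Scott's rule underperforms on non-Gaussian distributions.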
For 7 bins, the inference was previously good for all but one iteration, but now it is worse. The next thing I'm trying is a different activation function.
Still trying to make sure the activation function works. However, the value I'm adding needs to be higher than I initially expected.
It seems a smaller KDE bandwidth works much better, with 4 correct guesses out of 5:
I'm worried about why a certain iteration works for one run but not for the other. This raises questions about convergence, so I'm changing the noise to see what happens.
The above turned out to be a complete mess, and probably unnecessary. Instead I should play with the bandwidth and maybe try out different kernel density estimators. One example is the KDEpy package: https://kdepy.readthedocs.io/en/latest/bandwidth.html
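Before reaching for KDEpy, the bandwidth can be swept directly in scipy by passing a callable `bw_method`; the shrink factors below are arbitrary illustrations, not tuned values:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
samples = rng.normal(size=300)
grid = np.linspace(-4, 4, 201)

# Multiplicatively shrink Scott's factor; smaller values give a
# narrower, less smoothed density estimate.
areas = {}
for shrink in (1.0, 0.5, 0.25):
    kde = gaussian_kde(
        samples, bw_method=lambda k: k.scotts_factor() * shrink
    )
    density = kde(grid)
    # Crude Riemann sum: the density should integrate to ~1
    # regardless of bandwidth; only the shape changes.
    areas[shrink] = density.sum() * (grid[1] - grid[0])
    print(f"shrink={shrink}: integral={areas[shrink]:.3f}")
```

KDEpy's ISJ ("improved Sheather-Jones") selector from the linked page is the natural next step when a hand-tuned shrink like this isn't principled enough.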
Couple of notes here:
I'm still worried about convergence. To that end, I've launched a new run with all of the same settings.
I'm still worried about convergence. Two runs that should have been identical show different results, even for the integrated-flux result, where no additional stochasticity is added. I've checked that the Gaussian KDE by itself doesn't contain any stochasticity, so I'm now checking where stochasticity enters within the framework.
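The determinism check on `gaussian_kde` itself can be as simple as building two estimators from identical samples and comparing their evaluations bitwise:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(42)
samples = rng.normal(size=200)
grid = np.linspace(-3, 3, 101)

# Two independently constructed KDEs from the same samples should
# agree exactly: gaussian_kde has no internal source of randomness.
first = gaussian_kde(samples)(grid)
second = gaussian_kde(samples)(grid)
print("identical:", np.array_equal(first, second))
```

If this passes, any run-to-run difference has to come from upstream of the KDE (sampling, seeding, or the forward model), not from the density estimate itself.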
This is the first one:
There are continuing issues with the activation function (I never should have messed with it). This may have impacted the constrained prior run; I'm investigating that now.
I'm also trying out an exponential kernel for the distribution.
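A self-contained sketch of an exponential (Laplace) kernel density estimate in plain NumPy; the bandwidth `h=0.3` is an arbitrary value for illustration:

```python
import numpy as np

def exponential_kde(samples, grid, h):
    """KDE with the exponential (Laplace) kernel K(u) = exp(-|u|) / 2."""
    # Pairwise scaled distances between grid points and samples.
    u = np.abs(grid[:, None] - samples[None, :]) / h
    return np.exp(-u).sum(axis=1) / (2.0 * h * len(samples))

rng = np.random.default_rng(3)
samples = rng.normal(size=400)
grid = np.linspace(-6, 6, 601)
density = exponential_kde(samples, grid, h=0.3)

# Sanity check: nonnegative and integrating to ~1 over a wide grid.
area = density.sum() * (grid[1] - grid[0])
print(f"integral: {area:.3f}")
```

The exponential kernel's sharper peak and heavier tails are a plausible reason it tracks the simulated distributions better than the Gaussian kernel did.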
First test with the exponential kernel went great!
I'm trying out a different cached run now!
Yet another positive result. Exponential kernel works great for the third run as well:
Trying out the last remaining run now.
I'm currently checking how to do cached runs with the updated likelihood on the optimized bins.
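The idea behind re-scoring cached runs, sketched with hypothetical names (the actual pipeline presumably caches forward-model outputs to disk; an in-memory dict stands in here): keep the expensive model outputs fixed and recompute only the KDE likelihood when the binning or kernel changes.

```python
import numpy as np
from scipy.stats import gaussian_kde

_cache = {}  # stand-in for the on-disk cache of forward-model outputs

def load_or_simulate(seed=7, n=500):
    """Return cached model outputs, simulating them only on first request."""
    if seed not in _cache:
        # Hypothetical stand-in for the expensive forward model.
        _cache[seed] = np.random.default_rng(seed).lognormal(size=n)
    return _cache[seed]

def log_likelihood(observed, model_outputs):
    """KDE-based log-likelihood of observed values under a cached run."""
    return float(np.sum(np.log(gaussian_kde(model_outputs)(observed))))

fluxes = load_or_simulate()
obs = np.array([0.8, 1.2, 2.0])  # illustrative observed values
ll = log_likelihood(obs, fluxes)
print(f"log-likelihood: {ll:.3f}")
```

Because the likelihood is cheap relative to the simulation, updated binnings or kernels can be swept over the same cached outputs without rerunning anything expensive.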
Exponential kernel still rocks for new runs:
This is run_1 and now I'll also analyze run_2 in the same way.
I'm satisfied with my likelihood inference and I don't expect any further significant improvements. In case something bad happens, I'll re-open the issue, but for now I'm closing it.
This issue will be used to update on the progress of the likelihood optimization. I'm still not satisfied with the performance of the likelihood calculation, so I'm testing all available free parameters. I don't believe there is a bug, though I'll test these things when I make plots.