Closed entaylor closed 1 year ago
I'm not entirely sure why the likelihood decreases, but I can offer a few clues:
Okay, cool; that makes total sense, thank you! So it is worth keeping track of niter, blend.log_likelihood.max(), and blend.log_likelihood[-1] as diagnostics for cases like this. Good to know!
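For cases like this, one way to track those diagnostics is a small helper that summarises a per-iteration log-likelihood trace. This is just a sketch: it assumes `blend.log_likelihood` is an array-like of per-iteration values (as used above), and the helper name is my own, not part of scarlet.

```python
import numpy as np

def summarize_log_likelihood(log_likelihood):
    """Report the peak and final log-likelihood of an optimisation run.

    Returns (iteration of the peak, peak value, final value), which makes
    it easy to spot runs where logL peaks early and then declines.
    """
    logl = np.asarray(log_likelihood, dtype=float)
    peak_iter = int(np.argmax(logl))
    return peak_iter, float(logl[peak_iter]), float(logl[-1])

# Synthetic trace that peaks at iteration 2 and then declines:
trace = [-10.0, -5.0, -2.0, -3.0, -4.0]
peak_iter, peak, final = summarize_log_likelihood(trace)
print(peak_iter, peak, final)  # → 2 -2.0 -4.0
```

After a `blend.fit(...)` call, passing `blend.log_likelihood` to a helper like this flags runs whose final value sits well below the peak.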
I had the idea of running the deblending optimisation for a long time as a convergence test; i.e.:
blend.fit(500, e_rel=1e-5)
What has me baffled is that the log-likelihood peaks at a certain point and then declines steadily after that; see the plot below:
The good news is that the peak is essentially where the iteration cuts off with e_rel=1e-4, so I think (hope?) all my results are good.
It's a little surprising to me, but nonetheless reassuring, that the models at the end of the long run look fine by eye, despite the apparently terrible logL.
There are some oddities with this particular blend (which is why I have been testing on it), including missing data in several images, so maybe that is a contributing factor.
I'm almost certain this is an issue with how I am using or understanding scarlet, but I can't fathom what it might be. Even so, I would have thought the optimiser should not allow logL to decrease like this. Is it possibly an error in the logL reporting? If so, why wouldn't it trigger the e_rel condition? Or does the optimiser consider something like a prior over and above the logL?
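On the e_rel question: if the stopping rule is a relative-change test on the objective (an assumption on my part; I have not checked scarlet's source), then it is direction-agnostic. It fires when successive values are close together, whether the objective is improving or worsening, so a steady decline with per-step changes larger than e_rel would just keep running. A minimal sketch of such a test:

```python
def e_rel_converged(values, e_rel=1e-4):
    """Hypothetical relative-change stopping test (not scarlet's actual code):
    stop when the last step changed the objective by less than e_rel,
    relative to its current magnitude. Note it never checks the sign of
    the change, so a decreasing log-likelihood can still keep iterating.
    """
    if len(values) < 2:
        return False
    return abs(values[-1] - values[-2]) <= e_rel * abs(values[-1])

# A slow but steady decline: each step changes logL by ~1% (>> e_rel),
# so the test never fires even though logL only gets worse.
declining = [-100.0, -101.0, -102.0]
print(e_rel_converged(declining, e_rel=1e-4))  # → False

# Near-flat values do trigger it, regardless of direction:
flat = [-100.0, -100.001]
print(e_rel_converged(flat, e_rel=1e-4))  # → True
```

If this is roughly how the criterion works, it would explain why a slowly decreasing logL neither stops the run early nor counts as an error.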
Apologies if this is something really simple, but I would be very grateful for an explanation if so!
Just in case it gives some insight, here is one of the images I'm fitting, and the residuals for that image: