Closed Jammy2211 closed 3 years ago
And, you say that using the correct A matrix within the PyLops inversion speeds things up? But this is not the case when using an approx A?
Exactly, using the correct A matrix gives over a 100x speed-up. Absolutely insane lol. As you say, I think it is because the approximation doesn't capture the (fairly significant) off-diagonal terms.
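As a toy illustration of why an exact curvature matrix makes such a difference as a preconditioner, here is a hypothetical NumPy/SciPy sketch (made-up matrix, not our actual operators) comparing conjugate-gradient iteration counts with no preconditioner, a diagonal-only (Jacobi) preconditioner that ignores the off-diagonal terms, and a preconditioner built from the full matrix:

```python
import numpy as np
from scipy.sparse.linalg import cg

rng = np.random.default_rng(0)
n = 200

# Hypothetical stand-in for the curvature matrix A^T A of the normal equations:
# symmetric positive-definite, with significant off-diagonal terms.
A = rng.standard_normal((n, n))
AtA = A.T @ A + n * np.eye(n)
b = rng.standard_normal(n)

iters = {}
def make_counter(key):
    iters[key] = 0
    def cb(xk):
        iters[key] += 1
    return cb

# No preconditioner.
cg(AtA, b, callback=make_counter("none"))
# Jacobi preconditioner: keeps only the diagonal of A^T A.
cg(AtA, b, M=np.diag(1.0 / np.diag(AtA)), callback=make_counter("jacobi"))
# Preconditioner from the full (exact) matrix: CG converges almost immediately.
cg(AtA, b, M=np.linalg.inv(AtA), callback=make_counter("exact"))

print(iters)
```

The exact preconditioner turns the system into (numerically) the identity, so CG needs essentially one iteration, while the diagonal approximation barely helps when the off-diagonal terms carry most of the structure.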
We have an issue with noise I'll quickly ask about. In equation 12 a few posts above, you'll note we have sigma noise terms on both the real and imaginary components. I originally implemented the noise terms as follows:
Wop = pylops.Diagonal(1.0 / obj.as_complex.ravel())

reconstruction = pylops.NormalEquationsInversion(
    Op=Op,
    Regs=None,
    epsNRs=[1.0],
    NRegs=[Rop],
    data=visibilities.as_complex,
    Weight=noise_map.Wop,
    tol=settings.tolerance,
    **dict(M=Mop, maxiter=settings.maxiter, callback=callback),
)
However, we found this gave incorrect solutions (they were completely unphysical; I have no idea what was going on, but they were bad). This surprised me, as the Wop.todense() method gave us the matrix we were expecting:
noise_map = aa.VisibilitiesNoiseMap.manual_1d(
    visibilities=[1.0 + 1.0j, 2.0 + 2.0j]
)
Wop.todense() == np.array([[1.0 - 1.0j, 0.0 + 0.0j], [0.0 + 0.0j, 0.5 - 0.5j]])
A solution which fixes this is not to use complex entries for the Wop operator:
obj.Wop = pylops.Diagonal(1.0 / np.real(obj).ravel(), dtype="float64")

np.allclose(Wop.todense(), np.array([[1.0, 0.0], [0.0, 0.5]]), atol=1.0e-4)
Alternatively, we can achieve equivalent behaviour by using a complex matrix with 0.0's in the imaginary entries, e.g.:

np.allclose(Wop.todense(), np.array([[1.0 + 0.0j, 0.0 + 0.0j], [0.0 + 0.0j, 0.5 + 0.0j]]), atol=1.0e-4)
I'm interpreting this to mean that when we give non-zero weights to the imaginary components of our data, we get solutions whose real-space reconstructions are incorrect (they are not consistent with our simulated data and give extremely high chi-squared values). We have the np.real() on the adjoint method, so the solution returned by PyLops has only real entries.
Am I misunderstanding what it means to use complex entries in the weight operator? Is it that, by setting up the weights as real values, they are applied to both the real and imaginary parts during the fit? The alternative is that the fits only work if the weights on the imaginary entries are zero, which seems hard to reconcile with a lot of the other tests we're seeing.
@Jammy2211, will get back to you soon. In the meantime take a look at this https://github.com/PyLops/pylops/issues/202 (not sure why I was not able to tag you there).
I have made a short notebook to try to prove (to myself) that the two problems are identical, and also to experiment a bit with how we could offer in PyLops (in the linear-operators world) the same possibility to solve a partially complex problem as two sets of equations (real and imaginary parts)...
I guess you guys work together (?)
Yep, @Sketos is a postdoc in our group. Now that we're closing in on a final solution, we're both working on the problem, trying to iron out the final issues :).
And thank you so much for the notebook, we'll check it out today. We've spent the past month trying to piece together where the PyLops implementation differs from the matrix one, so this should hopefully fill in the gaps. And now that we know we're going to need to use the matrix solution to precondition the PyLops solution, it's extra important we get this right!
Any means by which you can adapt PyLops for our problem (especially from an efficiency standpoint) would be amazing; we must have a pretty specific use-case once you start to dig into the details!
Sounds good :)
I agree your problem is quite specific, although I think this is not that uncommon, especially when it comes to tricks for speeding up the solution (e.g. preconditioning), as these all become more problem-specific.
As I said in the other issue, if people (you, in this case) really decide to go for the formulation with real and imaginary equations instead of a single one with complex numbers, PyLops can be adapted to be more efficient and reduce the number of evaluations of the forward/adjoint (this is already fine for the complex formulation as I think you use it now). The notebook has some early attempts at doing this in a way that is both efficient and user-friendly; between me and @cako we will play around a bit and keep you guys posted in the other issue thread :)
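A quick NumPy sketch (toy shapes, not the interferometer operators) of the equivalence being discussed here, between solving one complex system and solving the real and imaginary parts as two stacked real systems:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical small complex operator and a real-valued model vector.
A = rng.standard_normal((6, 3)) + 1j * rng.standard_normal((6, 3))
x_true = rng.standard_normal(3)
b = A @ x_true

# One complex least-squares system...
x_complex, *_ = np.linalg.lstsq(A, b, rcond=None)

# ...versus two real systems stacked: [Re(A); Im(A)] x = [Re(b); Im(b)].
# This doubles the number of rows, but guarantees a real solution and avoids
# complex arithmetic in the forward/adjoint evaluations.
A_stacked = np.vstack([A.real, A.imag])
b_stacked = np.concatenate([b.real, b.imag])
x_stacked, *_ = np.linalg.lstsq(A_stacked, b_stacked, rcond=None)

print(np.allclose(x_stacked, x_true), np.allclose(x_complex.real, x_true))
```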
Hi, just an update, we now have the PyLops Interferometer code released and working exactly as expected. We are still messing around with preconditioning, but really getting into the details of the implementation now.
Over the next 6 months I intend to implement PyLops for the (simpler) CCD imaging analysis, which should still stand to gain a lot in performance by adopting PyLops. We already have template scripts up and running to do this, so I shouldn't need to waste your time with my silly questions from now on. If some grant applications go as planned, we will probably also be adopting the GPU implementation of PyLops this year (and we can have a fun time thinking about how we adapt the NUFFT operator for this!).
In terms of publishing, the PyAutoLens JOSS review looks like it's wrapping up and I reckon we'll have it published in the next few weeks. However, the current plan is to put it on the arXiv and promote things properly in early March (so we can push a couple of final features through). I think there are astronomers who will be very interested in PyLops, so hopefully this can send some traffic your way; I'll give you a heads up a day or so before PyAutoLens goes on the arXiv :).
Thank you again for all your amazing support, I'll (finally) close this issue, but I'm sure I'll be back to reopen it one day!
Hi @Jammy2211, This is exciting and thanks for letting us know!
Apologies for being a bit slow at replying to the other GitHub issue lately. I just relocated to a new country and moved back to academia, and had a hard time keeping up with things ;) but now I should have even more time to devote to PyLops and to help when you start working on the new exciting steps ;)
And of course, we really appreciate any promotion to PyLops in your community!
Using the SciPy lstsq method, I can compute a solution to the problem Ax = b:
However, I have been unable to reproduce this calculation in PyLops. All of the tutorials I've found deal with scenarios where the matrix A is 1D, or where it has the same dimensions as the data. Copious trial and error has so far failed to get me out of a dimension mismatch. Here is the sort of solution I'm trying:
I have no doubt I'm being very stupid, but if you could give me an example to get my simple problem up and running (or explain what I'm conceptually not grasping), I reckon I'll be able to make progress with PyLops from here.