PyLops / pylops

PyLops – A Linear-Operator Library for Python
https://pylops.readthedocs.io
GNU Lesser General Public License v3.0

Abstraction to `_matvec` and `_rmatvec` #489

Closed · rohanbabbar04 closed this 1 year ago

rohanbabbar04 commented 1 year ago

For issue #369

mrava87 commented 1 year ago

@rohanbabbar04 maybe you didn't understand my suggestion in the previous PR. Since we have the new LinearOperator class, there is no point in creating a BaseLinearOperator; everything should be possible directly with LinearOperator.
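For context, here is a minimal sketch (illustrative only, not the actual PyLops source) of what making `LinearOperator` abstract implies: every concrete operator must define `_matvec`/`_rmatvec` at class level, and the base class itself can no longer be instantiated.

```python
# Illustrative sketch only (not the PyLops implementation): an ABC base with
# abstract _matvec/_rmatvec and a concrete operator subclassing it.
from abc import ABC, abstractmethod

import numpy as np


class AbstractLinearOperator(ABC):
    @abstractmethod
    def _matvec(self, x):
        """Forward operation y = A @ x."""

    @abstractmethod
    def _rmatvec(self, x):
        """Adjoint operation y = A.H @ x."""


class ToyDiagonal(AbstractLinearOperator):
    """Toy diagonal operator (hypothetical, only for this sketch)."""

    def __init__(self, d):
        self.d = np.asarray(d)

    def _matvec(self, x):
        return self.d * x

    def _rmatvec(self, x):
        return np.conj(self.d) * x


ToyDiagonal([1.0, 2.0, 3.0])._matvec(np.ones(3))  # fine: concrete subclass
# AbstractLinearOperator()  # TypeError: Can't instantiate abstract class ...
```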

rohanbabbar04 commented 1 year ago

@mrava87 I don't think we can directly make LinearOperator the abstract class; the tests fail with:

        """
        if isinstance(x, (LinearOperator, spLinearOperator)):
            # cast x to pylops linear operator if not already (this is done
            # to allow mixing pylops and scipy operators)
            Opx = aslinearoperator(x)
>           Op = LinearOperator(Op=_ProductLinearOperator(self, Opx))
E           TypeError: Can't instantiate abstract class LinearOperator with abstract methods _matvec, _rmatvec

..\venv\lib\site-packages\pylops\linearoperator.py:574: TypeError
=============================================== short test summary info ===============================================
FAILED test_blending.py::test_Blending_continuous[par0] - TypeError: Can't instantiate abstract class LinearOperator with abstract methods _matvec, _rmatvec
FAILED test_blending.py::test_Blending_group[par0] - TypeError: Can't instantiate abstract class LinearOperator with abstract methods _matvec, _rmatvec
FAILED test_blending.py::test_Blending_half[par0] - TypeError: Can't instantiate abstract class LinearOperator with abstract methods _matvec, _rmatvec
================================================== 3 failed in 7.24s ==================================================
```
rohanbabbar04 commented 1 year ago

The LinearOperator class is instantiated directly here, so I think the best possible solution is to make a BaseLinearOperator.

mrava87 commented 1 year ago

What if, instead of `Op = LinearOperator(Op=_ProductLinearOperator(self, Opx))`, you simply had `Op = _ProductLinearOperator(self, Opx)`, like for the `_SumLinearOperator` case?
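To illustrate the suggestion (a simplified sketch with hypothetical names, not the actual `_ProductLinearOperator` code): the composed operator is itself a concrete class implementing `_matvec`/`_rmatvec`, so returning it directly avoids re-instantiating the now-abstract base class.

```python
# Simplified sketch of a product operator: y = A @ (B @ x), adjoint y = B.H @ (A.H @ x).
class _ProductSketch:
    def __init__(self, A, B):
        self.A, self.B = A, B

    def _matvec(self, x):
        # forward: apply B first, then A
        return self.A._matvec(self.B._matvec(x))

    def _rmatvec(self, x):
        # adjoint: apply A.H first, then B.H
        return self.B._rmatvec(self.A._rmatvec(x))


# i.e. instead of wrapping the product in the abstract base, simply return
# the concrete product operator:
# Op = _ProductLinearOperator(self, Opx)
```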

rohanbabbar04 commented 1 year ago

Got a new error:


```
par = {'dt': 0.3, 'dtype': 'complex128', 'imag': 1j, 'nt': 21, ...}

    @pytest.mark.parametrize(
        "par", [(par1), (par2), (par3), (par4), (par1j), (par2j), (par3j), (par4j)]
    )
    def test_CausalIntegration2d(par):
        """Dot-test and inversion for CausalIntegration operator for 2d signals"""
        dt = 0.2 * par["dt"]  # need lower frequency in sinusoids for stability
        t = np.arange(par["nt"]) * dt
        x = np.outer(np.sin(t), np.ones(par["nx"])) + par["imag"] * np.outer(
            np.sin(t), np.ones(par["nx"])
        )

        for kind, rf in itertools.product(("full", "half", "trapezoidal"), (False, True)):
            Cop = CausalIntegration(
                (par["nt"], par["nx"]),
                sampling=dt,
                axis=0,
                kind=kind,
                removefirst=rf,
                dtype=par["dtype"],
            )
            rf1 = 1 if rf else 0
            assert dottest(
                Cop,
                (par["nt"] - rf1) * par["nx"],
                par["nt"] * par["nx"],
                complexflag=0 if par["imag"] == 0 else 3,
            )

            # test analytical integration and derivative inversion only for
            # cases where a zero c is required
            if kind != "full" and not rf:
                # numerical integration
                y = Cop * x.ravel()
                y = y.reshape(par["nt"], par["nx"])

                # analytical integration
                yana = (
                    np.outer(-np.cos(t), np.ones(par["nx"]))
                    + np.cos(t[0])
                    + par["imag"]
                    * (np.outer(-np.cos(t), np.ones(par["nx"])) + np.cos(t[0]))
                )
                yana = yana.reshape(par["nt"], par["nx"])

                assert_array_almost_equal(y, yana, decimal=2)

                # numerical derivative
>               Dop = FirstDerivative(
                    (par["nt"], par["nx"]), axis=0, sampling=dt, dtype=par["dtype"]
                )
E               TypeError: Can't instantiate abstract class FirstDerivative with abstract methods _matvec, _rmatvec

test_causalintegration.py:165: TypeError
```
rohanbabbar04 commented 1 year ago

The FirstDerivative, TorchOperator, and SecondDerivative don't have _matvec and _rmatvec methods

mrava87 commented 1 year ago

Ok, we need to investigate this more deeply... the approach you suggest introduces too many changes.

In the worst-case scenario (i.e., we can't make LinearOperator an ABC class), we should swap the roles of BaseLinearOperator and LinearOperator, so that we need very few changes to the codebase for the same outcome.

mrava87 commented 1 year ago

> The FirstDerivative, TorchOperator, and SecondDerivative don't have _matvec and _rmatvec methods

Oh, I see why... for FirstDerivative and SecondDerivative it is because we use _register_multiplications; this can change if needed (though we create the methods inside it - maybe that is something ABC doesn't like).

For TorchOperator, I think we should reconsider it and simply not subclass it... but I need to look into it.
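A minimal sketch of why this trips up an ABC (generic names, not the actual `_register_multiplications` code): the abstract-method check happens when the class is called, before `__init__` runs, so methods attached to the instance do not count as overrides.

```python
from abc import ABC, abstractmethod


class Base(ABC):
    @abstractmethod
    def _matvec(self, x): ...

    @abstractmethod
    def _rmatvec(self, x): ...


class DynamicOp(Base):
    """Registers _matvec/_rmatvec on the instance, in the spirit of a
    _register_multiplications-style helper."""

    def __init__(self, kind="forward"):
        # never reached: the ABC check fails before __init__ is entered,
        # because DynamicOp does not override the abstract methods at
        # class level
        self._matvec = lambda x: x
        self._rmatvec = lambda x: x


# DynamicOp()  # TypeError: Can't instantiate abstract class DynamicOp
#              # with abstract methods _matvec, _rmatvec
```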

mrava87 commented 1 year ago

So, excluding those 3, the approach I suggested would work?

rohanbabbar04 commented 1 year ago

No, the same problem shows up in cls_sparsity.py as well:

```
        # prepare decay (if not passed)
        if perc is None and decay is None:
            self.decay = self.ncp.ones(niter)

        # step size
        if alpha is not None:
            self.alpha = alpha
        elif not hasattr(self, "alpha"):
            if not isinstance(self.Op, LinearOperator):
>              self.Op = LinearOperator(self.Op, explicit=False)
            # compute largest eigenvalues of Op^H * Op
>            Op1 = LinearOperator(self.Op.H * self.Op, explicit=False)
            if get_module_name(self.ncp) == "numpy":
                maxeig: float = np.abs(
                    Op1.eigs(
                        neigs=1,
                        symmetric=True,
                        **self.eigsdict,
                    )[0]
                )
```
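One possible direction, sketched here only as an idea (not necessarily the fix that was merged): wrap the operator with `aslinearoperator`, which is already used inside `linearoperator.py`, instead of instantiating `LinearOperator` directly.

```python
# Hedged sketch of an alternative wrapping for the solver code above
# (illustrative only; the explicit=False flag handling is left out).
from pylops.linearoperator import LinearOperator, aslinearoperator

# if not isinstance(self.Op, LinearOperator):
#     self.Op = aslinearoperator(self.Op)
# # compute largest eigenvalue of Op^H * Op
# Op1 = aslinearoperator(self.Op.H * self.Op)
```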
mrava87 commented 1 year ago

Ok, then I need to look into it myself... this one I don't understand, because LinearOperator as such has those methods...

mrava87 commented 1 year ago

As I said, I am not in favour of making a BaseLinearOperator class... I think this is becoming too messy.

Try to see if we can avoid it and what that would entail - if it's just changing the three operators you mentioned, that may be an easier approach :)

rohanbabbar04 commented 1 year ago

Ok I'll look into this and get back to you :)

rohanbabbar04 commented 1 year ago

@mrava87 I have updated the code according to the above requirements and also updated the description of this PR. I also made some changes based on what I was implementing. Do take a look and give your opinion :)

mrava87 commented 1 year ago

Great! This looks much better at first sight; I'll get onto it soon and give you more detailed feedback :)

mrava87 commented 1 year ago

@rohanbabbar04 I went through your code and gave you detailed comments above. I still need to look at 'torchoperator' and then do a final check that everything works as expected :)

rohanbabbar04 commented 1 year ago

Thank you for reviewing my PR @mrava87. I have updated torchoperator.py after taking a look at the updates in the FirstDerivative and SecondDerivative classes. The docs currently fail, as explained over here (docs failed).

mrava87 commented 1 year ago

@cako this is ready for you to review. It was a bit bumpier than I had initially foreseen after making LinearOperator independent of scipy, but I think we managed to find the least intrusive solution that still gives us the benefit of having ABC and abstract methods.

mrava87 commented 1 year ago

@cako let's try to get this one in before https://github.com/PyLops/pylops/pull/496

mrava87 commented 1 year ago

Mmh, not sure - what would the benefit be?

In the end, all the meat is in _TorchOperator; TorchOperator just stores some variables and calls the other operator's method(s). I think inheriting would require implementing at least the forward method, but I need to check to be sure :)
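For reference, a minimal sketch of the `torch.autograd.Function` pattern being weighed here (generic names and dtype/device handling, not PyLops' `_TorchOperator`): subclassing requires at least static `forward` and `backward` methods, with `backward` applying the operator's adjoint to the incoming gradient.

```python
import numpy as np
import torch


class OperatorFunction(torch.autograd.Function):
    """Bridges a matvec/rmatvec pair into the autograd graph (sketch)."""

    @staticmethod
    def forward(ctx, x, matvec, rmatvec):
        ctx.rmatvec = rmatvec  # keep the adjoint for the backward pass
        y = matvec(x.detach().cpu().numpy())
        return torch.from_numpy(np.asarray(y)).to(x.device)

    @staticmethod
    def backward(ctx, grad_output):
        g = ctx.rmatvec(grad_output.detach().cpu().numpy())
        # one gradient per forward input; the two callables get None
        return torch.from_numpy(np.asarray(g)).to(grad_output.device), None, None
```

Usage would then be something along the lines of `y = OperatorFunction.apply(x, Op.matvec, Op.rmatvec)`.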

cako commented 1 year ago

Ok, sounds good! I'll leave it with you to merge when you are done investigating the TorchOperator inheritance.

mrava87 commented 1 year ago

I did a bit of experimentation and could not find a real benefit to making TorchOperator a child of torch.autograd.Function. For now I prefer not to do so; we can revisit it in the future.