schellekensv / pycle

PYCLE: a Python Compressive Learning toolbox
MIT License

Gradient formula in CLOMP_CKM class #2

Open lucgiffon opened 3 years ago

lucgiffon commented 3 years ago

Hello,

I have a few other questions/remarks regarding the computation of the gradient in the CLOMP_CKM algorithm (lines 307 and 308 of compressive_learning.py):

I have one last, very minor remark about line 304 of compressive_learning.py, where you use the np.vdot function. I believe the two arguments are reversed: to be consistent with the gradient computation that follows, you should be taking the conjugate of the residual and not of the sketch_theta variable (the conjugation is hidden inside np.vdot, as you certainly know). It has no effect on the result, however, because only the real part is relevant.
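For readers following along, here is a minimal standalone check (not pycle code) of the `np.vdot` convention: it conjugates its *first* argument, so swapping the arguments conjugates the result and leaves the real part unchanged.

```python
import numpy as np

# Two arbitrary complex vectors standing in for the residual and sketch_theta
r = np.array([1.0 + 2.0j, 3.0 - 1.0j])
s = np.array([0.5 - 1.0j, 2.0 + 0.5j])

# np.vdot conjugates its FIRST argument: vdot(a, b) == sum(conj(a) * b)
print(np.vdot(r, s))   # <conj(r), s>
print(np.vdot(s, r))   # <conj(s), r> == conj(vdot(r, s))

# Swapping the arguments only flips the sign of the imaginary part,
# so the real part (the only piece used in the objective) is identical.
print(np.real(np.vdot(r, s)), np.real(np.vdot(s, r)))
```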

Thank you in advance, Luc

lucgiffon commented 3 years ago

Hello again, I have made some progress on this question.

I believe the gradient is written in this form because it comes from a more general form that works for any sketching function, as we can see in Keriven's thesis, equation 4.14. In the case of the sketching function `f(x) = exp(-i W^T x)`, the right-hand side cancels out, but I can imagine there are sketching functions for which it wouldn't?
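To spell out my reading of that cancellation (a reconstruction under the convention ⟨a, b⟩ = aᴴb, not a quote of the thesis): with f(θ) the sketch of an atom and J(θ) its Jacobian, the quotient rule gives

$$
\nabla_\theta \, \frac{\operatorname{Re}\langle r, f(\theta)\rangle}{\lVert f(\theta)\rVert}
= \frac{\operatorname{Re}\!\big(J(\theta)^{H} r\big)}{\lVert f(\theta)\rVert}
\;-\; \frac{\operatorname{Re}\langle r, f(\theta)\rangle}{\lVert f(\theta)\rVert^{3}}\,
\operatorname{Re}\!\big(J(\theta)^{H} f(\theta)\big).
$$

For `f(x) = exp(-i W^T x)` every entry of f(θ) has unit modulus, so ‖f(θ)‖ = √m is constant in θ and the second term vanishes; a sketching function whose norm depends on θ would keep it.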

I still think the double normalization is a mistake though, both inside and outside the Jacobian computation.
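Here is a small numerical sketch of that cancellation, with illustrative names (`Omega`, `sketch`, `objective`, `gradient` are mine, not pycle's API), assuming the complex exponential sketch and a single normalization of the atom's sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 3, 50                                   # data dimension, number of frequencies
Omega = rng.standard_normal((d, m))            # illustrative random frequency matrix
r = rng.standard_normal(m) + 1j * rng.standard_normal(m)   # stand-in residual

def sketch(theta):
    # Complex exponential sketch of one atom: f(theta)_j = exp(-i omega_j^T theta)
    return np.exp(-1j * (Omega.T @ theta))

def objective(theta):
    # Re<r, f(theta)> / ||f(theta)||, the correlation maximized in CLOMP step 1
    s = sketch(theta)
    return np.real(np.vdot(r, s)) / np.linalg.norm(s)

def gradient(theta):
    # d f_j / d theta_k = -i Omega[k, j] * f_j(theta); since every |f_j| = 1,
    # ||f(theta)|| = sqrt(m) is constant and the normalization term drops out.
    s = sketch(theta)
    jac = -1j * Omega * s                      # shape (d, m)
    return np.real(jac @ np.conj(r)) / np.sqrt(m)

# Finite-difference check of the analytical gradient
theta0 = rng.standard_normal(d)
eps = 1e-6
num = np.array([(objective(theta0 + eps * e) - objective(theta0 - eps * e)) / (2 * eps)
                for e in np.eye(d)])
print(np.allclose(num, gradient(theta0)))      # expect True
```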

schellekensv commented 3 years ago

Hello Luc,

Thank you for your remarks.

Hope this helps, Vincent