adavoudi / spdnet

Implementation of Deep SPDNet in pytorch
MIT License

Retraction operation #10

Closed Blupblupblup closed 5 years ago

Blupblupblup commented 5 years ago

Hi,

I'm new to geometric deep learning and I'm wondering why you chose to realize the retraction operation with:

```python
data = A + ref
Q, R = data.qr()
sign = (R.diag().sign() + 0.5).sign().diag()
out = Q.mm(sign)
```

instead of

```python
data = A + ref
Q, R = data.qr()
out = Q
```

as is suggested in equation 4.8 page 59 of Optimization Algorithms on Matrix Manifolds (Absil, Mahony, Sepulchre), which is the reference of the original SPDNet paper for this exact operation.
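For context, the extra sign step makes the QR factorization canonical: flipping the columns of Q wherever the corresponding diagonal entry of R is negative yields the unique factorization with a non-negative diagonal of R, while Q·R is unchanged. A minimal numpy sketch of that idea (not the repository's code, which uses PyTorch tensors):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))

Q, R = np.linalg.qr(M)

# Mirrors sign = (R.diag().sign() + 0.5).sign().diag() from the PyTorch code:
# adding 0.5 before the second sign() maps a zero diagonal entry to +1 instead of 0.
d = np.sign(np.sign(np.diag(R)) + 0.5)

Q_fixed = Q * d            # flip the corresponding columns of Q
R_fixed = d[:, None] * R   # flip the corresponding rows of R

# Since d has entries +/-1, the sign flips cancel: Q_fixed @ R_fixed == Q @ R == M,
# but the diagonal of R_fixed is now non-negative.
assert np.allclose(Q_fixed @ R_fixed, M)
assert np.all(np.diag(R_fixed) >= 0)
```

Both `out = Q` and `out = Q.mm(sign)` are orthogonal factors of a valid QR decomposition of `data`; the sign-fixed version just pins down which of the many possible Q matrices is returned, independent of the backend's convention.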

In any case, I take advantage of this post to thank you for your public PyTorch implementation of the paper!

Cheers, Blupon.

adavoudi commented 5 years ago

Hi, just to avoid (any possible) negative values in the output matrix.

Best,

Blupblupblup commented 5 years ago

Thank you for your answer. Could you be more specific about why you chose this particular way of avoiding negative values (why use R)? For example, you could simply have added abs(min(Q)) to the whole matrix, if I'm not mistaken.

Sorry if this question stems from insufficient knowledge of applied maths.

adavoudi commented 5 years ago

I think the main reason was to make my outputs compatible with the original Matlab implementation. Maybe there is a difference in the implementation of the QR decomposition between Matlab and PyTorch.

In any case, it was the only approach that reproduced the original paper's results. That said, I think what you mentioned is also correct.

Blupblupblup commented 5 years ago

Okay, that answers my question. Thanks again!