jasonkyuyim / se3_diffusion

Implementation for SE(3) diffusion model with application to protein backbone generation
https://arxiv.org/abs/2302.02277
MIT License
305 stars 50 forks

issue about amp and rigid transform #31

Closed WeianMao closed 5 months ago

WeianMao commented 1 year ago

Thanks for this amazing work. However, in the file 'openfold/utils/rigid_utils.py', I found that some matrix multiplications are implemented in a strange way. For example:

def rot_vec_mul(r: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
    # Applies a [..., 3, 3] rotation matrix r to a [..., 3] vector t by
    # writing the matrix-vector product out element-wise.
    x, y, z = torch.unbind(t, dim=-1)
    return torch.stack(
        [
            r[..., 0, 0] * x + r[..., 0, 1] * y + r[..., 0, 2] * z,
            r[..., 1, 0] * x + r[..., 1, 1] * y + r[..., 1, 2] * z,
            r[..., 2, 0] * x + r[..., 2, 1] * y + r[..., 2, 2] * z,
        ],
        dim=-1,
    )

Why not use the operator 'A @ B'? And if you want to disable AMP, why not wrap it in 'with torch.cuda.amp.autocast(enabled=False):'? Thanks.
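For concreteness, the alternative the question proposes might look like the sketch below. The function name 'rot_vec_mul_matmul' and the float() casts are illustrative, not from the repo, assuming r is a [..., 3, 3] rotation tensor and t is a [..., 3] vector as in the snippet above.

import torch

def rot_vec_mul_matmul(r: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
    # Rotation apply via the @ operator. Disabling autocast locally keeps
    # the product in float32 even inside an AMP region (CUDA assumed).
    with torch.cuda.amp.autocast(enabled=False):
        return (r.float() @ t.float().unsqueeze(-1)).squeeze(-1)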

jasonkyuyim commented 1 year ago

Hi, I don't have a good answer. I borrowed the code from openfold, which has it implemented like this, and PhD life has led me to believe that if it ain't broke, don't touch it. That said, if there is an optimization or 'A @ B' is better, then I see no reason not to change it.
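For what it's worth, a likely reason (my reading, not confirmed in this thread) is that the hand-written form interacts with AMP: under torch.cuda.amp.autocast, torch.matmul runs in float16, while element-wise multiplies and adds leave float32 inputs in float32. Writing the product out element-wise therefore sidesteps AMP downcasting without an explicit autocast(enabled=False) block. A minimal sketch of the dtype difference, assuming a CUDA device and the shapes from the snippet above:

import torch

r = torch.randn(8, 3, 3, device="cuda")
t = torch.randn(8, 3, device="cuda")

with torch.cuda.amp.autocast():
    # matmul is on autocast's float16 op list, so its inputs are downcast.
    mm = (r @ t.unsqueeze(-1)).squeeze(-1)
    # Element-wise mul/add are not, so float32 inputs stay float32.
    # (This computes only the first output coordinate, for brevity.)
    ew = r[..., 0, 0] * t[..., 0] + r[..., 0, 1] * t[..., 1] + r[..., 0, 2] * t[..., 2]

print(mm.dtype)  # torch.float16
print(ew.dtype)  # torch.float32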