Open Eugene29 opened 10 months ago
Thanks for the experiment and reproduction; the speedups look impressive!
On the other hand, hardcoding cupy as the main linalg library would mean the code can only run on a GPU, with no way to debug it on a CPU. That is a hard, breaking change for many existing pipelines, so I am not sure the PR can be accepted in its current state.
Instead, users should decide on their own whether they'd like to use it on the hardware they have. Perhaps there is a way to rewrite those functions in a device-agnostic manner, as suggested by cupy in the docs?
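The device-agnostic pattern the CuPy docs suggest could be sketched roughly like this: try to import CuPy, fall back to NumPy when it (or a GPU) is unavailable, and return host arrays either way. This is a minimal illustration, not the PR's actual code; the function name `eigh_device_agnostic` and the `use_gpu` flag are placeholders for whatever the config exposes.

```python
import numpy as np

# Try to use CuPy when a GPU is available; fall back to NumPy otherwise.
try:
    import cupy as cp
    _HAVE_GPU = cp.cuda.runtime.getDeviceCount() > 0
except Exception:  # cupy missing, or no CUDA driver/device
    cp = None
    _HAVE_GPU = False

def eigh_device_agnostic(A, use_gpu=True):
    """Symmetric eigendecomposition on GPU if requested and available.

    `use_gpu` stands in for a config flag (e.g. the PR's prep_w_GPU);
    the name here is illustrative only.
    """
    if use_gpu and _HAVE_GPU:
        evals, evecs = cp.linalg.eigh(cp.asarray(A))
        # Move results back to host memory as NumPy arrays.
        return cp.asnumpy(evals), cp.asnumpy(evecs)
    return np.linalg.eigh(np.asarray(A))

# Example on a small symmetric matrix; forcing the CPU path here.
A = np.array([[2.0, 1.0], [1.0, 2.0]])
evals, evecs = eigh_device_agnostic(A, use_gpu=False)
```

Because both `np.linalg.eigh` and `cp.linalg.eigh` share the same interface, the caller never needs to know which backend ran.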
Sorry for the delay. I worked on the device-agnostic part, so it can now run on either GPU or CPU. By default, preprocessing is done on the GPU, but it falls back to NumPy (CPU) if you do not have a GPU, or if you override the default behavior by passing prep_w_GPU: False in the configs.
prep_w_GPU: False vs. default:
Dear Authors, Hi, I'm a student from UCSD. While extending your GPS architecture for one of my projects, I was able to speed up the preprocessing by ~60x.
I have checked that the eigenvectors are approximately the same up to a sign flip, and that the summed (np.sum) difference between the eigenvalues is negligible. Furthermore, I was also able to reproduce one of the results (peptides-struc) with this sped-up preprocessing. Hope this helps!
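A sign-flip-invariant comparison of eigenvectors like the one described above could look as follows. This is an illustrative sketch, not the check actually used in the PR; the helper name is hypothetical.

```python
import numpy as np

def eigvecs_match_up_to_sign(V1, V2, tol=1e-6):
    """Column-wise equality of two eigenvector matrices, up to a sign flip.

    Eigenvectors of a symmetric matrix are only defined up to sign
    (for distinct eigenvalues), so each column's sign is aligned
    before comparing.
    """
    # The sign of the per-column dot product tells us whether column k is flipped.
    signs = np.sign(np.sum(V1 * V2, axis=0))
    signs[signs == 0] = 1.0  # guard against a pathological zero dot product
    return np.allclose(V1, V2 * signs, atol=tol)

# Example: the same decomposition with one column's sign flipped.
A = np.array([[2.0, 1.0], [1.0, 2.0]])
_, V = np.linalg.eigh(A)
V_flipped = V * np.array([1.0, -1.0])  # flip the second eigenvector's sign
```

With degenerate eigenvalues the comparison would need to be done per eigenspace rather than per column, which this sketch does not handle.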
One question, though: I noticed that for the L_heat you are using normalization=None. Is this as intended?

experiments
checking eigenvectors from np.linalg.eigh and cp.linalg.eigh (two lists, shown contiguously)
result reproduced
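For context on the normalization=None question above: for graph Laplacians, no normalization usually means the combinatorial Laplacian L = D - A, whereas symmetric normalization gives L_sym = I - D^{-1/2} A D^{-1/2} with eigenvalues bounded in [0, 2]. A minimal NumPy illustration on a toy graph (not the repo's code) shows the difference:

```python
import numpy as np

# Toy adjacency matrix of a path graph on 3 nodes: 1 - 2 - 3.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
deg = A.sum(axis=1)

# No normalization -> combinatorial Laplacian L = D - A.
L_comb = np.diag(deg) - A

# Symmetric normalization -> L_sym = I - D^{-1/2} A D^{-1/2}.
d_inv_sqrt = 1.0 / np.sqrt(deg)
L_sym = np.eye(3) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

comb_spectrum = np.sort(np.linalg.eigvalsh(L_comb))
sym_spectrum = np.sort(np.linalg.eigvalsh(L_sym))
```

The two spectra differ whenever node degrees are non-uniform, so which Laplacian feeds the heat-kernel encoding is a meaningful modeling choice, hence the question.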