RosettaCommons / RoseTTAFold

This package contains deep learning models and related scripts for RoseTTAFold
MIT License

Using a GPU-capable sequence alignment tool (e.g. COMER2) instead of HHblits #102

Open ragunyrasta opened 3 years ago

ragunyrasta commented 3 years ago

Most of my runtime is spent in HHblits, which runs on the CPU rather than the GPU. Has anyone tried using a GPU-capable equivalent (such as COMER2) for this step? Any thoughts/comments are appreciated.

davidyanglee commented 2 years ago

Hi ragunyrasta, I have a similar problem. Have you found an answer? I am curious to know. Thanks! David.

ragunyrasta commented 2 years ago

Unfortunately, the HHblits developers have told me that they lost funding to develop a GPU-capable version, so I'm going to look into COMER2 and give it a try. In the meantime, here's what I've found:

  1. HHblits is almost completely memory-bound: the CPU sits mostly idle while waiting on memory. So an easy fix may be to make sure your machine has as much RAM as it can take (64 GB on standard motherboards, more on the expensive ones); see the sketch after this list for one way to check whether a run is still disk/memory-bound.
  2. Given that it's memory-bound, I don't expect COMER2 to help much unless it uses a fundamentally different algorithm. Graphics cards have far less memory than the host system, and their memory is soldered on, so it can't be upgraded. I hope I'm wrong.
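
One rough way to check whether HHblits is still waiting on the database files (rather than on compute) is to pre-load them into the page cache and watch the iowait column in top. This is just a sketch, assuming vmtouch is installed, the files fit in free RAM, and the UniRef30_2020_06 path stands in for whatever database you actually use:

```bash
# Sketch: warm the OS page cache with the hhblits database files so lookups
# are served from RAM instead of disk. Paths are placeholders for your setup.
DB=UniRef30_2020_06/UniRef30_2020_06   # database prefix (placeholder)
vmtouch -t ${DB}_cs219.ffdata ${DB}_cs219.ffindex \
           ${DB}_a3m.ffdata   ${DB}_a3m.ffindex \
           ${DB}_hhm.ffdata   ${DB}_hhm.ffindex

# While hhblits runs, a low %wa (iowait) in 'top' means the data is coming
# from RAM; a high %wa means the job is still disk-bound.
top -b -n 1 | head -n 5
```
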
davidyanglee commented 2 years ago

Thank you, this is very helpful! I have 128 GB of RAM, and based on what you said I assume a faster transfer via an M.2 NVMe drive over PCIe (rather than SATA) should help too. Will look into these. Appreciate your help!
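
For what it's worth, a quick way to check which kind of drive the sequence databases sit on and roughly how fast it reads (a rough sketch; device names are placeholders for your machine, and hdparm needs root):

```bash
# See which drives exist and how they are attached.
# TRAN shows the transport (nvme vs sata); ROTA=1 means a spinning disk.
lsblk -d -o NAME,TRAN,ROTA,SIZE,MODEL

# Quick sequential-read benchmark of the drive holding the databases
# (replace /dev/nvme0n1 with the device reported above).
sudo hdparm -t /dev/nvme0n1
```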

ragunyrasta commented 2 years ago

Glad that it helped. One thing to try: set the number of CPUs in your run*.py script to a valid value, and while hhblits is running, run 'top' in a terminal and note how idle the CPU is. Kill the job, increase or decrease the number of CPUs passed to the script, and repeat. Try to find the efficient frontier (i.e., the point where the CPU is maximally busy); that is likely the optimal number of CPUs for running hhblits as quickly as possible on your system. A rough version of that sweep is sketched below.
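
Here's a minimal sketch of that kind of sweep, run against hhblits directly rather than through the pipeline script; the query, database prefix, and thread counts are placeholders for your setup:

```bash
# Time one hhblits search at several thread counts and compare wall time.
# Once the run is memory-bound, adding CPUs stops helping.
QUERY=query.fasta                           # placeholder input sequence
DB=UniRef30_2020_06/UniRef30_2020_06        # placeholder database prefix

for n in 2 4 8 16; do
    /usr/bin/time -v hhblits -cpu "$n" -i "$QUERY" -d "$DB" \
        -oa3m out_cpu${n}.a3m -n 3 2> time_cpu${n}.log
    grep "Elapsed (wall clock)" time_cpu${n}.log
done
```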

davidyanglee commented 2 years ago

Thank you for the hint! I'll play with the number and watch the Conky CPU bars. Best, David

275145 commented 2 years ago

@ragunyrasta How long does it take you to run RoseTTAFold to predict a protein of 300 or 400 amino acids? And why does it (just the e2e shell script) take me three to four hours on an A100 card instead of the few minutes that are reported? This is really confusing to me because I need to analyze a lot of proteins.

davidyanglee commented 2 years ago

I am actually running AlphaFold, not RoseTTAFold. A ~100 aa protein takes me about 15 min (features, CPU) + 4 min (GPU) x 5 models = ~35 min. The reduced-size databases don't speed this up for me. For comparison, a trimer of 900 aa + 100 aa + 100 aa takes about 150 min (features, CPU) + 150 min (GPU) x 5 models = ~15 hrs. My setup is a 16-core AMD Ryzen 9 CPU (I give each feature run 8 threads / 4 cores) and an RTX 2060 Super GPU. In my hands, for small proteins the CPU feature step is the most costly compared to the GPU; for larger proteins the GPU time grows significantly relative to the CPU feature step. Best, David
