dptech-corp / Uni-Dock

Uni-Dock: a GPU-accelerated molecular docking program

Same ligands, same protein PDBQTs - Only fast mode is working #5

Open andreacarotti331 opened 1 year ago

andreacarotti331 commented 1 year ago

Hi again. Using the same config file (with the same receptor and the same ligands' PDBQTs), I get outputs (docking poses) only with the fast search_mode; in detail and balance modes I don't get any poses. Here is the error I receive for every ligand:

```
WARNING: Could not find any conformations completely within the search space.
WARNING: Check that it is large enough for all movable atoms, including those in the flexible side chains.
WARNING: Or could not successfully parse PDBQT input file of ligand #0
```

The search space is the same as in fast mode. Here is an example config file:

```
receptor = protein.pdbqt
dir = ./out_detail
ligand_index = ligand_index.txt
num_modes = 3
center_x = 6.73
center_y = 29.95
center_z = 60.26
size_x = 35
size_y = 35
size_z = 35
seed = 1234
search_mode = detail
```

I'm running on an RTX A4000 (16 GB) and Rocky Linux 8.5. Thanks, Andrea

pkuyyj commented 1 year ago

Hi Andrea, since the warning repeats for every ligand, it is probably caused by insufficient GPU global memory. Please use --max_gpu_memory (in MB) to limit the maximum memory usage of Uni-Dock; if one value doesn't work, lower it further. This problem mostly comes from the different launch settings needed for different types of GPUs, and we only have the time and resources to test a few common GPUs. If you find a good value for the maximum memory usage on an RTX A4000, you are welcome to report it here for future users. Yuejiang
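For example, a minimal invocation might look like the sketch below. It assumes the `unidock` executable and a Vina-style `--config` file like the one posted above; `config_detail.txt` is a hypothetical file name, so adjust it and the memory cap to your setup.

```bash
# Sketch: limit Uni-Dock's GPU memory usage (value in MB).
# config_detail.txt is a hypothetical name for the config file shown above;
# lower the --max_gpu_memory value step by step until the run produces poses.
unidock --config config_detail.txt --max_gpu_memory 12000
```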

andreacarotti331 commented 1 year ago

Hi, thanks. The RTX A4000 has 16 GB of memory. When unidock runs, it fills up all the available memory, regardless of the max_gpu_memory I specify. However, if I specify max_gpu_memory 12000, it runs and I get the docking poses, even if (I repeat) the memory usage I see in nvidia-smi is always at the maximum (15878 MiB). My bests, Andrea

pkuyyj commented 1 year ago

Thanks for your report. Don't worry, this situation is normal. Since memory usage patterns vary across GPUs (especially the RTX series), it's hard to predict the exact memory used by the CUDA threads with the linear regression that is fitted manually and integrated into Uni-Dock. Therefore, max_gpu_memory can be inaccurate, and we recommend choosing a value that doesn't cause a breakdown. If we get access to an RTX A4000, the regression function can be improved. Yuejiang

tom832 commented 1 year ago

I'm hitting the same issue with a 4090 (24 GB).

andreacarotti331 commented 1 year ago

Did you try setting max_gpu_memory to lower values, e.g. 20000 or less? Do some tests, lowering it until you get the docking poses; it worked for me. On my RTX A4000 (16 GB) I used 12000. My bests
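Something like the loop below could automate that search. It is a rough sketch only: it assumes the `unidock` binary, a Vina-style `--config` flag, and that a successful run writes pose PDBQT files into the `dir` from the config; adjust paths and flag spellings to your install.

```bash
#!/usr/bin/env bash
# Rough sketch: try progressively lower --max_gpu_memory values (in MB) until
# docking poses show up in the output directory. Assumes config.txt sets
# dir = ./out_detail (as in the config above) and that a successful run writes
# pose PDBQT files there. Flag names follow the ones discussed in this thread.
for mem in 15000 14000 13000 12000 11000 10000; do
    rm -rf ./out_detail && mkdir -p ./out_detail
    unidock --config config.txt --max_gpu_memory "$mem"
    if ls ./out_detail/*.pdbqt >/dev/null 2>&1; then
        echo "Poses obtained with --max_gpu_memory $mem"
        break
    fi
done
```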

tom832 commented 1 year ago

> Did you try setting max_gpu_memory to lower values, e.g. 20000 or less? Do some tests, lowering it until you get the docking poses; it worked for me. On my RTX A4000 (16 GB) I used 12000. My bests

It worked well when my ligand number was less than 45 (with no max_gpu_memory limit), but when the number of ligands increased, it failed and reported:

```
WARNING: Could not find any conformations completely within the search space.
WARNING: Check that it is large enough for all movable atoms, including those in the flexible side chains.
WARNING: Or could not successfully parse PDBQT input file of ligand #0
```

And when I set max_gpu_memory lower than 20000, it failed and reported ERROR: Empty ligand list.

pkuyyj commented 1 year ago

> > Did you try setting max_gpu_memory to lower values, e.g. 20000 or less? Do some tests, lowering it until you get the docking poses; it worked for me. On my RTX A4000 (16 GB) I used 12000. My bests
>
> It worked well when my ligand number was less than 45 (with no max_gpu_memory limit), but when the number of ligands increased, it failed with the WARNING about conformations not being within the search space. And when I set max_gpu_memory lower than 20000, it failed and reported ERROR: Empty ligand list.

Understood. The best way to solve this problem is to recalculate the memory function on the RTX A4000. However, we don't have this GPU. Could you please give me access to your GPU so I can correct this?

YumizSui commented 11 months ago

Hi, I seem to be facing a similar problem in my environment (Tesla P100-SXM2-16GB); it works on my RTX 3090. For instance, even when I specify the maximum GPU memory with --max_gpu_memory 10000, the program appears to execute all the ligands listed in ligand_index concurrently in a single batch:

```
Batch 1 size: 200  // the number of ligands specified in ligand_index
```

Moreover, if I set a smaller value like --max_gpu_memory 5000, the output I receive is ERROR: Empty ligand list. However, it worked when I reduced the number of ligands specified in ligand_index. I believe it would be more useful if I could explicitly set the maximum number of ligands per batch via a parameter like --max_batch. My bests
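Until something like --max_batch exists, a manual batching workaround might look like the sketch below. Assumptions: ligand_index.txt lists one ligand PDBQT path per line, and --ligand_index / --dir can also be given on the command line (mirroring the config-file options); check your Uni-Dock version's help before relying on this, and the chunk size of 50 and memory cap are arbitrary starting points.

```bash
#!/usr/bin/env bash
# Sketch of manual batching: split the ligand index into chunks of 50 ligands
# and run Uni-Dock once per chunk, each with its own output directory.
# Assumes one ligand PDBQT path per line in ligand_index.txt.
split -l 50 ligand_index.txt chunk_
for chunk in chunk_*; do
    mkdir -p "out_${chunk}"
    unidock --config config.txt --ligand_index "$chunk" --dir "out_${chunk}" --max_gpu_memory 10000
done
```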

ysyecust commented 6 months ago

Hi @YumizSui, in https://github.com/dptech-corp/Uni-Dock/pull/77 we have updated the GPU memory allocation mechanism, which can now allocate batch sizes automatically much more reliably. You are welcome to try the new version.