Acellera / htmd

HTMD: Programming Environment for Molecular Discovery
https://software.acellera.com/docs/latest/htmd/index.html

Adaptive MD on local GPU #1047

Closed vas2201 closed 1 year ago

vas2201 commented 1 year ago

Hello, I am running the Jupyter notebook on a local GPU (not submitting jobs to a cluster). When I run the command "acemd3 input" inside the folder of one of the adaptive MD simulations, it works. However, when I use the following commands, I get no output:

local = LocalGPUQueue()
local.submit('./prod/')
local.wait()

Can you please suggest how I can run the adaptive MD on a local GPU without submitting a job?

Thanks Vas

stefdoerr commented 1 year ago

is there a run.sh file in the ./prod/ folder? what does it contain?
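For context, LocalGPUQueue executes an executable run.sh inside each submitted directory and that script is what actually launches acemd3. A minimal stdlib sketch that generates such a script (the directory name "prod" here just mirrors the thread; assumes acemd3 is on PATH at run time):

```python
import os
import stat

def write_run_sh(simdir):
    """Write a minimal run.sh that launches acemd3 and captures its output to log.txt."""
    path = os.path.join(simdir, "run.sh")
    with open(path, "w") as f:
        f.write("#!/bin/bash\nacemd3 > log.txt 2>&1\n")
    # The queue runs the script directly, so the executable bit must be set
    os.chmod(path, os.stat(path).st_mode | stat.S_IXUSR)
    return path

# Example: prepare run.sh for a simulation directory
os.makedirs("prod", exist_ok=True)
script = write_run_sh("prod")
print(open(script).read())
```

If run.sh is missing, not executable, or has a malformed shebang, the queue will appear to queue jobs but produce no log or trajectory files.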

vas2201 commented 1 year ago

Please disregard the message above to avoid confusion. I have attached a JPG of the Jupyter notebook (adaptive protocol) as a Drive link for your review. I am running adaptive MD on a target RNA on a local GPU (not submitting the job to an HPC cluster). It looks like it is running, but there is no log file or xtc file in the directory (adaptivemd/input/e1s9_generators).

Could you please suggest how to fix the problem?

Regards Vas

Google Drive link (Jupyter notebook): https://drive.google.com/file/d/1rtmgQ-LCwjLcrWPl3kK-rFRi2SyZemp_/view?usp=share_link

run.sh


#!/bin/bash

acemd3 >log.txt 2>&1

Output:

ad.run()
2023-02-21 18:14:20,572 - htmd.adaptive.adaptive - INFO - Processing epoch 0
2023-02-21 18:14:20,572 - htmd.adaptive.adaptive - INFO - Epoch 0, generating first batch
2023-02-21 18:14:20,576 - htmd.adaptive.adaptive - INFO - Generators folder has no subdirectories, using folder itself
2023-02-21 18:14:21,701 - jobqueues.util - INFO - Trying to determine all GPU devices
2023-02-21 18:14:21,741 - jobqueues.util - INFO - GPU devices requested: 0
2023-02-21 18:14:21,741 - jobqueues.util - INFO - GPU devices visible: 1
2023-02-21 18:14:21,742 - jobqueues.localqueue - INFO - Using GPU devices
2023-02-21 18:14:21,743 - jobqueues.util - INFO - Trying to determine all GPU devices
2023-02-21 18:14:21,765 - jobqueues.localqueue - INFO - Queueing /home/nmrbox/spenumutchu/Desktop/Projects/aufr12_project/SL2_WT/hmtd_sl2wt/adaptive_md/adaptivemd/input/e1s1_generators
2023-02-21 18:14:21,768 - jobqueues.localqueue - INFO - Queueing /home/nmrbox/spenumutchu/Desktop/Projects/aufr12_project/SL2_WT/hmtd_sl2wt/adaptive_md/adaptivemd/input/e1s2_generators
2023-02-21 18:14:21,770 - jobqueues.localqueue - INFO - Queueing /home/nmrbox/spenumutchu/Desktop/Projects/aufr12_project/SL2_WT/hmtd_sl2wt/adaptive_md/adaptivemd/input/e1s3_generators
2023-02-21 18:14:21,772 - jobqueues.localqueue - INFO - Queueing /home/nmrbox/spenumutchu/Desktop/Projects/aufr12_project/SL2_WT/hmtd_sl2wt/adaptive_md/adaptivemd/input/e1s4_generators
2023-02-21 18:14:21,774 - jobqueues.localqueue - INFO - Queueing /home/nmrbox/spenumutchu/Desktop/Projects/aufr12_project/SL2_WT/hmtd_sl2wt/adaptive_md/adaptivemd/input/e1s5_generators
2023-02-21 18:14:21,776 - jobqueues.localqueue - INFO - Queueing /home/nmrbox/spenumutchu/Desktop/Projects/aufr12_project/SL2_WT/hmtd_sl2wt/adaptive_md/adaptivemd/input/e1s6_generators
2023-02-21 18:14:21,778 - jobqueues.localqueue - INFO - Queueing /home/nmrbox/spenumutchu/Desktop/Projects/aufr12_project/SL2_WT/hmtd_sl2wt/adaptive_md/adaptivemd/input/e1s7_generators
2023-02-21 18:14:21,780 - jobqueues.localqueue - INFO - Queueing /home/nmrbox/spenumutchu/Desktop/Projects/aufr12_project/SL2_WT/hmtd_sl2wt/adaptive_md/adaptivemd/input/e1s8_generators
2023-02-21 18:14:21,782 - jobqueues.localqueue - INFO - Queueing /home/nmrbox/spenumutchu/Desktop/Projects/aufr12_project/SL2_WT/hmtd_sl2wt/adaptive_md/adaptivemd/input/e1s9_generators
2023-02-21 18:14:21,784 - jobqueues.localqueue - INFO - Queueing /home/nmrbox/spenumutchu/Desktop/Projects/aufr12_project/SL2_WT/hmtd_sl2wt/adaptive_md/adaptivemd/input/e1s10_generators
2023-02-21 18:14:21,786 - htmd.adaptive.adaptive - INFO - Sleeping for 120 seconds.
2023-02-21 18:16:21,887 - htmd.adaptive.adaptive - INFO - Processing epoch 1
2023-02-21 18:16:21,888 - htmd.adaptive.adaptive - INFO - Retrieving simulations.
2023-02-21 18:16:21,889 - htmd.adaptive.adaptive - INFO - 10 simulations in progress
2023-02-21 18:16:21,889 - htmd.adaptive.adaptive - INFO - Sleeping for 120 seconds.
2023-02-21 18:18:21,992 - htmd.adaptive.adaptive - INFO - Processing epoch 1
2023-02-21 18:18:21,992 - htmd.adaptive.adaptive - INFO - Retrieving simulations.
2023-02-21 18:18:21,993 - htmd.adaptive.adaptive - INFO - 10 simulations in progress
2023-02-21 18:18:21,993 - htmd.adaptive.adaptive - INFO - Sleeping for 120 seconds.
2023-02-21 18:20:22,091 - htmd.adaptive.adaptive - INFO - Processing epoch 1
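When the log above says the simulations are "in progress" but no output appears, a quick sanity check is to scan each input directory for the files run.sh should produce. A small stdlib sketch (the directory layout mirrors the log; the mock tree at the bottom is purely illustrative):

```python
import glob
import os

def check_outputs(inputdir):
    """Report, per simulation directory, whether log.txt and any .xtc file exist yet."""
    status = {}
    for simdir in sorted(glob.glob(os.path.join(inputdir, "e*_generators"))):
        has_log = os.path.isfile(os.path.join(simdir, "log.txt"))
        has_xtc = bool(glob.glob(os.path.join(simdir, "*.xtc")))
        status[os.path.basename(simdir)] = (has_log, has_xtc)
    return status

# Example with a mock directory tree: one directory with a log but no trajectory yet
os.makedirs("adaptivemd/input/e1s1_generators", exist_ok=True)
open("adaptivemd/input/e1s1_generators/log.txt", "w").close()
print(check_outputs("adaptivemd/input"))
```

A directory with neither log.txt nor an xtc file usually means run.sh never ran successfully; one with a log but no trajectory points at an acemd3 error recorded in that log.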

vas2201 commented 1 year ago

The issue is resolved after updating the script as follows:

queue = LocalGPUQueue()
queue.datadir = './data'
queue.devices = [1, 2, 4]
ad = AdaptiveMD()
ad.app = queue
ad.nmin = 5
ad.nmax = 10
ad.nepochs = 30

I can now see the log files and xtc files of the simulations in the /input/e1s1_sl2_10ns_1/ folder (single GPU).

I use the following projection (all proton distances) to construct the MSM.

protsel = 'nucleic and name H**'
ad.projection = MetricSelfDistance(protsel)

ad.projection = MetricRmsd(trajrmsdstr='nucleic and noh', refmol=Molecule('generators/structure.pdb'))

Does "name H**" select all the protons for the distances?

Thanks for your help. Regards Vas

stefdoerr commented 1 year ago

hmmm I'm not sure what H** does in atomselect. Do you want the distances of all hydrogens to each other in your nucleic acid? If yes, I'd use nucleic and element H. But keep in mind this will create an insanely large distance matrix: if you have e.g. 500 hydrogens, that is 500*499/2 = 124750 distances for each simulation frame. You will run out of memory very fast and crash your computer. You probably need to select a better metric for your conformations, like the distances of some backbone atoms or of a few specific protons (e.g. only one per residue).
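The memory warning above is easy to verify with a back-of-the-envelope calculation: an all-vs-all self-distance metric over n atoms produces n(n-1)/2 values per frame. A short sketch (the frame count and float32 storage are illustrative assumptions, not numbers from the thread):

```python
from math import comb

def n_pairs(n_atoms):
    """Number of unique atom pairs in an all-vs-all distance metric."""
    return comb(n_atoms, 2)

# Matches the estimate in the comment above: 500*499/2
print(n_pairs(500))  # -> 124750 distances per frame

# Rough memory cost for float32 (4-byte) distances over many frames
frames = 100_000
bytes_needed = n_pairs(500) * frames * 4
print(f"{bytes_needed / 1e9:.1f} GB")  # -> 49.9 GB
```

Picking one representative proton per residue instead (say, ~50 atoms) drops this to comb(50, 2) = 1225 distances per frame, which is why a sparser selection is usually the better metric.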

vas2201 commented 1 year ago

Thank you.