When running PB and decomp, for some reason the residues are not printed in the *.mdin files...
Fixed in commit 60bf041c31c4da4e114b66f13cb65214ffb35f1b.
please update gmx_MMPBSA as follows:
python -m pip install git+https://github.com/Valdes-Tresanco-MS/gmx_MMPBSA -U
Okay, thank you. I will check it out.
I tried to run the command above but I got this error:
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
did you activate the gmx_MMPBSA environment before running the command?
Yes, I did
The new version requires Python 3.9, and you may have an older version... could you please check with conda list? If you have an older version, try updating Python to 3.9 and running the command again.
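For reference, two quick ways to check which Python the environment is using (the conda command applies only if the environment is conda-based):

$ python --version      # interpreter picked up by the active environment
$ conda list python     # in a conda environment, shows the installed python package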
Yes, you are right. I had Python 3.8, so after I upgraded it to 3.9 I was able to run the command successfully.
So let me try to run the calculations again.
Still the same error:
[INFO ] Running calculations on normal system...
[INFO ] Beginning PB calculations with /cvmfs/soft.computecanada.ca/easybuild/software/2020/avx2/MPI/gcc9/openmpi4/ambertools/21/bin/sander
[INFO ] calculating complex contribution...
  0%| | 0/20 [elapsed: 00:00 remaining: ?]
At line 143 of file /tmp/ebuser/avx2/AmberTools/21/foss-2020a/amber20_src/AmberTools/src/sander/rgroup.F90 (unit = 5, file = '_GMXMMPBSA_pb_decomp_com.mdin')
Fortran runtime error: End of file
Error termination. Backtrace:
[ERROR ] CalcError /cvmfs/soft.computecanada.ca/easybuild/software/2020/avx2/MPI/gcc9/openmpi4/ambertools/21/bin/sander failed with prmtop COM.prmtop!
If you are using sander and PB calculation, check the *.mdout files to get the sander error.
Check the gmx_MMPBSA.log file to report the problem.
File "/scratch/oladayo/Dynamics/LIG2/L1/venv_gmxMMPBSA/bin/gmx_MMPBSA", line 8, in <module>
If you are using sander and PB calculation, check the *.mdout files to get the sander error.
Check the gmx_MMPBSA.log file to report the problem. Exiting. All files have been retained.
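As the message says, the retained *.mdout files hold the underlying sander error. A generic way to pull it up (a sketch only; exact filenames vary by run, but the run's temporaries share the _GMXMMPBSA_ prefix seen above):

$ ls _GMXMMPBSA_*.mdout*          # list the sander output files kept after the failure
$ tail -n 30 _GMXMMPBSA_*.mdout*  # the error message is usually near the end of each file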
I was wondering if you could send me a standard .in file that you have used in the past for one of your publications on protein-ligand complexes instead?
Could you please attach the gmx_MMPBSA.log file? Regarding the standard file, there is not really a standard file... more like default parameters, and these should work with most systems. The default parameters are already set when generating the .in file with gmx_MMPBSA --create_input.
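For illustration, a minimal PB + decomposition input along the lines of what gmx_MMPBSA --create_input pb decomp can generate (a sketch only; the parameter names follow the gmx_MMPBSA documentation, and the values are illustrative, not tuned for any particular system):

$ gmx_MMPBSA --create_input pb decomp

&general
sys_name="Prot-Lig",            # label for the system
startframe=1, endframe=20,      # frames of the trajectory to analyze
/
&pb
istrng=0.15,                    # ionic strength (M)
/
&decomp
idecomp=2,                      # per-residue decomposition scheme
dec_verbose=0,                  # output verbosity
print_res="within 4",           # residues within 4 A of the ligand
/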
Okay, thanks again for your support. Here is the log file: log.zip
But here gmx_MMPBSA is not updated... it is still on version 1.6.0 when it should be on v1.6.0+4.g9351537.
But I ran this command (python -m pip install git+https://github.com/Valdes-Tresanco-MS/gmx_MMPBSA -U) after updating my python module to 3.9, and it was successful.
It seems you weren't working in the right environment, because in the log file you sent me gmx_MMPBSA is still v1.6.0 and Python is 3.8.10.
Oh, it seems I'm doing something wrong then
I can't tell from the picture whether you are in the gmx_MMPBSA environment.
I don't know how to check if I'm in the gmx_MMPBSA environment.
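A few quick checks (a generic sketch, assuming a virtualenv named venv_gmxMMPBSA as used later in this thread):

$ echo $VIRTUAL_ENV     # non-empty and pointing at venv_gmxMMPBSA when the virtualenv is active
$ which gmx_MMPBSA      # should resolve to a path inside venv_gmxMMPBSA/bin
$ python --version      # should match the version the environment was created with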
Are you using an sbatch file on Compute Canada? Can you send this file so we can see how you are running gmx_MMPBSA?
Here it is...
module purge
module load gcc/9.3.0 python/3.8 ambertools/21 gromacs/2021.4 qt/5.15.2
source venv_gmxMMPBSA/bin/activate
export OMP_NUM_THREADS="${SLURM_CPUS_PER_TASK:-1}"
echo "Starting run at: date
"
mpirun -np 24 gmx_MMPBSA -O -i mmpbsa.in -cs step5_production.tpr -ci index2.ndx -cg 18 14 -ct md_0_300_noPBC.xtc -cp topol.top
Do you have an isolated environment for gmx_MMPBSA in the scratch folder?
Yes, I do. I run the sbatch script from the folder where I have the isolated environment
If you want to run gmx_MMPBSA in your local folder, how do you access gmx_MMPBSA? Do you activate the environment?
I use these commands:
virtualenv venv_gmxMMPBSA
source venv_gmxMMPBSA/bin/activate
Each time I use GB and decomp, I am able to get results, i.e., it runs successfully without any error. I only face the error when I try to run PB and decomp.
At the moment we have never configured a virtualenv ourselves, and there are several possible compatibility issues here. I need to know several things related to your environment:
Just from your report, we realize that it is not good practice to use conda on HPCs. We need to review the options related to virtualenv, Singularity, or Docker to package gmx_MMPBSA according to HPC policies.
I followed the instructions provided by Alliance Canada for activating the environment (https://docs.alliancecan.ca/wiki/GROMACS):
gmx_MMPBSA
gmx_MMPBSA[18] is a tool based on AMBER's MMPBSA.py aiming to perform end-state free energy calculations with GROMACS files.
Unlike the older G_MMPBSA[17], which is only compatible with older versions of GROMACS, gmx_MMPBSA can be used with current versions of GROMACS and AmberTools.
Please be aware that gmx_MMPBSA uses implicit solvents and there have been studies[19] that conclude that there are issues with the accuracy of these methods for calculating binding free energies.
Installing gmx_MMPBSA into a virtualenv
The following has been tested with a combination of gmx_MMPBSA 1.5.0.3, gromacs/2021.4 and ambertools/21. While this should work with other recent versions of GROMACS, currently AmberTools 21 is the only version that is expected to work.
$ module purge
$ module load gcc/9.3.0 python/3.8 gromacs/2021.4
$ module load ambertools/21
$ virtualenv venv_gmxMMPBSA
$ source venv_gmxMMPBSA/bin/activate
$ pip install --no-index "numpy~=1.22.0" gmx_MMPBSA
$ python -m pip install git+https://github.com/ParmEd/ParmEd.git@16fb236
Please note that ParmEd versions up to 3.4.3 contain a bug that was fixed in commit 16fb236. Until a version greater than 3.4.3 has been released, we need to use this unreleased version.
$ module load qt/5.15.2
$ gmx_MMPBSA -h
$ gmx_MMPBSA_test -ng -n 4
Fortunately, running the self-tests is very quick, therefore it's permissible to run them on the login node.
Later when using gmx_MMPBSA in a job you need to load the modules and activate the virtualenv as follows:
module purge
module load gcc/9.3.0 python/3.8 ambertools/21 gromacs/2021.4 qt/5.15.2
source venv_gmxMMPBSA/bin/activate
Follow these instructions to install the new gmx_MMPBSA on Compute Canada (now the Digital Research Alliance of Canada):
$ module purge
$ module load gcc/9.3.0 python/3.9.6 gromacs/2022.3
$ module load ambertools/21
$ virtualenv venv_gmxMMPBSA --python /cvmfs/soft.computecanada.ca/easybuild/software/2020/avx2/Core/python/3.9.6/bin/python
$ source venv_gmxMMPBSA/bin/activate
$ pip install --no-index "numpy~=1.22.0"
$ python -m pip install git+https://github.com/Valdes-Tresanco-MS/gmx_MMPBSA -U
$ python -m pip install git+https://github.com/Valdes-Tresanco-MS/ParmEd.git@v3.4
$ gmx_MMPBSA -h
$ gmx_MMPBSA_test -ng -n 4
Later when using gmx_MMPBSA in a job you need to load the modules and activate the virtualenv as follows:
module purge
module load gcc/9.3.0 python/3.9.6 gromacs/2022.3 qt/5.15.2
source venv_gmxMMPBSA/bin/activate
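As a quick sanity check (not part of the official instructions), you can confirm that the environment picked up the development build before submitting a job:

$ pip show gmx_MMPBSA   # the Version field should read something like v1.6.0+4.g9351537, not plain 1.6.0
$ python --version      # should report 3.9.6
$ gmx_MMPBSA -h         # should start without import errors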
let us know if that works correctly...
It works well now. Thanks a lot for your help.
Please, I have a personal question: which of the polar solvation models (PB/GB) would you recommend for publication, Sir, based on your experience? I understand GB is fast, but I am not sure its calculations are as accurate as PB's. Also, when I use GB in my calculations I tend to get quite large negative delta values, as against the relatively small negative delta values obtained when PB is used (at times even positive, even after increasing the dielectric constant). This made me conclude that I should go with GB in my calculations, but I'm worried that reviewers may question why I didn't use PB instead, and I'm not sure the aforementioned reason is convincing enough. Please kindly let me know your thoughts on this.
Thank you
There is a lot of debate about using either PB or GB. The fact is there is no definitive conclusion on which one is better (although PB is theoretically more rigorous, while GB is an analytical approximation). That being said, since you are using CHARMM, the go-to would be PB. I personally use these end-point methods to calculate and report relative differences between 2 or more systems rather than absolute ones. That way, it doesn't matter that much which model you use, as the error will cancel out.
hope this helps!
Thanks a lot. This is very helpful. But if I may ask, what do you mean by "report relative differences between 2 or more systems rather than absolute ones"?
Thanks
Let's say you have two complexes, one with an experimental DG of -10 kcal/mol and another with -8 kcal/mol... these are absolute values and indeed difficult to reproduce using MMGBSA or MMPBSA. However, you can try to reproduce the relative difference, which is 2 kcal/mol (DG2-DG1 = -8-(-10)). If you get, let's say, -55 kcal/mol for the first one and -53 kcal/mol for the second one, the relative difference will be 2 kcal/mol (DG2-DG1 = -53-(-55)), the same as when using absolute values.
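As a tiny sketch of that arithmetic with the hypothetical numbers above:

$ dg1_exp=-10; dg2_exp=-8; dg1_calc=-55; dg2_calc=-53
$ echo "experimental DDG: $((dg2_exp - dg1_exp)) kcal/mol"    # 2 kcal/mol
$ echo "calculated  DDG:  $((dg2_calc - dg1_calc)) kcal/mol"  # 2 kcal/mol, same relative difference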
Oh, I see. I understand now.
Thank you
How about in a situation where I don't have experimental DG values for my complexes, and only docking scores are available?
Well, that is tougher, as you don't have a ground truth to compare against.
Thank you so much for your time. I've been able to learn a lot from you. I think you can close the issue.
great!
[INFO ] Running calculations on normal system...
[INFO ] Beginning PB calculations with /cvmfs/soft.computecanada.ca/easybuild/software/2020/avx2/MPI/gcc9/openmpi4/ambertools/21/bin/sander
[INFO ] calculating complex contribution...
  0%| | 0/20 [elapsed: 00:00 remaining: ?]
At line 143 of file /tmp/ebuser/avx2/AmberTools/21/foss-2020a/amber20_src/AmberTools/src/sander/rgroup.F90 (unit = 5, file = '_GMXMMPBSA_pb_decomp_com.mdin')
Fortran runtime error: End of file
Error termination. Backtrace:
0 0x2ad841dcd730 in ???
1 0x2ad841dce289 in ???
2 0x2ad841dcef6f in ???
3 0x2ad84200276b in ???
4 0x2ad842002d62 in ???
5 0x2ad841fff49b in ???
6 0x2ad842004444 in ???
7 0x2ad84200550b in ???
8 0x6c1b65 in ???
9 0x64a9b5 in ???
10 0x5ff859 in ???
11 0x5fd743 in ???
12 0x5fd799 in ???
13 0x2ad8420eee1a in ???
14 0x44e8e9 in ???
15 0xffffffffffffffff in ???
[ERROR ] CalcError /cvmfs/soft.computecanada.ca/easybuild/software/2020/avx2/MPI/gcc9/openmpi4/ambertools/21/bin/sander failed with prmtop COM.prmtop!
If you are using sander and PB calculation, check the *.mdout files to get the sander error.
Check the gmx_MMPBSA.log file to report the problem.
File "/home/oladayo/venv_gmxMMPBSA/bin/gmx_MMPBSA", line 8, in <module>
sys.exit(gmxmmpbsa())
File "/home/oladayo/venv_gmxMMPBSA/lib/python3.8/site-packages/GMXMMPBSA/app.py", line 101, in gmxmmpbsa
app.run_mmpbsa()
File "/home/oladayo/venv_gmxMMPBSA/lib/python3.8/site-packages/GMXMMPBSA/main.py", line 205, in run_mmpbsa
self.calc_list.run(rank, self.stdout)
File "/home/oladayo/venv_gmxMMPBSA/lib/python3.8/site-packages/GMXMMPBSA/calculation.py", line 142, in run
calc.run(rank, stdout=stdout, stderr=stderr)
File "/home/oladayo/venv_gmxMMPBSA/lib/python3.8/site-packages/GMXMMPBSA/calculation.py", line 625, in run
GMXMMPBSA_ERROR('%s failed with prmtop %s!\n\t' % (self.program, self.prmtop) +
File "/home/oladayo/venv_gmxMMPBSA/lib/python3.8/site-packages/GMXMMPBSA/exceptions.py", line 169, in init
raise exc(msg + '\nCheck the gmx_MMPBSA.log file to report the problem.')
CalcError: /cvmfs/soft.computecanada.ca/easybuild/software/2020/avx2/MPI/gcc9/openmpi4/ambertools/21/bin/sander failed with prmtop COM.prmtop!
If you are using sander and PB calculation, check the *.mdout files to get the sander error
Check the gmx_MMPBSA.log file to report the problem. Exiting. All files have been retained.
_Originally posted by @Ridwan20-alt in https://github.com/Valdes-Tresanco-MS/gmx_MMPBSA/issues/350#issuecomment-1456554202_