MD-Studio / cerise-mdstudio-das5

A specialisation of cerise for MDStudio and DAS5
Apache License 2.0

Gromacs Error #4

Closed: felipeZ closed this issue 6 years ago

felipeZ commented 6 years ago

I have been using the following Python script to run jobs with Cerise-client:

from time import sleep
import cerise_client.service as cc

def remove_previous(srv):
    """Remove old jobs"""
    jobs = srv.list_jobs()
    print(jobs)
    if jobs:
        for j in jobs:
            print("removing: ", j.name)
            srv.destroy_job(j)

# Create a new service for user myuser, with given cluster credentials
srv = cc.require_managed_service(
        'cerise-mdstudio-das5-myuser', 29593,
        'mdstudio/cerise-mdstudio-das5:develop',
        'user',
        'passwd')

cc.start_managed_service(srv)
# Create a job and set workflow and inputs
remove_previous(srv)
job = srv.create_job('example_job2')
job.set_workflow('md_workflow.cwl')
job.add_input_file('protein_pdb', 'CYP19A1vs.pdb')
job.add_input_file('protein_top', 'CYP19A1vs.top')
job.add_input_file('protein_itp', 'CYP19A1vs-posre.itp')
job.add_input_file('ligand_pdb', 'BHC89.pdb')
job.add_input_file('ligand_top', 'BHC89.itp')
job.add_input_file('ligand_itp', 'BHC89-posre.itp')
job.set_input('force_field', 'amber99SB')
job.set_input('sim_time', 0.001)

# Start it
job.run()

# Give the service a chance to stage things
while job.state == 'Waiting':
    sleep(1)

# store this somewhere, in a database
persisted_srv = cc.service_to_dict(srv)
persisted_job_id = job.id          # this as well

# Stop the service
cc.stop_managed_service(srv)

# Here, you would quit Python, shut down the computer, etc.

# To resume where we left off
srv = cc.service_from_dict(persisted_srv)
cc.start_managed_service(srv)
job = srv.get_job_by_id(persisted_job_id)

# Wait for job to finish
while job.is_running():
    sleep(10)

# Process output
if job.state == 'Success':
    job.outputs['trajectory'].save_as('CYP19A1vs_BHC89.trr')
else:
    print('There was an error: ' + job.state)
    print(job.log)

# Clean up the job and the service
srv.destroy_job(job)
cc.destroy_managed_service(srv)

Suddenly, the following error is thrown:

Workflow did not produce a value for at least output gromacslog
Final process status is permanentFail

Due to the following error in gromit:

which: no gsed in (/home/user/.cerise/api/files/mdstudio/github/cerise-mdstudio-das5/mdstudio/gromacs/gromacs-2016.3/bin:/cm/shared/apps/openmpi/gcc/64/1.10.1/bin:/cm/local/apps/cuda/libs/current/bin:/cm/shared/apps/cuda75/sdk/7.5.18/bin/x86_64/linux/release:/cm/shared/apps/cuda75/toolkit/7.5.18/bin:/cm/shared/apps/cuda75/gdk/352.79/nvidia-healthmon:/home/fzapata/emacs-25.3/bin:/home/fzapata/anaconda3/bin:/cm/shared/apps/slurm/15.08.6/sbin:/cm/shared/apps/slurm/15.08.6/bin:/cm/local/apps/gcc/5.2.0/bin:/usr/local/bin:/usr/bin:/opt/ibutils/bin:/sbin:/usr/sbin:/cm/local/apps/environment-modules/3.2.10/bin)
which: no voms-proxy-info in (/home/user/.cerise/api/files/mdstudio/github/cerise-mdstudio-das5/mdstudio/gromacs/gromacs-2016.3/bin:/cm/shared/apps/openmpi/gcc/64/1.10.1/bin:/cm/local/apps/cuda/libs/current/bin:/cm/shared/apps/cuda75/sdk/7.5.18/bin/x86_64/linux/release:/cm/shared/apps/cuda75/toolkit/7.5.18/bin:/cm/shared/apps/cuda75/gdk/352.79/nvidia-healthmon:/home/user/emacs-25.3/bin:/home/user/anaconda3/bin:/cm/shared/apps/slurm/15.08.6/sbin:/cm/shared/apps/slurm/15.08.6/bin:/cm/local/apps/gcc/5.2.0/bin:/usr/local/bin:/usr/bin:/opt/ibutils/bin:/sbin:/usr/sbin:/cm/local/apps/environment-modules/3.2.10/bin)
cp: ‘/tmp/cerise_runner_w_bj1ie7/BHC89.itp’ and ‘./BHC89.itp’ are the same file
cp: ‘/tmp/cerise_runner_w_bj1ie7/BHC89-posre.itp’ and ‘./BHC89-posre.itp’ are the same file
cp: ‘/tmp/cerise_runner_w_bj1ie7/CYP19A1vs-posre.itp’ and ‘./CYP19A1vs-posre.itp’ are the same file
  File "<string>", line 1
    print int(1000*0.001/0.004 + 0.5 )
            ^
SyntaxError: invalid syntax
  File "<string>", line 1
    print int(1000*0.05/0.004 + 0.5)
            ^
SyntaxError: invalid syntax
..................
Command line:
  gmx editconf -f /tmp/cerise_runner_w_bj1ie7/BHC89.pdb -o BHC89.pdb.gro

  File "<string>", line 1
    print 0.5*2.25
            ^
SyntaxError: Missing parentheses in call to 'print'. Did you mean print(0.5*2.25)?

Any idea of what's going on?

felipeZ commented 6 years ago

Same behavior in Python 2.7 and 3.5.

LourensVeen commented 6 years ago

That's weird: it's a Python error, but Gromacs is entirely C++. Maybe gromit uses Python to calculate something, and only works with Python 2.7; I'm using Python 3 on the cluster because cwltiny requires it. I'll be back on Cerise and this project on Monday, and I'll have a look at it then.

felipeZ commented 6 years ago

I have found that the error is triggered by the following line in my .bashrc:

export PATH="/home/user/miniconda3/bin:$PATH"

If I comment out that line, everything goes well.
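
A quick way to see what that export changes (just a sketch; this assumes the job environment inherits the same PATH as my login shell):

import subprocess

# Report which interpreter a bare "python" resolves to on the current PATH,
# and its version. With the miniconda3 export active it points at
# ~/miniconda3/bin/python (Python 3); without it, the system Python 2.7.
subprocess.call(['which', 'python'])
subprocess.call(['python', '--version'])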

LourensVeen commented 6 years ago

Ah, yes, gromit does use Python as a fancy calculator, and those Python statements won't work in Python 3 because the print calls are missing parentheses. The default Python on the DAS-5 is Python 2.7, so I guess that's what gromit normally picks up when it's called. That line in your .bashrc probably changes the default to Python 3, which then breaks it.
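
To make that concrete, here is a minimal sketch of the kind of call gromit makes (not gromit's actual code), using the first expression from your log:

import subprocess

# gromit shells out to "python -c" as a calculator. The statement form below
# is valid Python 2 but a SyntaxError under Python 3, which is exactly what
# the job log shows:
subprocess.call(['python', '-c', 'print int(1000*0.001/0.004 + 0.5)'])

# Under Python 3 the same calculation would need the function form:
subprocess.call(['python', '-c', 'print(int(1000*0.001/0.004 + 0.5))'])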

I think the correct solution is to add a module load python/2.7.13 line to mdstudio/gromit/call_gromit.sh, so that it always gets Python 2. Of course the really correct solution is to replace gromit, but we'll get to that :).