Zuricho / ParallelFold

A modified version of AlphaFold that splits the CPU part (MSA and template search) from the GPU part (model inference). This can accelerate AlphaFold when predicting multiple structures.
https://parafold.sjtu.edu.cn
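The two-stage idea behind this split can be sketched as follows. This is a minimal illustration only, not ParaFold's actual API: the function names, the feature dict, and the pickle file are all invented for the example. The point is that features computed once on a CPU partition can be saved to disk and consumed later on a GPU partition.

```python
# Sketch of the CPU/GPU split: stage 1 builds features on CPU nodes,
# stage 2 consumes them on a GPU node. All names here are illustrative.
import pickle

def cpu_stage(sequence):
    """Stand-in for MSA and template search: returns a feature dict."""
    return {"sequence": sequence, "msa_depth": 128}  # placeholder features

def gpu_stage(features):
    """Stand-in for model inference on the precomputed features."""
    return f"structure for {features['sequence']}"

# Stage 1 (CPU partition): compute features once and save them to disk.
features = cpu_stage("GA98")
with open("features.pkl", "wb") as fh:
    pickle.dump(features, fh)

# Stage 2 (GPU partition, possibly a separate job): load and run inference.
with open("features.pkl", "rb") as fh:
    loaded = pickle.load(fh)
print(gpu_stage(loaded))
```

Because the expensive CPU stage is decoupled, many sequences can be featurized in parallel on cheap CPU nodes while a single GPU node drains the queue of saved feature files.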

Run on GPU Error #39


erdaqorri commented 1 year ago

Hi, Thank you for developing Parafold.

I successfully ran the CPU part using the script you provided; however, when I try to run the GPU part I get the error attached below. This is the script I am using to submit the job to the cluster:

```bash
#!/bin/bash
#SBATCH --job-name=parafold_gpu    # Remove the file extension from the input filename
#SBATCH --output=parafold_gpu.out  # Remove the file extension from the input filename
#SBATCH --nodes=1
#SBATCH --cpus-per-task=8
#SBATCH --mem=80GB
#SBATCH --partition=gpu
#SBATCH --gres=gpu:1

./run_alphafold.sh \
  -d /home/p_af2qe/monomer_af2_db \
  -o output \
  -m model_1,model_2,model_3,model_4,model_5 \
  -p monomer_ptm \
  -i /home/p_af2qe/ParallelFold/input/mono_set1/GA98.fasta \
  -t 1800-01-01
```

Thank you for your help!

Alexa

parafold_gpu_part_err.txt

hermannschwaerzlerUIBK commented 1 year ago

Hi Alexa,

I had the very same problem. I was able to solve it like this:

This installs some missing utilities, one of which is ptxas.
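The exact install command Hermann used was not captured in this transcript, so it is left as-is. For anyone hitting the same symptom, a quick way to confirm whether `ptxas` (the CUDA PTX assembler that JAX/XLA needs at runtime) is actually visible to your job is a small check like this; the function name is mine, not part of ParaFold:

```python
# Diagnostic sketch: is ptxas on PATH inside the job environment?
import shutil

def ptxas_available():
    """Return the full path to ptxas if it is on PATH, else None."""
    return shutil.which("ptxas")

path = ptxas_available()
if path:
    print(f"ptxas found at {path}")
else:
    print("ptxas not found; install the CUDA toolkit on the GPU node")
```

Running this inside the same SLURM allocation as the GPU job (rather than on the login node) is what matters, since GPU nodes often have a different module environment.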

Regards, Hermann

erdaqorri commented 1 year ago

Hi Hermann,

Thank you for your reply! I have just installed the CUDA version you mentioned and am waiting for the job to run. I will follow up here with the result (for future users).

Cheers, Alexa

erdaqorri commented 1 year ago

@hermannschwaerzlerUIBK

I also struggled for about two months to make the Singularity image work. However, two weeks ago I managed to run it successfully on our HPC cluster using the non-Docker version (https://github.com/kalininalab/alphafold_non_docker). It is very easy to set up, it seems stable, and it works on both the CPU and GPU partitions for me.

P.S. It is not version 2.3.2 but 2.3.1.

I hope this helps, feel free to ask if you need some suggestions.

Cheers, Alexa