chenshixinnb opened this issue 2 years ago
Did you move (or rename) the colabfold_batch directory from somewhere? If you moved it, the minimization step will fail.
> Did you move (or rename) the colabfold_batch directory from somewhere? If you moved it, the minimization step will fail.
No, I redeployed it today and the problem still occurred.
I installed it from the script. After the installation completed, I exported the PATH and ran it.
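As an aside on the moved-directory failure mode mentioned above: the conda entry-point scripts bake an absolute interpreter path into their shebang line at install time, so moving the tree leaves them pointing at a path that no longer exists. A minimal sketch of how one could check this (the helper name and this probing idea are my own illustration, not part of ColabFold):

```python
from pathlib import Path

def shebang_interpreter(script_path):
    """Return the interpreter path from a script's shebang line, or None.

    Illustrative helper: reads the first line of a script and extracts
    the interpreter if the line starts with '#!'.
    """
    lines = Path(script_path).read_text().splitlines()
    if lines and lines[0].startswith("#!"):
        return lines[0][2:].strip().split()[0]
    return None
```

If `Path(shebang_interpreter("/path/to/colabfold_batch")).exists()` is False, the tree was likely moved or renamed, and a clean reinstall is the easiest fix.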
Please show me the details:
nvidia-smi -L
and /path/to/nvcc --version
colabfold_batch --amber --templates ...
and its complete log messages, including the git commit version of colabfold.

[cloudam@master ~]$ nvidia-smi -L
GPU 0: Tesla T4 (UUID: GPU-05c88ce7-ee31-0f33-2019-262c41c84afe)

[cloudam@master ~]$ /public/software/.local/easybuild/software/CUDA/11.1.1/bin/nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2020 NVIDIA Corporation
Built on Mon_Oct_12_20:09:46_PDT_2020
Cuda compilation tools, release 11.1, V11.1.105
Build cuda_11.1.TC455_06.29190527_0
[cloudam@master work]$ colabfold_batch --amber --templates --num-recycle 3 --use-gpu-relax test.fasta outputdir
2022-08-25 09:23:16,269 Running colabfold 1.3.0 (a2b37ccccaf8336b68f2d716a18224dcce926658)
WARNING: You are welcome to use the default MSA server, however keep in mind that it's a limited shared resource only capable of processing a few thousand MSAs per day. Please submit jobs only from a single IP address. We reserve the right to limit access to the server case-by-case when usage exceeds fair use.
If you require more MSAs:
You can precompute all MSAs with colabfold_search
or
You can host your own API and pass it to --host-url
2022-08-25 09:23:40,026 Could not open font file /usr/share/fonts/google-noto-emoji/NotoColorEmoji.ttf: In FT2Font: Can not load face. Unknown file format.
2022-08-25 09:23:40,955 generated new fontManager
2022-08-25 09:23:56,439 Found 8 citations for tools or databases
2022-08-25 09:24:03,244 Query 1/1: sp_F6I457_AG11C_VITVI_Agamous-like_MADS-box_protein_AGL11_OS_Vitis_vinifera_OX_29760_GN_AGL11_PE_2_SV_1 (length 223)
COMPLETE: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 150/150 [elapsed: 00:03 remaining: 00:00]
2022-08-25 09:24:16,079 Sequence 0 found templates: ['4ox0_B', '4ox0_D', '1mnm_A', '1k6o_B', '1k6o_C', '6wc2_C', '6bz1_B', '1hbx_D', '1hbx_E', '1n6j_A', '1egw_C', '1egw_D', '3mu6_D', '7nb0_A', '7nb0_D']
2022-08-25 09:24:17,419 Running model_3
2022-08-25 09:26:20,511 model_3 took 118.6s (3 recycles) with pLDDT 82.6 and ptmscore 0.401
Traceback (most recent call last):
File "/public/software/.local/easybuild/software/ColabFold/colabfold_batch/colabfold-conda/bin/colabfold_batch", line 8, in <module>
Does it work well if you remove --use-gpu-relax?
> Does it work well if you remove --use-gpu-relax?
After removing it, it runs normally and the output is normal.
I removed --use-gpu-relax, but the error still occurred.
nvidia-smi -L
GPU 0: NVIDIA A40 (UUID: GPU-3a0904e1-20fe-becb-1241-e7f57ae80815)

nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Sun_Mar_21_19:15:46_PDT_2021
Cuda compilation tools, release 11.3, V11.3.58
Build cuda_11.3.r11.3/compiler.29745058_0
Same issue here on an M1 Mac. In my case I do not have an NVIDIA card, of course 😅
File "/Users/xxxxx/opt/ColabFold/colabfold_batch/colabfold-conda/lib/python3.8/site-packages/alphafold/relax/amber_minimize.py", line 419, in _run_one_iteration
raise ValueError(f"Minimization failed after {max_attempts} attempts.")
ValueError: Minimization failed after 100 attempts.
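For context, the traceback above comes from AlphaFold's relax step, which retries energy minimization up to a fixed number of attempts before giving up. A minimal sketch of that retry pattern (the function names and exception handling here are illustrative, not the actual amber_minimize implementation):

```python
def run_with_retries(minimize_once, max_attempts=100):
    """Retry minimize_once() until it succeeds or attempts run out.

    Sketch of the retry pattern behind the "Minimization failed after
    100 attempts" error; minimize_once and max_attempts are illustrative.
    """
    for _ in range(max_attempts):
        try:
            return minimize_once()
        except Exception:
            continue  # e.g. the minimizer reported bad energies; try again
    raise ValueError(f"Minimization failed after {max_attempts} attempts.")
```

If every single attempt fails, for example because the GPU OpenMM platform cannot initialize at all, the loop exhausts all attempts and raises, which is exactly what the log shows.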
Was this bug fixed? I also have the same bug, and I'm not sure how to fix it.
I freshly made the installer for ColabFold > 1.5.0 on Feb 6. Please try the installation again after removing the old one, and report whether the bug is fixed or not.
ok! I'll do this :-)
[EDIT 1]
Just to let you know, I encounter this issue when using the --cpu argument:
(base) Egon@MacBook ColabFold % colabfold_batch --amber --templates --num-recycle 3 --use-gpu-relax /Users/Egon/opt/ColabFold/Predictions/Source/A0AK37.fasta /Users/Egon/opt/ColabFold/Predictions/Destination/ --cpu
usage: colabfold_batch [-h] [--stop-at-score STOP_AT_SCORE] [--num-recycle NUM_RECYCLE]
[--recycle-early-stop-tolerance RECYCLE_EARLY_STOP_TOLERANCE]
[--num-ensemble NUM_ENSEMBLE] [--num-seeds NUM_SEEDS] [--random-seed RANDOM_SEED]
[--num-models {1,2,3,4,5}] [--recompile-padding RECOMPILE_PADDING]
[--model-order MODEL_ORDER] [--host-url HOST_URL] [--data DATA]
[--msa-mode {mmseqs2_uniref_env,mmseqs2_uniref,single_sequence}]
[--model-type {auto,alphafold2_ptm,alphafold2_multimer_v1,alphafold2_multimer_v2,alphafold2_multimer_v3}]
[--amber] [--num-relax NUM_RELAX] [--templates]
[--custom-template-path CUSTOM_TEMPLATE_PATH]
[--rank {auto,plddt,ptm,iptm,multimer}]
[--pair-mode {unpaired,paired,unpaired_paired}]
[--sort-queries-by {none,length,random}] [--save-single-representations]
[--save-pair-representations] [--use-dropout] [--max-seq MAX_SEQ]
[--max-extra-seq MAX_EXTRA_SEQ] [--max-msa MAX_MSA] [--disable-cluster-profile]
[--zip] [--use-gpu-relax] [--save-all] [--save-recycles]
[--overwrite-existing-results]
input results
colabfold_batch: error: unrecognized arguments: --cpu
Maybe I should also remove the --use-gpu-relax argument if I want to use only the CPU?
[EDIT 2] In my case there is no way to make it work, either with the standard command or with this one:
(base) Egon@MacBook ColabFold % colabfold_batch --amber --templates --num-recycle 3 /Users/Egon/opt/ColabFold/Predictions/Source/A0AK37.fasta /Users/Egon/opt/ColabFold/Predictions/Destination/
2023-02-17 14:16:52,939 Running colabfold 1.5.1 (b4c1bc7cf89bc0bd577c5a9d3c1f7bedc1f74152)
WARNING: You are welcome to use the default MSA server, however keep in mind that it's a
limited shared resource only capable of processing a few thousand MSAs per day. Please
submit jobs only from a single IP address. We reserve the right to limit access to the
server case-by-case when usage exceeds fair use. If you require more MSAs: You can
precompute all MSAs with `colabfold_search` or host your own API and pass it to `--host-url`
2023-02-17 14:16:52,982 WARNING: no GPU detected, will be using CPU
Traceback (most recent call last):
File "/Users/Egon/opt/ColabFold/colabfold_batch/colabfold-conda/bin/colabfold_batch", line 8, in <module>
sys.exit(main())
File "/Users/Egon/opt/ColabFold/colabfold_batch/colabfold-conda/lib/python3.8/site-packages/colabfold/batch.py", line 1778, in main
run(
File "/Users/Egon/opt/ColabFold/colabfold_batch/colabfold-conda/lib/python3.8/site-packages/colabfold/batch.py", line 1215, in run
from colabfold.alphafold.models import load_models_and_params
File "/Users/Egon/opt/ColabFold/colabfold_batch/colabfold-conda/lib/python3.8/site-packages/colabfold/alphafold/models.py", line 5, in <module>
from alphafold.model import model, config, data
File "/Users/Egon/opt/ColabFold/colabfold_batch/colabfold-conda/lib/python3.8/site-packages/alphafold/model/model.py", line 20, in <module>
from alphafold.model import features
File "/Users/Egon/opt/ColabFold/colabfold_batch/colabfold-conda/lib/python3.8/site-packages/alphafold/model/features.py", line 19, in <module>
from alphafold.model.tf import input_pipeline
File "/Users/Egon/opt/ColabFold/colabfold_batch/colabfold-conda/lib/python3.8/site-packages/alphafold/model/tf/input_pipeline.py", line 17, in <module>
from alphafold.model.tf import data_transforms
File "/Users/Egon/opt/ColabFold/colabfold_batch/colabfold-conda/lib/python3.8/site-packages/alphafold/model/tf/data_transforms.py", line 18, in <module>
from alphafold.model.tf import shape_helpers
File "/Users/Egon/opt/ColabFold/colabfold_batch/colabfold-conda/lib/python3.8/site-packages/alphafold/model/tf/shape_helpers.py", line 16, in <module>
import tensorflow.compat.v1 as tf
ModuleNotFoundError: No module named 'tensorflow'
I tried reinstalling all required dependencies, as well as Python@3.8 using brew but same issue. :-(
> Maybe I should also remove the --use-gpu-relax argument if I want to use only the CPU?
Yes, --use-gpu-relax is not available on macOS since Macs have no NVIDIA GPUs. Moreover, the --cpu argument has been obsolete since ColabFold 1.5.0.
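On machines that do have a GPU, if you still want to force CPU execution now that --cpu is gone, one common workaround (my assumption, not an official ColabFold option) is to hide all CUDA devices before JAX is imported, using the standard CUDA_VISIBLE_DEVICES mechanism:

```python
import os

# Hiding every CUDA device makes JAX (and TensorFlow) fall back to CPU.
# An empty string means "no GPUs"; this must be set before jax is imported.
os.environ["CUDA_VISIBLE_DEVICES"] = ""
```

From a shell, the equivalent is prefixing the command, e.g. `CUDA_VISIBLE_DEVICES="" colabfold_batch ...`.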
> I tried reinstalling all required dependencies, as well as Python@3.8 using brew but same issue. :-(
Hmm, it seems tensorflow-cpu is not installed on your Mac for some reason. Is your Mac an M1? I have now tested on my fresh Intel Mac with ColabFold 1.5.2, upgraded yesterday, and no errors were detected. Since the implementation of the dependencies has been updated, your issue may be solved by the latest version.
This error occurs for me only when I run the code on a cluster. Weird... I notice that the outputs are not reranked, so I guess it failed while trying to relax the rank 1 model. Here is the command I used:
colabfold_batch --amber --num-relax 1 --use-gpu-relax --num-recycle 6 --zip --templates input_dir output_dir
I found a solution. Just remove --use-gpu-relax and it works!!
Hi, I am struggling with the same problem. In my case, analyzing any protein always causes it. Even though removing --use-gpu-relax solves the problem, I still want to use the GPU to make relaxation quicker. Here are my environment and the problem. FYI, using Ubuntu 22.04 also caused the same problem.
Windows version: Windows 11 Pro 22H2 (2023/02/24, build 22621.1485), Windows Feature Experience Pack 1000.22639.1000.0
uname -a
Linux username 5.15.90.1-microsoft-standard-WSL2 #1 SMP Fri Jan 27 02:56:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
lsb_release -a
LSB Version: core-11.1.0ubuntu2-noarch:security-11.1.0ubuntu2-noarch
Distributor ID: Ubuntu
Description: Ubuntu 20.04.3 LTS
Release: 20.04
Codename: focal
nvidia-smi (Wed Apr 12 08:54:35 2023)
NVIDIA-SMI 530.41.03, Driver Version: 531.41, CUDA Version: 12.1
GPU 0: NVIDIA GeForce RTX 4080, Bus-Id 00000000:01:00.0, 38C, P8, 12W / 320W, 2012MiB / 16376MiB, 0% utilization
Processes: PID 22, Type G, /Xwayland, GPU memory N/A
nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Tue_Feb__7_19:32:13_PST_2023
Cuda compilation tools, release 12.1, V12.1.66
Build cuda_12.1.r12.1/compiler.32415258_0
$ colabfold_batch --templates --amber --use-gpu-relax --num-recycle 3 localcolabfold/fasta/GFP.fasta localcolabfold/fasta/gfp2
2023-04-12 09:07:49,261 Running colabfold 1.5.2 (09ba35598fb7224180a5c0336cd3d378a33817e1)
2023-04-12 09:07:50,524 Running on GPU
2023-04-12 09:07:50,752 Found 7 citations for tools or databases
2023-04-12 09:07:50,752 Query 1/1: GFPmonomer (length 238)
COMPLETE: 100%|██████████████████████████████████████████████████████| 150/150 [elapsed: 00:03 remaining: 00:00]
2023-04-12 09:08:24,785 Sequence 0 found templates: ['7bwn_C', '7bwn_M']
2023-04-12 09:08:24,806 Setting max_seq=512, max_extra_seq=194
2023-04-12 09:08:43,397 alphafold2_ptm_model_1_seed_000 recycle=0 pLDDT=97.5 pTM=0.922
2023-04-12 09:08:45,817 alphafold2_ptm_model_1_seed_000 recycle=1 pLDDT=97.6 pTM=0.924 tol=0.184
2023-04-12 09:08:48,148 alphafold2_ptm_model_1_seed_000 recycle=2 pLDDT=97.6 pTM=0.922 tol=0.0587
2023-04-12 09:08:50,520 alphafold2_ptm_model_1_seed_000 recycle=3 pLDDT=97.6 pTM=0.922 tol=0.0171
2023-04-12 09:08:50,520 alphafold2_ptm_model_1_seed_000 took 21.2s (3 recycles)
2023-04-12 09:08:52,905 alphafold2_ptm_model_2_seed_000 recycle=0 pLDDT=98 pTM=0.931
2023-04-12 09:08:55,306 alphafold2_ptm_model_2_seed_000 recycle=1 pLDDT=98.1 pTM=0.935 tol=0.156
2023-04-12 09:08:57,703 alphafold2_ptm_model_2_seed_000 recycle=2 pLDDT=98.1 pTM=0.934 tol=0.033
2023-04-12 09:09:00,121 alphafold2_ptm_model_2_seed_000 recycle=3 pLDDT=98 pTM=0.933 tol=0.0218
2023-04-12 09:09:00,121 alphafold2_ptm_model_2_seed_000 took 9.6s (3 recycles)
2023-04-12 09:09:08,754 alphafold2_ptm_model_3_seed_000 recycle=0 pLDDT=93.9 pTM=0.887
2023-04-12 09:09:11,175 alphafold2_ptm_model_3_seed_000 recycle=1 pLDDT=95.2 pTM=0.899 tol=1.17
2023-04-12 09:09:13,600 alphafold2_ptm_model_3_seed_000 recycle=2 pLDDT=95.9 pTM=0.901 tol=0.843
2023-04-12 09:09:15,953 alphafold2_ptm_model_3_seed_000 recycle=3 pLDDT=95.8 pTM=0.899 tol=0.108
2023-04-12 09:09:15,953 alphafold2_ptm_model_3_seed_000 took 15.8s (3 recycles)
2023-04-12 09:09:18,404 alphafold2_ptm_model_4_seed_000 recycle=0 pLDDT=93.9 pTM=0.886
2023-04-12 09:09:20,799 alphafold2_ptm_model_4_seed_000 recycle=1 pLDDT=95.7 pTM=0.901 tol=1.09
2023-04-12 09:09:23,189 alphafold2_ptm_model_4_seed_000 recycle=2 pLDDT=95.9 pTM=0.901 tol=0.807
2023-04-12 09:09:25,639 alphafold2_ptm_model_4_seed_000 recycle=3 pLDDT=96 pTM=0.903 tol=0.25
2023-04-12 09:09:25,640 alphafold2_ptm_model_4_seed_000 took 9.7s (3 recycles)
2023-04-12 09:09:28,079 alphafold2_ptm_model_5_seed_000 recycle=0 pLDDT=94.1 pTM=0.89
2023-04-12 09:09:30,465 alphafold2_ptm_model_5_seed_000 recycle=1 pLDDT=96.4 pTM=0.911 tol=0.811
2023-04-12 09:09:33,382 alphafold2_ptm_model_5_seed_000 recycle=2 pLDDT=95.9 pTM=0.908 tol=0.171
2023-04-12 09:09:36,547 alphafold2_ptm_model_5_seed_000 recycle=3 pLDDT=95.9 pTM=0.905 tol=0.665
2023-04-12 09:09:36,547 alphafold2_ptm_model_5_seed_000 took 10.9s (3 recycles)
2023-04-12 09:09:36,579 reranking models by 'plddt' metric
Traceback (most recent call last):
  File "/home/username/./localcolabfold/colabfold-conda/bin/colabfold_batch", line 8, in <module>
    sys.exit(main())
  File "/home/username/localcolabfold/colabfold-conda/lib/python3.9/site-packages/colabfold/batch.py", line 1811, in main
    run(
  File "/home/username/localcolabfold/colabfold-conda/lib/python3.9/site-packages/colabfold/batch.py", line 1489, in run
    results = predict_structure(
  File "/home/username/localcolabfold/colabfold-conda/lib/python3.9/site-packages/colabfold/batch.py", line 573, in predict_structure
    pdb_lines = relax_me(pdb_lines=unrelaxed_pdb_lines[key], use_gpu=use_gpu_relax)
  File "/home/username/localcolabfold/colabfold-conda/lib/python3.9/site-packages/colabfold/batch.py", line 359, in relax_me
    relaxed_pdb_lines, _, _ = amber_relaxer.process(prot=pdb_obj)
  File "/home/username/localcolabfold/colabfold-conda/lib/python3.9/site-packages/alphafold/relax/relax.py", line 62, in process
    out = amber_minimize.run_pipeline(
  File "/home/username/localcolabfold/colabfold-conda/lib/python3.9/site-packages/alphafold/relax/amber_minimize.py", line 476, in run_pipeline
    ret = _run_one_iteration(
  File "/home/username/localcolabfold/colabfold-conda/lib/python3.9/site-packages/alphafold/relax/amber_minimize.py", line 420, in _run_one_iteration
    raise ValueError(f"Minimization failed after {max_attempts} attempts.")
ValueError: Minimization failed after 100 attempts.
Hi,
Has there been any progress on this issue? I'm running into the same error. I did a fresh install yesterday (12/12/2023) using the bash script from the GitHub repository and still ran into the same issue.
For me, the issue gets resolved with openmm==7.7.0 (mamba install openmm==7.7.0 -y)
> For me, the issue gets resolved with openmm==7.7.0 (mamba install openmm==7.7.0 -y)
Thanks very much! This helped me!
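To confirm which OpenMM version actually ended up in the colabfold-conda environment before and after the downgrade, a small standard-library check can help (the helper name is mine; "openmm" is the distribution name pinned above):

```python
from importlib import metadata

def installed_version(dist_name):
    """Return the installed version string of a distribution, or None if absent."""
    try:
        return metadata.version(dist_name)
    except metadata.PackageNotFoundError:
        return None

# Example (run inside the colabfold-conda environment):
#   installed_version("openmm")  # should report "7.7.0" after the downgrade
```

Run it with the same Python that colabfold_batch uses, otherwise you may be inspecting a different environment.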
Recently, this problem has occurred in all of my runs.