Urinx / alphafold_pytorch

An implementation of DeepMind's AlphaFold, based on PyTorch, for research
Apache License 2.0

RuntimeError: expected device cuda:0 but got device cpu #1

Closed · pentadotddot closed this issue 4 years ago

pentadotddot commented 4 years ago

Hi!

I am trying to use your great implementation of AlphaFold in PyTorch. I am new to PyTorch, as I have mostly used TensorFlow, and I ran into this problem:

python alphafold.py -i test_data/T0953s2/T0953s2.tfrec -o T0953s2_out -m tf_model_path/
Input file: test_data/T0953s2/T0953s2.tfrec
Output dir: T0953s2_out/Distogram/0
Distogram model: tf_model_path/873731
Replica: 0
Device: cuda
Model parameters: 21182345
Data: T0953s2-l128_s32 128
Traceback (most recent call last):
  File "alphafold.py", line 221, in <module>
    run_eval(TARGET_PATH, MODEL_PATH, REPLICA, OUT_DIR, DEVICE)
  File "alphafold.py", line 68, in run_eval
    out = model(x_2d, crop_x, crop_y)
  File "/scratch/domeemod/anaconda3/envs/pytorch_env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/mnt/meganeura/domeemod/alphafold-pytroch/alphafold_pytorch/network.py", line 295, in forward
    biases = self.build_crops_biases(config.position_specific_bias_size, self.position_specific_bias, crop_x, crop_y)
  File "/mnt/meganeura/domeemod/alphafold-pytroch/alphafold_pytorch/network.py", line 269, in build_crops_biases
    row_offsets = start_diag + increment
RuntimeError: expected device cuda:0 but got device cpu

From some searching on the net, I found that this error is related to how tensors are assigned to the CPU and the GPU. I have installed everything properly, and I am using your provided scripts exactly as described in your README file.
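For context, this is the usual shape of the problem in PyTorch, sketched with hypothetical tensor names rather than the repository's own variables: one operand follows the model onto the GPU, while a tensor created on the fly inside the forward pass stays on the CPU, and adding them raises exactly this RuntimeError.

```python
import torch

# Minimal sketch of the mismatch (hypothetical names and values, not the repo's code):
# one tensor is created on the model's device, the other defaults to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

start_diag = torch.arange(4, dtype=torch.float32, device=device)  # follows the model's device
increment = torch.arange(4, dtype=torch.float32)                  # created with no device, so CPU

# On a CUDA machine, `start_diag + increment` would raise the device-mismatch
# RuntimeError; moving one operand onto the other's device avoids it.
row_offsets = start_diag + increment.to(start_diag.device)
print(row_offsets)
```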

We are doing research related to phylogenetics here in Hungary, and your implementation could provide us with some really valuable insights.

Thank you for uploading this project to GitHub; I hope you can help us move forward.

Best wishes!

Kind regards, Demeter Márton

Urinx commented 4 years ago

Thanks for reporting this. It is fixed now and works fine on GPU.
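For anyone hitting the same error on an older checkout, the usual remedy looks roughly like the sketch below. The helper name and values are hypothetical, and this is not necessarily the exact patch applied in the repository: the idea is simply that tensors created inside the forward pass should be placed on the same device as the parameter they are combined with.

```python
import torch

def build_offsets(bias: torch.Tensor, n: int) -> torch.Tensor:
    # Hypothetical helper: derive the device from an existing parameter so that
    # every freshly created tensor ends up on the same device as that parameter.
    device = bias.device
    start_diag = torch.arange(n, device=device)
    increment = torch.arange(n, device=device)
    return start_diag + increment  # both operands share one device, so no RuntimeError

bias = torch.zeros(8, device="cuda" if torch.cuda.is_available() else "cpu")
print(build_offsets(bias, 8))
```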