RL4M / KeAP

An official implementation of Protein Representation Learning via Knowledge Enhanced Primary Structure Reasoning
MIT License

CUDA out of memory #1

Closed · Dreamer312 closed 1 year ago

Dreamer312 commented 1 year ago

Hi, appreciate your code.

I am trying to fine-tune on the contact prediction task with your script:

bash run_main.sh \
      --model /root/cmh/KeAP/pretrained_model/ \
      --output_file output/contact-KeAP20 \
      --task_name contact \
      --do_train True \
      --epoch 5 \
      --optimizer AdamW \
      --per_device_batch_size 1 \
      --gradient_accumulation_steps 8 \
      --eval_step 50 \
      --eval_batchsize 1 \
      --warmup_ratio 0.08 \
      --learning_rate 3e-5 \
      --seed 3 \
      --frozen_bert False

but I run out of GPU memory; my GPU is an RTX 3090 with 24 GB.

Could you please tell me which GPUs you used, or whether there is anything wrong with my command?

yunxiangfu2001 commented 1 year ago

Hi, thank you for your interest in our work.

We used an NVIDIA Tesla V100 SXM2-32GB GPU for pre-training and fine-tuning. The contact prediction task requires the most memory of all the downstream tasks, and 24 GB may not be enough.
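One generic mitigation worth trying under a tight memory budget (a sketch, not specific to this repo's code) is gradient checkpointing, which discards intermediate activations during the forward pass and recomputes them during backward, trading extra compute for memory. The toy layer stack below is purely illustrative; it only demonstrates that checkpointing yields the same gradients as a plain forward/backward pass:

```python
# Hedged sketch: gradient checkpointing on a toy layer stack.
# The model here is illustrative, not KeAP's actual architecture.
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

torch.manual_seed(0)

# A small stack of layers standing in for a transformer encoder.
layers = nn.ModuleList([nn.Linear(16, 16) for _ in range(4)])
x = torch.randn(2, 16, requires_grad=True)

# Checkpointed forward: activations inside each layer are not stored;
# they are recomputed during the backward pass, saving memory.
h = x
for layer in layers:
    h = checkpoint(layer, h, use_reentrant=False)
h.sum().backward()
grad_ckpt = x.grad.clone()

# Reference: a plain forward/backward gives identical gradients.
x.grad = None
h = x
for layer in layers:
    h = layer(h)
h.sum().backward()
assert torch.allclose(grad_ckpt, x.grad, atol=1e-6)
```

With a 24 GB card, checkpointing (if the training code supports enabling it) or freezing the encoder via the script's own `--frozen_bert True` flag may bring the contact task within budget, at the cost of slower steps or reduced fine-tuning capacity.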

Dreamer312 commented 1 year ago

Thanks!