GELIELEO (issue closed 2 years ago):
I used this command to train the baseline model:
CUDA_VISIBLE_DEVICES=0,1 python main.py /home/user/ /home/user/ /home/user/kinetics-pt/ --MODE=train --ARCH=resnet50 --MODEL_TYPE=I3D --DATASET=road --TRAIN_SUBSETS=train_3 --SEQ_LEN=8 --TEST_SEQ_LEN=8 --BATCH_SIZE=4 --LR=0.0041
I have two RTX 2080 Ti cards in my machine, but training fails with a CUDA out-of-memory error.
The README.md says "each with at least 10GB VRAM", so why is my 2080 Ti (11 GB) not enough?
PS: there are no other processes running on the GPUs.
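For what it's worth, the per-card VRAM and current usage can be confirmed from the shell with a standard nvidia-smi query (not specific to this repo); a 2080 Ti should report about 11 GB total, which does meet the README's 10 GB minimum:
nvidia-smi --query-gpu=name,memory.total,memory.used --format=csv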
With this sequence length you can only fit one example per GPU, so try a batch size of 2, and reduce the learning rate to 0.002, or to 0.0015 if the former results in NaN loss values.
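Concretely, applying that suggestion to your original invocation changes only --BATCH_SIZE and --LR:
CUDA_VISIBLE_DEVICES=0,1 python main.py /home/user/ /home/user/ /home/user/kinetics-pt/ --MODE=train --ARCH=resnet50 --MODEL_TYPE=I3D --DATASET=road --TRAIN_SUBSETS=train_3 --SEQ_LEN=8 --TEST_SEQ_LEN=8 --BATCH_SIZE=2 --LR=0.002
Drop to --LR=0.0015 if the loss still goes to NaN.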