Closed: williamking5 closed this issue 3 years ago
We only used 1 GPU for training. The env-info line GPU 0,1: GeForce RTX 2080 Ti only lists the GPUs available on the machine; for the GPU(s) actually used, please refer to gpu_ids. In this example, gpu_ids = [1], which means we used the GPU with index 1.
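In case it helps others reading this thread, here is a minimal sketch of that distinction in plain PyTorch (nothing MMOCR-specific is assumed; the gpu_ids value is the one quoted above):

```python
import torch

# What the env-info line "GPU 0,1: GeForce RTX 2080 Ti" reflects:
# every GPU physically visible on the machine.
for i in range(torch.cuda.device_count()):
    print(i, torch.cuda.get_device_name(i))

# What training actually uses is whatever ends up in gpu_ids.
# gpu_ids = [1] means two cards are visible, but only the card
# with index 1 participates in this run.
gpu_ids = [1]
print(f"training on {len(gpu_ids)} GPU(s): {gpu_ids}")
```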
@gaotongxiao Thank you for the explanation!
Hi, I'm checking the log file of the FCENet training process, but I'm confused about the config for FCENet on CTW1500.
According to the official log json in your documentation, https://download.openmmlab.com/mmocr/textdet/fcenet/20210511_181328.log.json, samples_per_gpu=6, the env info lists GPU 0, 1, and there are about 160 iterations per epoch. Hence the dataset size should be approximately 6 * 2 * 160 = 1920. But as far as I know, the training split of CTW1500 contains only 1000 images. Summarizing what the log shows (see the sketch after this list):
- The number of samples is 6 per GPU.
- There are 2 GPUs listed for training.
- There are around 160 iterations per epoch.
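To make the arithmetic explicit, this is the estimate I'm using (the helper name estimate_dataset_size is just mine for illustration):

```python
def estimate_dataset_size(samples_per_gpu, num_gpus, iters_per_epoch):
    # Each iteration consumes samples_per_gpu images on every GPU,
    # so one epoch covers roughly this many images.
    return samples_per_gpu * num_gpus * iters_per_epoch

# Numbers read from the official 20210511_181328.log.json:
print(estimate_dataset_size(6, 2, 160))  # 1920, far from the 1000 CTW1500 training images
```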
When I train with the same config file myself, the implied dataset size comes out correct (see the check after this list):
- There are 2 GPUs for training.
- There are around 80 iterations per epoch.
- The number of samples is 6 per GPU.
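Running the same estimate on my own run gives a value consistent with the 1000-image training split:

```python
print(estimate_dataset_size(6, 2, 80))  # 960, roughly the 1000 CTW1500 training images
```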
Do you have any idea why the official log implies a different dataset size?