If you only have one GPU, you should change the number of GPUs to 1 and set the learning rate to 0.005/4 = 0.00125 for training.
e.g.,
./tools/dist_train.sh configs/underwater/cas_r50/cascade_rcnn_r50_fpn_1x.py 1
./tools/dist_test.sh configs/underwater/cas_r50/cascade_rcnn_r50_fpn_1x.py workdirs/cascade_rcnn_r50_fpn_1x/latest.pth 1 --json_out results/cas_r50.json
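For reference, the learning-rate change itself goes in the config file. Below is a minimal sketch assuming the standard mmdetection-style optimizer block and a 4-GPU base learning rate of 0.005; the exact field names and values in cascade_rcnn_r50_fpn_1x.py may differ:

optimizer = dict(type='SGD', lr=0.00125, momentum=0.9, weight_decay=0.0001)  # 0.005 / 4 when training on 1 GPU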
Hello. When I run your code, training finishes in only a few seconds, but testing takes one or two hours. I only have one GPU, so I just changed the number of GPUs from 4 to 1; the other parameters were left unchanged. I took a closer look at the configuration file cascade_rcnn_r50_fpn_1x and at the training and test code. The GPU is in use, and distributed training is also in use. Could you give me some suggestions? Thank you!