Closed: opened by sophia-wright-blue, closed 6 years ago
I trained my model on a single Nvidia P100 GPU. As for the training details, please refer to our paper.
Your paper is extremely interesting! On page 7, in the intro paragraph of Section 4, "Experimental Results", you mention that "Additional results including detailed class-wise performance and error diagnosis can be found in the supplementary material."
Where is the supplementary material available? I don't see it at the end of the paper.
Thanks,
Closing, as the same question has been asked here: https://github.com/vt-vl-lab/iCAN/issues/12
I have the same problem. Would it be possible to train the model on multiple GPUs? What changes would I have to make?

I trained my model on a single TITAN X GPU for the HICO-DET dataset. It took about five days for 1,800,000 iterations.
Last question answered here: https://github.com/vt-vl-lab/iCAN/issues/25
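For anyone landing here: the usual change for multi-GPU training is data parallelism, i.e. split each batch across GPUs, compute gradients per replica, and average them before the weight update. Below is a minimal NumPy sketch of that averaging step only; the toy model, variable names, and shard count are illustrative assumptions, not code from the iCAN repository.

```python
import numpy as np

# Toy data-parallel step for a linear model y = w*x with squared-error loss.
# The batch is split across "replicas" (stand-ins for GPUs); each replica
# computes a gradient on its shard, and the gradients are averaged before
# the single shared update. Illustrative only, not iCAN code.

def grad(w, x, y):
    # d/dw of mean((w*x - y)^2) over the given shard
    return np.mean(2.0 * (w * x - y) * x)

rng = np.random.default_rng(0)
x = rng.normal(size=8)
y = 3.0 * x

w = 0.0
shards = np.array_split(np.arange(8), 2)          # two equal-size replicas
grads = [grad(w, x[s], y[s]) for s in shards]     # per-replica gradients
avg_grad = np.mean(grads)                         # all-reduce (average)

# With equal shard sizes, the averaged gradient equals the full-batch one.
full_grad = grad(w, x, y)
print(np.isclose(avg_grad, full_grad))            # True
w -= 0.1 * avg_grad                               # one shared SGD step
```

In the TF 1.x code this repo uses, the same pattern is typically written as one "tower" per GPU under `tf.device`, with the tower gradients averaged before `apply_gradients`; you would also usually scale the batch size or learning rate accordingly.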
You mention that you developed the model using CUDA 8.0.
I have a few questions about GPU training:
- How many GPUs did you use to train the model? (e.g., `python tools/Train_ResNet_VCOCO.py --model iCAN_ResNet50_VCOCO --num_iteration 300000`)
- What type of GPU?
- Approximately how long did the training take?
- Would it be possible to train the model on multiple GPUs? What changes would I have to make?

Thank you,