We use one Titan X with 12GB memory. That should be enough for one batch (one source and one target image) at the sizes defined in our code ((1280, 720) and (1024, 512)) during training.
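If it helps with debugging memory issues, here is a quick way to check the GPU's total and currently allocated memory from PyTorch. Note that these utilities only exist in newer PyTorch releases (0.4+), so treat this as a rough sketch rather than something the 0.2-based training code relies on:

```python
import torch

# Quick check of total and currently allocated GPU memory.
# Note: these utilities exist in newer PyTorch releases (>= 0.4),
# so this is only a debugging sketch, not part of the training code.
device = torch.device('cuda:0')
props = torch.cuda.get_device_properties(device)
print('GPU: %s, total memory: %.1f GB' % (props.name, props.total_memory / 1024.**3))
print('Currently allocated: %.1f MB' % (torch.cuda.memory_allocated(device) / 1024.**2))
```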
Hmm, that's weird... My environment is Ubuntu 16.04 with PyTorch 0.4.1 and CUDA 9.2. You mentioned that your code is not compatible with PyTorch 0.4, so maybe that is the reason? It seems that I need to change my PyTorch version. Anyway, I am currently running your code with the source and target inputs downsized by 0.875. I am not sure whether the downsized inputs can reproduce the result reported in your paper.
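For reference, this is roughly how I compute the downsized resolutions (just a small sketch applying the 0.875 factor to the default sizes mentioned above):

```python
# Downscale the default input sizes by a factor of 0.875.
scale = 0.875
source_size = (1280, 720)   # default source size in the repo
target_size = (1024, 512)   # default target size in the repo

def downsize(size, scale):
    # Round to the nearest integer; a real pipeline might also round
    # to a multiple of 8 or 32 depending on the network stride.
    return tuple(int(round(s * scale)) for s in size)

print(downsize(source_size, scale))  # (1120, 630)
print(downsize(target_size, scale))  # (896, 448)
```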
Thank you for replying. :)
We have not tried our code with PyTorch 0.4, so we are not sure how it will affect the result, but the training scheme and back-propagation should be the same. You might also want to check the up-sampling part, which we also modified in our evaluation code for PyTorch 0.4.
Downsizing the inputs may affect the performance a bit, but you should still get a reasonable number.
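Regarding the up-sampling part mentioned above: in PyTorch 0.4 the bilinear up-sampling layer gained an `align_corners` argument that defaults to `False`, while older versions behaved as if `align_corners=True`, so the outputs differ slightly. A rough sketch of the kind of adjustment needed (the output size here is just an example, not necessarily what the evaluation script uses):

```python
import torch.nn as nn

# PyTorch <= 0.3.x: bilinear upsampling implicitly behaves like align_corners=True.
# interp = nn.Upsample(size=(1024, 2048), mode='bilinear')

# PyTorch 0.4.x: align_corners defaults to False, so pass it explicitly
# to reproduce the old behavior.
interp = nn.Upsample(size=(1024, 2048), mode='bilinear', align_corners=True)
```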
I trained your multi-level model with the source and target inputs downsized by 0.875. The mAP was 33%, which is 10% lower than the reported score. It seems that the input size matters.
My environment is PyTorch 0.2.0, CUDA 9.0.176, a TITAN with 12GB, and Python 2.7, but I encounter a CUDA runtime error. @wgchang @wasidennis, can I ask how you resolved this runtime error?
It seems that PyTorch 0.2.0 is not compatible with CUDA 9.0.
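If it helps, here is a quick way to check which CUDA version your PyTorch build was compiled against (note: `torch.version.cuda` may not exist on very old releases such as 0.2.x, so this is just a sketch for newer builds):

```python
from __future__ import print_function  # keep this runnable on Python 2.7 as well
import torch

# Report the PyTorch version and the CUDA version it was built against.
# torch.version.cuda may be missing on very old builds (e.g., 0.2.x).
print('PyTorch version: ' + torch.__version__)
print('Built against CUDA: ' + str(getattr(torch.version, 'cuda', 'unknown')))
print('CUDA available: ' + str(torch.cuda.is_available()))
```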
Hello. I tried to run your code, but it gives an out-of-memory error. I am working with an NVIDIA Titan Xp, which has 12GB of memory. It seems that the input sizes of the source and target images are quite big ((1280, 720) and (1024, 512)).
Can I ask which GPU devices you use for training?