qiulinzhang opened this issue 4 years ago

Insightful work!!! While studying your paper, I ran into some questions (my English is not very good and I don't mean to be aggressive, just some confusion):

The first question is about Figure 2. After the sigmoid, every value is >= 0, yet the figure still uses a threshold of 0 to make the decision. From the paper, I think the threshold should be 0.5, or no sigmoid should be used.

The second question is about the code. However, the paper says, "Note that this formulation has no logarithms or exponentials in the forward pass, typically expensive computations on hardware platforms." So in the code, why not just use soft >= 0 and skip the sigmoid operation?

Thanks for your kind help!
Hi, thanks for your comments! 1) You're right, that's a mistake; it should indeed be 0.5 instead of 0. Thanks for pointing it out. 2) The snippet you posted is the code used during training (and the code part of Fig. 2). During inference, no sigmoid is used and thresholding at 0 is applied instead (hence the mistake in Fig. 2). That's the early return in the Gumbel forward: https://github.com/thomasverelst/dynconv/blob/be1024caacec19d6a36ba99295b8dd318d5298bb/classification/dynconv/maskunit.py#L69-L71
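For clarity, here is a minimal sketch of why the two decision rules agree (not the repo's actual code; `soft` is an illustrative name for the pre-activation mask logits, see maskunit.py for the real Gumbel forward). Since the sigmoid is monotonic and sigmoid(0) = 0.5, thresholding sigmoid(soft) at 0.5 produces exactly the same hard mask as thresholding soft at 0:

```python
import torch

soft = torch.linspace(-3.0, 3.0, 13)                # example mask logits (includes 0)
hard_train = (torch.sigmoid(soft) >= 0.5).float()   # training-style decision (Fig. 2)
hard_infer = (soft >= 0.0).float()                  # inference-time early-return decision
assert torch.equal(hard_train, hard_infer)          # identical hard masks either way
```

Skipping the sigmoid at inference therefore changes nothing about the produced mask while avoiding the exponential it contains.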
I got it. Thanks for your reply.
By the way, if training is done on a 1080 Ti, 2080 Ti, or V100, why run the inference comparison on a 1050 Ti? I am a little curious, because these days it is hard to find a 1050 Ti in the deep learning field. Or is this meant as an example for low-computation devices like mobile phones?
I only have a 1050 Ti in my work machine (the more powerful GPUs are in the servers). I also intended this method for low-computation devices (mobile or laptops). I don't think it makes much sense to use it on very powerful GPUs, since overhead becomes a much more important factor there in order to fully utilize the GPU. If I ever get my hands on an NVIDIA Jetson, I'd like to check the performance there.
Thanks for your patient reply. In your reply you say:

> "I don't think it makes much sense to use it on very powerful GPUs, since overhead becomes a much more important factor there in order to fully utilize the GPUs."

Does that mean that if we run inference on a 2080 Ti or V100, we will not get as high a speedup ratio as on the 1050 Ti (60% speedup)?
> Does that mean that if we run inference on a 2080 Ti or V100, we will not get as high a speedup ratio as on the 1050 Ti (60% speedup)?
I tried it now on a 1080 Ti, and with a larger batch size (128) it seems OK. But still, this work is experimental and limited to depthwise convolutions for now (e.g., as in MobileNetV2). In practice, the accuracy-speed trade-off of MobileNetV2 on powerful GPUs is barely better than that of a standard ResNet. Also, this work is not compatible with TensorRT, which would probably give better/more consistent speedups. So this is more a proof of concept than production-ready work; ideally it would need to be integrated into low-level CUDA libraries for better support.
(command used: `python -O tools/speedtest.py --cfg experiments/4stack/s025.yaml TEST.BATCH_SIZE_PER_GPU 128`)
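In case it helps with reproducing such numbers, below is a minimal, generic throughput-measurement sketch (not the repo's speedtest.py; the model and input shape are placeholders you'd swap for the dynconv network and its expected input):

```python
import time
import torch

def measure_throughput(model, batch_size=128, n_iters=50, warmup=10,
                       input_shape=(3, 224, 224), device="cuda"):
    """Return images/second for `model` at the given batch size."""
    model = model.to(device).eval()
    x = torch.randn(batch_size, *input_shape, device=device)
    with torch.no_grad():
        for _ in range(warmup):           # warm-up: cudnn autotuning, lazy init
            model(x)
        torch.cuda.synchronize()          # drain queued GPU work before timing
        start = time.perf_counter()
        for _ in range(n_iters):
            model(x)
        torch.cuda.synchronize()          # wait for async kernels to finish
        elapsed = time.perf_counter() - start
    return batch_size * n_iters / elapsed
```

Synchronizing before and after the timed loop matters because CUDA kernel launches are asynchronous; without it, the measured time would mostly reflect launch overhead rather than actual GPU work.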
Great work!!! Your patient replies have helped me a lot in understanding your paper and the novel idea!
With the more powerful 1080 Ti, the results show 60% speedup (batch 32) and 96% speedup (batch 128), respectively. So can we say it also makes sense on a powerful GPU? Another question is about the table of results: in the baseline row for the 1080 Ti, do both batch 32 and batch 128 reach 100 images/second?
One field where I think this novel idea can be applied is what you mention in the "Conclusion and future work" section: high-resolution images might be processed much faster.