thomasverelst / dynconv

Code for Dynamic Convolutions: Exploiting Spatial Sparsity for Faster Inference (CVPR 2020)
https://arxiv.org/abs/1912.03203

Mask calculation #3

Open qiulinzhang opened 4 years ago

qiulinzhang commented 4 years ago

Insightful work! While studying your paper, I ran into a few questions (my English is not very good, and I don't mean to be critical, just confused):

  1. My first question concerns Figure 2. After the sigmoid, every value is >= 0, yet the figure still uses a threshold of 0 to make the decision. Based on the paper, I think the threshold should be 0.5, or else no sigmoid should be used.

  2. My second question is about the code:

        if gumbel_noise:
            # Sample two Gumbel(0, 1) noise tensors via -log(-log(U));
            # eps avoids taking the log of zero
            eps = self.eps
            U1, U2 = torch.rand_like(x), torch.rand_like(x)
            g1 = -torch.log(-torch.log(U1 + eps) + eps)
            g2 = -torch.log(-torch.log(U2 + eps) + eps)
            x = x + g1 - g2

        # Soft (differentiable) mask; hard 0/1 mask with a straight-through gradient
        soft = torch.sigmoid(x / gumbel_temp)
        hard = ((soft >= 0.5).float() - soft).detach() + soft

    However, the paper says, "Note that this formulation has no logarithms or exponentials in the forward pass, typically expensive computations on hardware platforms." So, in the code, why not simply threshold at 0 and skip the sigmoid operation?
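(Editor's note: the last line of the snippet above is a straight-through estimator: the forward pass outputs the hard 0/1 mask, while gradients flow through the soft sigmoid path. A quick standalone check, as an illustrative sketch rather than code from the repo:)

    import torch

    x = torch.tensor([-1.0, 0.5, 2.0], requires_grad=True)
    soft = torch.sigmoid(x)
    hard = ((soft >= 0.5).float() - soft).detach() + soft
    print(hard)    # tensor([0., 1., 1.], ...): hard values in the forward pass
    hard.sum().backward()
    print(x.grad)  # equals sigmoid'(x): gradients take the soft path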


Thanks for your kind help!

thomasverelst commented 4 years ago

Hi, thanks for your comments!

1) You're right, that's a mistake; it should indeed be 0.5 instead of 0. Thanks for pointing it out.

2) The snippet you posted is the code used during training (and the code part of Fig. 2). During inference, no sigmoid is used and the threshold is 0 instead (hence the mistake in Fig. 2). That's the early return in the Gumbel forward: https://github.com/thomasverelst/dynconv/blob/be1024caacec19d6a36ba99295b8dd318d5298bb/classification/dynconv/maskunit.py#L69-L71
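(Editor's note: the equivalence between the two thresholds follows from the sigmoid being monotonic with sigmoid(0) = 0.5, so sigmoid(x) >= 0.5 holds exactly when x >= 0. A simplified sketch of both paths, written as a standalone function for illustration rather than the repo's exact maskunit.py code:)

    import torch

    def gumbel_mask(x, gumbel_temp=1.0, training=True, eps=1e-8):
        if not training:
            # Inference: threshold the logits directly at 0. No sigmoid and no
            # log/exp, since sigmoid(x) >= 0.5 is equivalent to x >= 0.
            return (x >= 0.0).float()
        # Training: perturb the logits with Gumbel noise, then apply the
        # straight-through sigmoid thresholded at 0.5
        U1, U2 = torch.rand_like(x), torch.rand_like(x)
        g1 = -torch.log(-torch.log(U1 + eps) + eps)
        g2 = -torch.log(-torch.log(U2 + eps) + eps)
        x = x + g1 - g2
        soft = torch.sigmoid(x / gumbel_temp)
        return ((soft >= 0.5).float() - soft).detach() + soft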

qiulinzhang commented 4 years ago

I got it. Thanks for your reply.


By the way, if training is done on a 1080 Ti, 2080 Ti, or V100, why run the inference comparison on a single 1050 Ti? I am a little curious, because these days it is hard to find a 1050 Ti in the deep learning field. Or is this meant as an example for low-computation devices like mobile phones?

thomasverelst commented 4 years ago

I only have a 1050 Ti in my work machine (the more powerful GPUs are in the servers). I indeed intended this method for low-computation devices (mobile or laptops). I don't think it makes much sense to use it on very powerful GPUs, since overhead becomes a much more important factor there in order to fully utilize the GPUs. If I ever get my hands on an NVIDIA Jetson, I'd like to check the performance there.

qiulinzhang commented 4 years ago

Thanks for your patient reply. In your reply, you said:

"I don't think it makes much sense to use it on very powerful GPUs, since overhead becomes a much more important factor there in order to fully utilize the GPUs."

Does this mean that if we run inference on a 2080 Ti or V100, we will not get as high a speedup ratio as on the 1050 Ti (60% speedup)?

thomasverelst commented 4 years ago

"Does this mean that if we run inference on a 2080 Ti or V100, we will not get as high a speedup ratio as on the 1050 Ti (60% speedup)?"

I tried it now on a 1080 Ti, and with a larger batch size (128) it seems OK. Still, this work is experimental and limited to depthwise convolutions for now (e.g., as in MobileNetV2). In practice, the accuracy-speed trade-off of MobileNetV2 on powerful GPUs is barely better than that of a standard ResNet. Also, this work is not compatible with TensorRT, which would probably give better and more consistent speedups. So this is more a proof of concept than production-ready work; ideally it would be integrated into low-level CUDA libraries for better support.

[Screenshot: speedtest throughput results on the 1080 Ti, comparing the baseline and the sparse model at batch sizes 32 and 128.]

(command used: python -O tools/speedtest.py --cfg experiments/4stack/s025.yaml TEST.BATCH_SIZE_PER_GPU 128)
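(Editor's note: throughput numbers like these depend on warm-up and proper CUDA synchronization. A generic measurement sketch illustrating the methodology, not the repo's actual speedtest.py; the function name and default sizes are assumptions:)

    import time
    import torch

    @torch.no_grad()
    def throughput(model, batch_size=128, res=224, iters=50, warmup=10):
        # Measure images/second on GPU, with warm-up and explicit synchronization
        model.eval().cuda()
        x = torch.randn(batch_size, 3, res, res, device='cuda')
        for _ in range(warmup):   # warm-up: cuDNN autotuning, lazy allocations
            model(x)
        torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(iters):
            model(x)
        torch.cuda.synchronize()  # wait for all queued kernels before stopping
        return batch_size * iters / (time.perf_counter() - start)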

qiulinzhang commented 4 years ago

Great work! Your patient replies have helped me a lot in understanding your paper and its novel idea!


With the more powerful 1080 Ti, the results show a 60% speedup (batch 32) and a 96% speedup (batch 128). So can we say it also makes sense on powerful GPUs? Another question is about the table: in the baseline row for the 1080 Ti, do both batch 32 and batch 128 reach 100 images/second?


One area where I think this novel idea could be applied is the one you mention in the "Conclusion and future work" section: processing high-resolution images might be much faster.