csrhddlam / axial-deeplab

This is a PyTorch re-implementation of Axial-DeepLab (ECCV 2020 Spotlight)
https://arxiv.org/abs/2003.07853
Apache License 2.0

Strange runtime results #2

Open cszer opened 4 years ago

cszer commented 4 years ago

Hello, I tested inference speed and compared it with the plain torchvision ResNet-50. I used a 2080 Ti and PyTorch 1.4. Results:

- torchvision resnet50: 13-15 ms
- axial-resnet-s: 79-81 ms

But in the paper the authors show that the inference speed of the L model is comparable to ResNet-101.

cszer commented 4 years ago

I tested the models on a 224x224 torch.rand tensor.
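For reference, a minimal timing sketch along these lines (assuming a CUDA device; the warm-up count and number of runs are arbitrary, and the axial model would be passed in the same way as the torchvision one):

```python
import time
import torch
import torchvision

def benchmark(model, runs=100):
    """Average forward-pass latency in ms on a 224x224 random input."""
    model = model.cuda().eval()
    x = torch.rand(1, 3, 224, 224, device='cuda')
    with torch.no_grad():
        for _ in range(10):           # warm-up iterations
            model(x)
        torch.cuda.synchronize()      # make sure warm-up kernels finished
        start = time.time()
        for _ in range(runs):
            model(x)
        torch.cuda.synchronize()      # wait for all kernels before stopping the clock
    return (time.time() - start) / runs * 1000

print(benchmark(torchvision.models.resnet50()))
```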

csrhddlam commented 4 years ago

Hello, thanks for testing it. Please note that this is a re-implementation, and we haven't tried to match the inference time of the original code.

I think the main mismatch comes from implementation differences, possibly in the positional encoding or batch normalization. In addition, I don't think the runtime will increase much if you test the L models -- we only change the width of the model and keep the depth the same, whereas ResNet doubles the depth.

netw0rkf10w commented 4 years ago

@csrhddlam Why did you share a PyTorch re-implementation rather than the original TensorFlow implementation? I'm a bit confused...

csrhddlam commented 4 years ago

It would be almost impossible to release the original TensorFlow code (which runs on TPUs), because it is Google property and it depends on other packages that are also Google property, e.g. stand-alone self-attention and panoptic-deeplab.

netw0rkf10w commented 4 years ago

@csrhddlam Hmm... Has Google recently changed their policy? They used to release TensorFlow code for their published papers...

csrhddlam commented 4 years ago

Not as far as I know. And sorry for the confusion. As I said, our original code depends heavily on stand-alone self-attention and panoptic-deeplab. They have not released their code, and we are not authorized to release it for them, so we cannot release our original code either. Instead of waiting for their releases, we re-implemented the work here in PyTorch to give the community access to most of the details of our work as soon as possible.

netw0rkf10w commented 4 years ago

@csrhddlam I see. Thanks for the reply! Good work by the way. Congratulations!

csrhddlam commented 4 years ago

Just investigated the inference time a bit. Here is my trace on a GPU. I tested with both PyTorch 1.1 and 1.6 and found similar results.

[screenshot: profiler trace]

You can see that the relative positional embedding takes far more time than a convolution.

[screenshot: profiler trace]

In addition, reshaping, squeezing, and permuting also take far more time than bmm, where the actual computation happens.

There is much room to optimize the code in this repo. Even the original TF code was optimized for TPU and then run directly on GPU without GPU-specific tuning. So we would expect the inference time to improve a lot once the code is well optimized.
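A sketch of how such a per-operator trace can be collected (torch.autograd.profiler was the profiling API around PyTorch 1.1-1.6; newer versions also offer torch.profiler; the model here is just a stand-in):

```python
import torch
import torchvision

model = torchvision.models.resnet50().cuda().eval()
x = torch.rand(1, 3, 224, 224, device='cuda')

with torch.no_grad():
    with torch.autograd.profiler.profile(use_cuda=True) as prof:
        model(x)

# Sort operators by GPU time to see whether bmm, permute/view/reshape,
# or the positional-embedding ops dominate the runtime.
print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=20))
```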

netw0rkf10w commented 4 years ago

einsum seems to have some performance issues, so maybe directly using bmm could be faster.

csrhddlam commented 4 years ago

Thanks for the pointer. I wasn't aware of the issue.

einsum in PyTorch looks less optimized than einsum_v2 in TensorFlow. And I agree that directly using bmm, together with some smart permute and view operations, could be faster in PyTorch.
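To illustrate the kind of rewrite being discussed, here is a small sketch on an attention-style contraction (batch b, heads h, sequence length n, head dim d; the shapes are illustrative, not taken from this repo): the einsum is replaced by folding the batch and head dimensions together and calling bmm.

```python
import torch

b, h, n, d = 2, 8, 56, 16
q = torch.randn(b, h, n, d)
k = torch.randn(b, h, n, d)

# einsum version: contract over the head dimension d
scores_einsum = torch.einsum('bhqd,bhkd->bhqk', q, k)

# bmm version: merge (b, h) into one batch dimension, then use bmm
q2 = q.reshape(b * h, n, d)
k2 = k.reshape(b * h, n, d)
scores_bmm = torch.bmm(q2, k2.transpose(1, 2)).view(b, h, n, n)

assert torch.allclose(scores_einsum, scores_bmm, atol=1e-6)
```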