cszer opened this issue 4 years ago
I tested the models on a 224×224 `torch.rand` tensor.
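For reference, a timing loop along these lines could be used (a minimal sketch; torchvision's resnet50 stands in here for whichever model is being measured, and the repetition counts are arbitrary):

```python
import torch
import torchvision

# Placeholder model: swap in an axial-attention model from this repo to compare.
model = torchvision.models.resnet50().cuda().eval()
x = torch.rand(1, 3, 224, 224).cuda()

with torch.no_grad():
    # Warm up so cuDNN autotuning and lazy CUDA init don't skew the measurement.
    for _ in range(10):
        model(x)
    torch.cuda.synchronize()

    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    for _ in range(100):
        model(x)
    end.record()
    torch.cuda.synchronize()

print(f"avg forward time: {start.elapsed_time(end) / 100:.2f} ms")
```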
Hello, thanks for testing it. Please note that this is a re-implementation, and we haven't tried to match the inference time of the original code.
I think the main mismatch comes from implementation differences, possibly the positional encoding or batch normalization. In addition, I think the runtime won't increase much if you test it on L models -- we only change the width of the model and keep the depth the same, while ResNet doubles the depth.
@csrhddlam Why did you release a PyTorch re-implementation instead of the original TensorFlow implementation? I'm a bit confused...
It would be almost impossible to release the original TensorFlow code (which runs on TPU), because it is Google property and it depends on some other packages which are also Google property, e.g. stand-alone self-attention and panoptic-deeplab.
@csrhddlam Hmm... Has Google recently changed their policy? They used to release TensorFlow code for their published papers...
No, as far as I know. And sorry for the confusion. As I said, our original code depends heavily on stand-alone self-attention and panoptic-deeplab. However, those projects have not released their code and we are not authorized to release it, so we cannot release our original code either. Instead of waiting for their releases, we re-implemented the work here in PyTorch so that the community can access most of the details of our work as soon as possible.
@csrhddlam I see. Thanks for the reply! Good work by the way. Congratulations!
Just investigated the inference time a bit. Here is my trace on a GPU. I tested it with both PyTorch 1.1 and 1.6 and found similar results. You can see that the relative positional embedding is taking way more time than a convolution. In addition, reshaping, squeezing, and permuting are also taking way more time than bmm, where the actual computation happens.
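For anyone who wants to reproduce a similar trace, a rough sketch with the autograd profiler (the resnet50 here is only a stand-in so the snippet runs on its own; swap in a model from this repo):

```python
import torch
import torchvision

# Placeholder model: replace with an axial-attention model to profile it.
model = torchvision.models.resnet50().cuda().eval()
x = torch.rand(1, 3, 224, 224).cuda()

with torch.no_grad():
    # Warm-up pass so one-time CUDA costs do not appear in the trace.
    model(x)
    torch.cuda.synchronize()
    with torch.autograd.profiler.profile(use_cuda=True) as prof:
        model(x)
    torch.cuda.synchronize()

# Sort by GPU time to see which ops (e.g. einsum, permute, view) dominate.
print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=20))
```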
There is much room to optimize the code in this repo. Note that even the original TF code was optimized for TPU and then run directly on GPU for testing, so we would expect the inference time to improve a lot once the code is well optimized.
`einsum` seems to have some performance issues, so maybe directly using `bmm` could be faster.
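For example, an attention-style contraction can be rewritten from `einsum` to `bmm` with a reshape and transpose; a small illustrative sketch (the einsum string is generic, not the exact one used in this repo):

```python
import torch

# Illustrative only: a generic attention-style contraction.
B, H, L, D = 2, 8, 56, 16          # batch, heads, sequence length, head dim
q = torch.randn(B, H, L, D)
k = torch.randn(B, H, L, D)

# einsum version: similarity between all pairs of positions.
logits_einsum = torch.einsum('bhld,bhmd->bhlm', q, k)

# Equivalent bmm version: fold batch and head dims, then batch matmul.
q2 = q.reshape(B * H, L, D)
k2 = k.reshape(B * H, L, D)
logits_bmm = torch.bmm(q2, k2.transpose(1, 2)).reshape(B, H, L, L)

assert torch.allclose(logits_einsum, logits_bmm, atol=1e-5)
```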
Thanks for the pointer. I wasn't aware of the issue.
`einsum` in PyTorch looks less optimized than `einsum_v2` in TensorFlow. And I agree that directly using `bmm`, together with some smart `permute` and `view`, could be faster in PyTorch.
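A quick way to check whether the `bmm` route actually helps on a given GPU (illustrative shapes only; requires a PyTorch version that ships `torch.utils.benchmark`):

```python
import torch
import torch.utils.benchmark as benchmark

# Illustrative shapes; compare einsum against the equivalent bmm path.
B, H, L, D = 8, 8, 56, 16
q = torch.randn(B, H, L, D, device='cuda')
k = torch.randn(B, H, L, D, device='cuda')

t_einsum = benchmark.Timer(
    stmt="torch.einsum('bhld,bhmd->bhlm', q, k)",
    globals={'torch': torch, 'q': q, 'k': k})
t_bmm = benchmark.Timer(
    stmt="torch.bmm(q.reshape(B*H, L, D), k.reshape(B*H, L, D).transpose(1, 2))",
    globals={'torch': torch, 'q': q, 'k': k, 'B': B, 'H': H, 'L': L, 'D': D})

print(t_einsum.timeit(100))
print(t_bmm.timeit(100))
```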
Hello, I tested inference speed and compared it with the plain torchvision resnet50. I used a 2080 Ti and PyTorch 1.4. Results:

- torchvision resnet50: 13-15 ms
- axial-resnet-s: 79-81 ms

But in the paper the authors show that the inference speed of the L model is comparable with ResNet-101.