NagatoYuki0943 / anomalib-tensorrt-cpp

24 stars 6 forks

Why is inference with the efficient_ad model so slow? #6

Closed hengyanchen closed 11 months ago

hengyanchen commented 11 months ago

On a 3070 card, at size 1280*1280, it averages about 200 ms per image. Why is it so slow?

NagatoYuki0943 commented 11 months ago

I haven't tested images at such a high resolution, so I'm not sure whether resolution that high cuts the speed drastically. Debug vs. release mode also makes a big difference.

hengyanchen commented 11 months ago

> I haven't tested images at such a high resolution, so I'm not sure whether resolution that high cuts the speed drastically. Debug vs. release mode also makes a big difference.

It is release --! At size 256 the average is about 20 ms, but at 1280 it jumps straight to 200 ms. Does that seem normal to you?

NagatoYuki0943 commented 11 months ago

I computed the parameter counts at different resolutions in efficient_ad's pytorch_model.py:

if __name__ == "__main__":
    # run at the bottom of pytorch_model.py, where PDN_S is defined
    from torchsummary import summary

    model = PDN_S(out_channels=384)
    input_shape = (3, 256, 256)
    summary(model, input_shape, device="cpu")

    ## (3, 256, 256)
    # ================================================================
    # Total params: 2,694,144
    # Trainable params: 2,694,144
    # Non-trainable params: 0
    # ----------------------------------------------------------------
    # Input size (MB): 0.75
    # Forward/backward pass size (MB): 130.82
    # Params size (MB): 10.28
    # Estimated Total Size (MB): 141.84
    # ----------------------------------------------------------------

    ## (3, 1280, 1280)
    # ================================================================
    # Total params: 2,694,144
    # Trainable params: 2,694,144
    # Non-trainable params: 0
    # ----------------------------------------------------------------
    # Input size (MB): 18.75
    # Forward/backward pass size (MB): 3452.82
    # Params size (MB): 10.28
    # Estimated Total Size (MB): 3481.84
    # ----------------------------------------------------------------

At 1280 the input size is 25x that of 256, and the forward/backward pass size is about 26x, so the compute should grow by roughly the same factor. The slowdown looks normal.
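Those ratios can be sanity-checked with a couple of lines of plain Python (standalone arithmetic, not part of the repo):

```python
# Spatial area grows with the square of the side length:
# (1280 / 256)^2 = 25, which matches the 25x input-size ratio above.
scale = (1280 / 256) ** 2
print(scale)  # 25.0

# Input tensor size in MB for a float32 image of shape (3, side, side),
# matching the "Input size (MB)" lines printed by torchsummary.
def input_mb(side: int, channels: int = 3) -> float:
    return side * side * channels * 4 / 2**20  # 4 bytes per float32 element

print(input_mb(256))   # 0.75
print(input_mb(1280))  # 18.75
```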

hengyanchen commented 11 months ago

> I computed the parameter counts at different resolutions in efficient_ad's pytorch_model.py … the slowdown looks normal.

Right, the speeds reported in the official paper are very fast.

NagatoYuki0943 commented 11 months ago

> Right, the speeds reported in the official paper are very fast.

I also benchmarked the speed of the PyTorch model and found that it slows down sharply at higher resolutions: at 1280 the time is about 22x that at 256. My understanding is that this model only downsamples twice, so the amount of computation grows enormously at higher resolutions.

def test_model_speed(model: nn.Module, input: Tensor, device: str = "cuda:0", eval: bool = False, repeats: int = 1000) -> float:
    """Return the average forward-pass time in seconds over `repeats` runs."""
    import time

    from tqdm import tqdm

    model.to(device)
    input = input.to(device)
    if eval:
        model.eval()
    times = []
    for _ in tqdm(range(repeats)):
        if "cuda" in device:
            torch.cuda.synchronize()  # make sure prior kernels are done before timing
        start = time.time()
        model(input)
        if "cuda" in device:
            torch.cuda.synchronize()  # wait for the forward pass to actually finish
        times.append(time.time() - start)
    return sum(times) / repeats

if __name__ == "__main__":
    model = PDN_S(out_channels=384)
    # summary(model, (3, 256, 256))

    input = torch.ones(1, 3, 256, 256)
    print(model(input).shape)   # [1, 384, 56, 56]
    avg_time = test_model_speed(model, input, eval=True)
    print(avg_time, "second")
    # 256:  0.005538764715194702 second
    # 512:  0.01852963399887085 second
    # 1024: 0.08025343465805054 second
    # 1280: 0.12569849371910094 second

By comparison, resnet18 only takes about 3x longer going from 224 to 1024; resnet downsamples 5 times.

    from torchvision import models

    model = models.resnet18()
    x = torch.ones((1, 3, 224, 224))
    avg_time = test_model_speed(model, x, eval=True)
    print(avg_time, "second")
    # 224:  0.003574418306350708 second
    # 512:  0.004980533123016357 second
    # 1024: 0.012861243486404418 second

One idea I can offer is to increase the number of downsampling steps: apply pool a few more times in forward, or try changing the stride of some of the convs to 2. Another is to reduce the channel counts or use grouped convolutions to cut redundancy. I can't guarantee it won't error out, so try it yourself. This model was designed for low resolutions, and it simply downsamples too few times.

If you increase the downsampling, I think you need to watch out for this code. The padding here looks deliberately tuned, so changing the downsampling factor probably means re-tuning it:

class EfficientAdModel(nn.Module):
    ...
    def forward(self, batch: Tensor, batch_imagenet: Tensor = None) -> Tensor | dict:
        ...
        if self.pad_maps:
            map_st = F.pad(map_st, (4, 4, 4, 4))
            map_stae = F.pad(map_stae, (4, 4, 4, 4))
        map_st = F.interpolate(map_st, size=(self.input_size[0], self.input_size[1]), mode="bilinear")
        map_stae = F.interpolate(map_stae, size=(self.input_size[0], self.input_size[1]), mode="bilinear")
        ...

hengyanchen commented 11 months ago

Got it, thanks a lot!!