tinyvision / SOLIDER-REID


Is it possible to speed up inference and compare features? #13

Open MyraBaba opened 1 year ago

MyraBaba commented 1 year ago

Hi

I have an RTX 2080 Ti, and the inference below takes ~0.019 seconds after warmup, i.e. roughly 40-50 inferences per second. That looks slow. How can I make it faster? Is it due to the model size?
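
One thing worth checking first: CUDA kernel launches are asynchronous, so naive wall-clock timing can measure launch overhead rather than the actual forward pass. A minimal sketch of synchronized timing (timed_forward is just an illustrative name, not from the repo):

import time
import torch

@torch.no_grad()
def timed_forward(model, input):
    torch.cuda.synchronize()   # drain queued kernels before starting the clock
    start = time.perf_counter()
    output, _ = model(input)   # model returns (feature, aux) as in the code below
    torch.cuda.synchronize()   # wait until the forward pass has actually finished
    return output, time.perf_counter() - start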

Also, here is the code I am using:

import torch
import torch.nn as nn
import torch.nn.functional as F

@torch.no_grad()
def get_feature(img, model, device, normalize=False):
    # val_transforms is the evaluation transform from the TransReID pipeline
    input = val_transforms(img).unsqueeze(0)
    input = input.to(device)
    output, _ = model(input)
    if normalize:
        output = F.normalize(output)  # L2-normalize the feature vector
    return output

if device:
    if torch.cuda.device_count() > 1:
        print('Using {} GPUs for inference'.format(torch.cuda.device_count()))
        model = nn.DataParallel(model)  # wrap only when multiple GPUs are present
    model.to(device)

model.eval()

elapsed_time = next(timer_gen)  # timer_gen: a timing generator defined elsewhere
feature1 = get_feature(img1, model, device, normalize=True)
elapsed_time = next(timer_gen) - elapsed_time
print(elapsed_time)
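
On the second half of the question (comparing features): since normalize=True L2-normalizes the outputs, a plain matrix product gives cosine similarity. Batching several crops per forward pass also amortizes the per-call overhead that dominates at batch size 1. A rough sketch under those assumptions (get_batch_features and gallery_imgs are illustrative, not from the repo):

@torch.no_grad()
def get_batch_features(imgs, model, device):
    # Stack several crops into one batch so the per-call overhead is amortized.
    batch = torch.stack([val_transforms(img) for img in imgs]).to(device)
    feats, _ = model(batch)
    return F.normalize(feats)  # L2-normalized, so dot product == cosine similarity

query = get_batch_features([img1], model, device)          # shape (1, D)
gallery = get_batch_features(gallery_imgs, model, device)  # shape (N, D)
sims = query @ gallery.t()    # cosine similarities, shape (1, N)
best = sims.argmax(dim=1)     # index of the most similar gallery image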
cwhgn commented 1 year ago

We tested SOLIDER on the person re-identification task using the TransReID codebase. So far we have not paid much attention to inference speed. You could check whether TransReID offers any techniques to speed up your code. FYI, a smaller model or a smaller input image size would be a possible solution.
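
For what it's worth, beyond a smaller backbone or input resolution, two generic PyTorch speedups are often worth trying on an RTX-class GPU: half-precision inference via autocast (Tensor Cores), and the cuDNN autotuner for fixed input sizes. Whether they help here depends on the model's ops; a hedged sketch:

import torch

torch.backends.cudnn.benchmark = True   # autotune kernels for a fixed input size

with torch.no_grad(), torch.cuda.amp.autocast():
    # Mixed-precision forward pass; matmuls/convs run in fp16 where safe.
    output, _ = model(input)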