PeterL1n / BackgroundMattingV2

Real-Time High-Resolution Background Matting
MIT License

High CPU usage in Webcam inference demo #53

Open leavittx opened 3 years ago

leavittx commented 3 years ago

I've tried the webcam inference demo, and it runs at ~30 fps at 640x360 resolution on my laptop's Nvidia GTX 1050, which is really neat! However, the CPU usage is 60-80%, while GPU utilization according to Task Manager is only 6-10%. Is that something specific to how the Python demo works — i.e., would it not be CPU-intensive at all if properly used from C++ (TorchScript)? I really wonder why the GPU usage is that low. Thank you.

taaha827 commented 3 years ago

Can I know the steps through which you ran the code?

subDesTagesMitExtraKaese commented 3 years ago

You can replace return ToTensor()(Image.fromarray(frame)).unsqueeze_(0).cuda() with

    datatype = torch.float32
    pic = Image.fromarray(frame)
    # Copy the raw bytes straight into a byte tensor (much cheaper than ToTensor)
    img = torch.ByteTensor(torch.ByteStorage.from_buffer(pic.tobytes()))
    # Upload while still uint8, so the float conversion happens on the GPU
    img = img.cuda()
    img = img.view(pic.size[1], pic.size[0], len(pic.getbands()))
    img = img.permute((2, 0, 1)).contiguous()  # HWC -> CHW
    tmp = img.to(dtype=datatype).div(255)      # normalize to [0, 1]
    tmp.unsqueeze_(0)                          # add batch dimension
    return tmp

to reduce CPU usage during image preprocessing. ToTensor seems to eat CPU regardless of framerate or image size.
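If the frames already arrive as numpy arrays (e.g. from OpenCV's VideoCapture, which the demo uses), the PIL round-trip can be skipped entirely with torch.from_numpy. A minimal sketch of the same idea — the function name and the device parameter are my own, not from the demo code, and in the demo device would be "cuda":

```python
import numpy as np
import torch

def frame_to_tensor(frame: np.ndarray, device: str = "cpu") -> torch.Tensor:
    """Convert an HxWxC uint8 frame into a 1xCxHxW float tensor in [0, 1]."""
    t = torch.from_numpy(frame)          # zero-copy view of the numpy buffer
    t = t.to(device)                     # upload while still uint8 (4x less data than float32)
    t = t.permute(2, 0, 1).contiguous()  # HWC -> CHW
    return t.unsqueeze(0).float().div_(255)
```

As in the snippet above, the key point is to divide by 255 only after the upload, so the byte-to-float conversion and normalization run on the GPU instead of the CPU.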