ZHKKKe / MODNet

A Trimap-Free Portrait Matting Solution in Real Time [AAAI 2022]
Apache License 2.0

Real-time inference in the browser without GPU? #71

Closed dpalbrecht closed 3 years ago

dpalbrecht commented 3 years ago

Hi, thanks for sharing your work! This is really great. I want to be able to use this for real-time inference in a browser without a GPU. Is this at all feasible for this model?

ZHKKKe commented 3 years ago

Hi, thanks for your attention. I think it depends on your CPU. In my case, I ran the model with input frames of size 640*480 on an i7-4970MQ CPU and got more than 20 FPS. However, I have not tried integrating the model into a browser, so I have no idea about that. BTW, I have also received a few reports ( #47 ) of very low FPS on some CPUs, but I cannot reproduce the problem.

dpalbrecht commented 3 years ago

I saw that comment and it made me wonder whether I should even try it on my device, but I'm glad to hear you can get it to work yourself. 20 FPS might be enough. I tried to run the sample in Colab without a GPU but I get an error that there's no CUDA support. Can you explain what I need to change so that it runs?

ZHKKKe commented 3 years ago

If you want to run our matting Colab demo on Colab's CPU runtime, please delete all `.cuda()` calls in the Colab demo. BTW, the FPS of the online video matting Colab demo is very low because every frame has to be sent to Google's server and processed remotely.
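As a hedged illustration of the edit described above (not code from the repo), the `.cuda()` calls can be stripped mechanically; `strip_cuda_calls` below is a hypothetical helper:

```python
import re

def strip_cuda_calls(source: str) -> str:
    """Remove every `.cuda()` call so tensors and modules stay on the CPU.

    Hypothetical helper, not part of the MODNet repo; shown only to make
    the "delete all .cuda()" edit concrete.
    """
    return re.sub(r"\.cuda\(\)", "", source)

print(strip_cuda_calls("modnet = modnet.cuda()"))  # -> modnet = modnet
```

In practice you can also edit the notebook cells by hand; the point is simply that no other change is needed to run the demo on CPU.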

Please try the offline demo to get a higher FPS. I look forward to your feedback on the FPS you get on CPU.
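For reporting CPU FPS, a quick timing harness like the sketch below (assumed names, not from the repo) works for any `infer` callable that wraps the model's forward pass:

```python
import time

def measure_fps(infer, frames: int = 100) -> float:
    """Average frames per second of `infer` over `frames` calls.

    Hypothetical benchmarking helper; include preprocessing inside
    `infer` if you want an end-to-end number.
    """
    start = time.perf_counter()
    for _ in range(frames):
        infer()
    elapsed = time.perf_counter() - start
    return frames / elapsed
```

Averaging over many frames smooths out scheduler jitter and gives a more stable number than timing a single inference.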

LebronJames0423 commented 3 years ago

> I saw that comment and it made me wonder whether I should even try it on my device, but I'm glad to hear you can get it to work yourself. 20 FPS might be enough. I tried to run the sample in Colab without a GPU but I get an error that there's no CUDA support. Can you explain what I need to change so that it runs?

Hello, would you tell me the FPS of this model on the CPU? Thank you.

ZHKKKe commented 3 years ago

@LebronJames0423 We export the model to the ONNX format and call it from C++ with low-resolution inputs. In this case, we got 15-20 FPS. However, if you call the model through PyTorch, the FPS will be lower.
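For reference, 15-20 FPS corresponds to roughly 50-67 ms per frame; the conversion is just the reciprocal of the frame rate:

```python
def latency_ms(fps: float) -> float:
    # Per-frame latency implied by a frame rate: 1000 ms divided by FPS.
    return 1000.0 / fps

# 20 FPS -> 50.0 ms/frame; 15 FPS -> ~66.7 ms/frame
```

This is the per-frame budget any browser or C++ pipeline would need to stay within, including preprocessing and postprocessing.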

LebronJames0423 commented 3 years ago

OK, thanks a lot.


StarkerRegen commented 3 years ago

> @LebronJames0423 We export the model to the ONNX format and call it from C++ with low-resolution inputs. In this case, we got 15-20 FPS. However, if you call the model through PyTorch, the FPS will be lower.

Hello, can you provide the C++ code for inference? Thank you. @ZHKKKe

ZHKKKe commented 3 years ago

@StarkerRegen You may refer to this issue: #101

StarkerRegen commented 3 years ago

> @StarkerRegen You may refer to this issue: #101

OK, thank you very much.