Closed dpalbrecht closed 3 years ago
Hi, thanks for your attention. I think it depends on your CPU. In my case, I ran the model with input frames of size 640*480 on an i7-4970MQ CPU and got more than 20 FPS. However, I have not tried integrating the model with a browser, so I cannot comment on that. BTW, I have also received a few reports (#47) of very low FPS on some CPUs, but I cannot reproduce the problem.
I saw that comment and it made me wonder whether I should even try it on my device, but I'm glad to hear you can get it to work yourself. 20 FPS might be enough. I tried to run the sample in Colab without a GPU but I get an error that there's no CUDA support. Can you explain what I need to change so that it runs?
If you want to run our matting Colab demo on Google's CPU, please try deleting all `.cuda()` calls in the Colab demo. BTW, the FPS of the online video matting Colab demo is very low because every frame has to be sent to Google's server and processed remotely. Please try the offline demo to get higher FPS. We are also waiting for your feedback on the FPS you get on CPU.
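The usual way to make such a demo CPU-safe is to pick the device at runtime instead of hard-coding `.cuda()`. A minimal sketch (the variable names below are illustrative, not the demo's actual ones):

```python
import torch

# Select the device at runtime instead of hard-coding .cuda():
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# In the demo, calls such as `modnet.cuda()` or `frame_tensor.cuda()`
# can be replaced with device-agnostic `.to(device)` calls, e.g.:
#
#   modnet = modnet.to(device)              # instead of modnet.cuda()
#   frame_tensor = frame_tensor.to(device)  # instead of frame_tensor.cuda()
#
# On a machine without CUDA, `device` resolves to "cpu" and no .cuda()
# call is ever made, so the demo runs without GPU support.
print(device)
```

With this pattern the same notebook runs unchanged on Colab's CPU runtime and on a GPU runtime.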
Hello, would you tell me the FPS of this model on the CPU? Thank you.
@LebronJames0423 We export the model to the ONNX format and call it from C++ with low-resolution inputs. In this case, we got 15-20 FPS. However, if you call the model from PyTorch, the FPS will be lower.
OK, thanks a lot.
Hello, can you provide the C++ code for inference? Thank you. @ZHKKKe
@StarkerRegen You can refer to this issue: #101
OK, thank you very much.
Hi, thanks for sharing your work! This is really great. I want to be able to use this for real-time inference in a browser without a GPU. Is this at all feasible for this model?