Open divyanshi00 opened 1 year ago
Try converting the model to ONNX or TensorFlow Lite for CPU inference.
— Xuebin Qin, PhD, Department of Computing Science, University of Alberta, Edmonton, AB, Canada. Homepage: https://xuebinqin.github.io/
We are testing this model on CPU, but the latency is longer than expected. We are testing images in batches of 10, which is where the latency issue arises. If we increase the number of images per batch, the process gets killed.
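A process getting killed at larger batch sizes usually means the OOM killer stepped in; one workaround is to keep the batch size bounded and iterate over the inputs in fixed-size chunks. A sketch, where `infer` is a hypothetical stand-in for the model's forward pass:

```python
def chunked(items, size):
    """Yield successive fixed-size chunks so peak memory stays bounded."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

# Hypothetical usage: infer(batch) stands in for one forward pass
# results = []
# for batch in chunked(image_paths, 4):
#     results.extend(infer(batch))
```

Throughput on CPU rarely improves much beyond a small batch size, so a modest chunk size tends to give the same total latency without the memory spike.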