I was trying to find a suitable batch size for CPU inference on my PC. I could run batch sizes 1-10, but I hope a GPU could handle much larger batches and run faster... What do you think?
I used the default model, which should be a fairly heavy one: wd-eval02-large-tagger-v3
Here are some benchmark results for CPU utilization at batch sizes 1-4 and 10, plus idle. I couldn't run 100.
CPU utilization at runtime was roughly: batch 1: 100%, batch 2: 80%, batch 3: 60%, batch 4: 60%, batch 10: 180%
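For anyone who wants to reproduce this, here's a minimal timing sketch using onnxruntime. The model path is a placeholder, and the 448x448 NHWC float32 input shape is an assumption based on typical v3 taggers; check session.get_inputs() for your actual export.

```python
import time

import numpy as np
import onnxruntime as ort

# Placeholder path; point this at your local copy of the tagger's ONNX file.
MODEL_PATH = "model.onnx"

session = ort.InferenceSession(MODEL_PATH, providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

for batch_size in (1, 2, 3, 4, 10):
    # Assumption: NHWC float32 input at 448x448, as the WD v3 taggers
    # typically expect; adjust to match session.get_inputs()[0].shape.
    batch = np.random.rand(batch_size, 448, 448, 3).astype(np.float32)

    # Warm-up run, excluded from timing.
    session.run(None, {input_name: batch})

    runs = 3
    start = time.perf_counter()
    for _ in range(runs):
        session.run(None, {input_name: batch})
    elapsed = (time.perf_counter() - start) / runs

    print(f"batch {batch_size}: {elapsed:.2f}s per run, "
          f"{elapsed / batch_size * 1000:.0f} ms/image")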
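The ms/image figure is the one to compare across batch sizes: if it keeps dropping as the batch grows, a larger batch is worth it; if it climbs (like your 180% at batch 10 suggests), the CPU is already saturated and a smaller batch is the better fit.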