-
GroundingDINO's inference results are very good. However, the inference speed is only 5 FPS. Is it possible to improve the inference speed by pre-encoding the text?
Looking forward to your reply!
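Since the text prompt is usually fixed across frames, one common speed-up is to encode it once and reuse the features. Below is a minimal sketch of that caching idea; `encode_text` is a hypothetical stand-in for the model's actual text encoder, not GroundingDINO's real API.

```python
from functools import lru_cache

# Hypothetical stand-in for the expensive per-frame text encoder
# (in GroundingDINO this would be the BERT text branch).
def encode_text(prompt: str) -> list[float]:
    return [float(ord(c)) for c in prompt]

@lru_cache(maxsize=128)
def cached_text_features(prompt: str) -> tuple[float, ...]:
    # Each unique prompt is encoded exactly once; later calls are cache hits.
    return tuple(encode_text(prompt))

def detect(image, prompt: str):
    text_feats = cached_text_features(prompt)  # cheap after the first frame
    # ... run the image branch + cross-attention against text_feats here ...
    return text_feats

feats1 = detect("frame0", "a dog. a cat.")
feats2 = detect("frame1", "a dog. a cat.")  # text encoder is not re-run
```

Whether this helps in practice depends on how much of the 5 FPS budget the text branch actually consumes; profiling the text and image branches separately would confirm it.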
-
## ❓ Questions and Help
We trained a detection model and saved it as a TorchScript object, but when we run online inference with the libtorch library, we hit a strange CUDA error. The error is: "co…
-
Hi,
I am trying to run `object_detection.py` from `tftrt/examples/object_detection`, but I run out of memory even on a powerful NVIDIA RTX 2080 Ti (with 11 GB of memory). I tried 3 different models…
-
Is it possible to do this? That is, run a few instances on one GPU and a few others on a second? I need to speed up detection because there are a lot of images...
Thank you
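One way to do this is to run several worker processes, pin each to a GPU via `CUDA_VISIBLE_DEVICES` before the framework initializes, and distribute images round-robin. The sketch below only shows the job-assignment logic; the GPU counts and the worker body are assumptions, not taken from the question.

```python
import itertools

GPUS = [0, 1]            # two physical GPUs (assumption)
WORKERS_PER_GPU = 2      # a few detector instances per GPU (assumption)

def assign_jobs(images, gpus=GPUS, per_gpu=WORKERS_PER_GPU):
    """Round-robin images across (gpu, worker-slot) pairs."""
    slots = itertools.cycle([(g, w) for g in gpus for w in range(per_gpu)])
    return [(next(slots), img) for img in images]

jobs = assign_jobs([f"img_{i}.jpg" for i in range(8)])
# Each worker process would then pin itself to its GPU *before*
# loading the detector, e.g.:
#   os.environ["CUDA_VISIBLE_DEVICES"] = str(gpu_id)
```

With this scheme each worker sees exactly one device, so multiple instances can share a GPU's memory while the two GPUs are loaded evenly.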
ghost updated 3 years ago
-
I run darknet on Ubuntu and Windows with the same hardware (the GPU is an RTX 2080 Ti). On Ubuntu, video detection reaches over 35 FPS, but on Windows it's only about 25 FPS. Why?
-
Hi, I am testing inference with:
RTX 3090
CUDA 11.2
paddlepaddle-gpu 2.1
cuDNN 8
The latency for BlazeFace detection is 20 ms. Is this normal?
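Before judging a latency number, it is worth measuring it with warm-up iterations excluded, since the first GPU calls include kernel compilation and memory allocation. A generic timing sketch (the `fake_infer` stand-in is an assumption, not the real BlazeFace predictor):

```python
import time

def benchmark(fn, *args, warmup=10, iters=100):
    """Average latency in milliseconds, excluding warm-up runs.
    With a real GPU model, also synchronize the device before
    reading the timer, or the measurement is meaningless."""
    for _ in range(warmup):
        fn(*args)
    start = time.perf_counter()
    for _ in range(iters):
        fn(*args)
    return (time.perf_counter() - start) / iters * 1000.0

# Stand-in for the real predictor (assumption):
def fake_infer(x):
    return sum(x)

latency_ms = benchmark(fake_infer, list(range(1000)))
```

Comparing the warmed-up average against a single cold run usually explains surprisingly high first-measurement numbers.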
-
Currently, Blueoil cannot use GPUs effectively during the training phase. In particular, training an object detection network is inefficient; GPU usage sometimes peaks at only 20-30%. Though I don't have clear…
-
I am using the model `vehicle-detection-0202` in its OpenVINO IR format, running in an OVMS environment. However, when I try to send requests to the model running on a CPU, I can get back the expected res…
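For reference, OVMS exposes a TensorFlow-Serving-compatible REST API, so a predict request is a POST to `/v1/models/<name>:predict` with an `instances` body. A sketch of building such a request; the host, port, and dummy input shape below are placeholders, not values from this report:

```python
import json

def build_predict_request(model_name, batch):
    """Build a TFS-style REST predict request for OVMS.
    Host/port are placeholders (assumption)."""
    url = f"http://localhost:9001/v1/models/{model_name}:predict"
    body = json.dumps({"instances": batch})
    return url, body

url, body = build_predict_request("vehicle-detection-0202",
                                  [[[0.0] * 3] * 2])  # dummy input data
# A real call would then be, e.g.:
#   import urllib.request
#   req = urllib.request.Request(url, data=body.encode(),
#                                headers={"Content-Type": "application/json"})
#   resp = urllib.request.urlopen(req)
```

Comparing such a raw request against what the client library sends can help isolate whether the problem is in the request shape or in the server configuration.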
lugi0 updated 8 months ago
-
## ❓ Questions and Help
```
Traceback (most recent call last):
  File "tools/train_net.py", line 24, in <module>
    from maskrcnn_benchmark.modeling.detector import build_detection_model
  File "/data/lost+f…
```
-
Found a bug? Please fill out the sections below. 👍
### Describe the bug
After creating an isolated environment in Anaconda using Python 3.11 (I think; I used whatever worked) and typing operat…