-
Hi, I am unable to find `parse-bbox-func-name=NvDsInferParseCustomRetinaface`
custom-lib-path=/opt/models/retinaface/nvdsinfer_customparser/libnvdsinfer_custom_impl_retinaface.so
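For context, the function named by `parse-bbox-func-name` has to be exported (with C linkage) from the shared library given in `custom-lib-path`, or nvinfer will not find the symbol. A rough sketch of how these keys sit in an nvinfer `[property]` group — only the last two keys are taken from the report above; the rest are illustrative placeholders:

```ini
[property]
# Illustrative nvinfer config fragment; values other than the last two
# keys are placeholders, not taken from the report.
network-type=0
# Custom bbox parser exported with C linkage from the library below:
parse-bbox-func-name=NvDsInferParseCustomRetinaface
custom-lib-path=/opt/models/retinaface/nvdsinfer_customparser/libnvdsinfer_custom_impl_retinaface.so
```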
-
As discussed in [http://forums.java.net/jive/thread.jspa?threadID=17972](http://forums.java.net/jive/thread.jspa?threadID=17972).
XmlAdapter should be extended to support inference of the component …
-
### Prerequisite
- [X] I have searched [Issues](https://github.com/open-mmlab/mmcv/issues) and [Discussions](https://github.com/open-mmlab/mmcv/discussions) but cannot get the expected help.
- [X]…
-
### Describe the issue
I was trying to run inference with the Stable Diffusion ONNX model (CompVis/stable-diffusion-v1-4) and got stuck at this error:
```
image = pipe(prompt).images[0]
File "onnx_openvpy3…
```
-
### 1. System information
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 22
- TensorFlow installation (pip package or built from source): `pip install tf-nightly` where python i…
-
First of all, a big thank you for the great stuff you provide. I only recently started working with RV, though I have been doing digital logic since 1996.
While bringing a mini implementation up with all internal memories …
-
Hi team,
I followed the steps in the article and ran the model on inf2.
The inference is not working; I always get a gRPC error like the one below when I run the inference model server:
python3 te…
-
Add a section about testing LLMs; this is crucial.
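A section like that could, for instance, show how to unit-test the glue code around a model without calling a real LLM, by swapping in a deterministic stub. A minimal sketch — every name here is invented for illustration, not from any existing library:

```python
class StubLLM:
    """Deterministic stand-in for a real model client: returns canned answers."""

    def __init__(self, canned):
        self.canned = canned

    def generate(self, prompt: str) -> str:
        return self.canned.get(prompt, "UNKNOWN")


def answer_with_fallback(llm, prompt: str) -> str:
    """Application glue under test: falls back when the model is unsure."""
    out = llm.generate(prompt)
    return out if out != "UNKNOWN" else "Sorry, I can't help with that."


# The stub makes the test fast and fully reproducible.
llm = StubLLM({"2+2?": "4"})
assert answer_with_fallback(llm, "2+2?") == "4"
assert answer_with_fallback(llm, "weather?") == "Sorry, I can't help with that."
```

Because the stub is deterministic, such tests exercise prompt handling and fallback logic without network calls or model nondeterminism.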
-
@tyagi-iiitv Did your pretrained model reach the performance reported in the PointPillars paper?
-
### 🐛 Describe the bug
Note:
I know that bfloat16 should obviously not be used on a CPU model.
Maybe it's a better practice to do `to(self.device).to(bfloat16)` than `.to(bfloat16).to(self.devi…
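On the ordering question, a quick CPU-only sketch (plain PyTorch; the module is illustrative) showing that both call orders end with the same parameter dtype, so the practical difference is where the intermediate copy lives while the cast happens:

```python
import torch

# Order A: move to the target device first, then cast there.
a = torch.nn.Linear(2, 2).to("cpu").to(torch.bfloat16)

# Order B: cast first, then move the already-bfloat16 parameters.
b = torch.nn.Linear(2, 2).to(torch.bfloat16).to("cpu")

# Either way the parameters end up as bfloat16 on the same device.
assert a.weight.dtype == b.weight.dtype == torch.bfloat16
```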