-
### Search before asking
- [X] I have searched the YOLOv8 [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/ultralytics/ultralytics/discussions) and fou…
-
How about supporting ONNX in frugally? You could either add a protobuf importer for ONNX models or provide a tool that converts ONNX into the JSON format you already use. Just a thought. A header-only ONNX inference eng…
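For the converter route, a minimal sketch of what such a tool could start from, assuming the `onnx` and `protobuf` Python packages (the raw graph JSON would still need to be mapped onto frugally-deep's own schema):

```
import json

import onnx
from google.protobuf.json_format import MessageToJson

# Load the ONNX model and dump its protobuf graph as JSON.
# This is only the raw ONNX graph; a real converter would still have to
# translate it into the JSON layout frugally expects.
model = onnx.load("model.onnx")
graph_json = json.loads(MessageToJson(model))

with open("model.json", "w") as f:
    json.dump(graph_json, f, indent=2)
```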
-
```
2024-05-24 23:49:38 WARNING 05-24 15:49:38 utils.py:327] Not found nvcc in /usr/local/cuda. Skip cuda version check!
2024-05-24 23:49:38 INFO 05-24 15:49:38 config.py:379] Using fp8 data type to sto…
-
While parsing GAFs etc., ontobio will construct expressions for each line and send them to an inference engine to determine whether the annotations are (a) taxonomically invalid or otherwise logically incohe…
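As a rough, self-contained sketch of that flow (simplified stand-ins, not ontobio's actual classes or API):

```
from dataclasses import dataclass

# Hypothetical, simplified stand-ins: each GAF line becomes a small
# annotation "expression" that a toy inference step accepts or rejects.
@dataclass
class Annotation:
    gene: str
    go_term: str
    taxon: str

def parse_gaf_line(line):
    # Real GAF 2.x lines have 17 tab-separated columns; only three are used here.
    cols = line.rstrip("\n").split("\t")
    return Annotation(gene=cols[2], go_term=cols[4], taxon=cols[12])

def is_taxonomically_valid(ann, never_in_taxon):
    # Toy check: reject annotations whose GO term carries a never_in_taxon
    # constraint matching the annotated taxon.
    return ann.taxon not in never_in_taxon.get(ann.go_term, set())

line = "UniProtKB\tP12345\tABC1\t\tGO:0007165\t.\t.\t.\t.\t.\t.\t.\ttaxon:9606"
constraints = {"GO:0007165": {"taxon:2"}}  # e.g. never in Bacteria
print(is_taxonomically_valid(parse_gaf_line(line), constraints))  # True
```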
-
### 💡 Your Question
I'm trying to do inference with a TensorRT YOLO-NAS-POSE model. I have exported the model to ONNX as shown on the website:
```
export_result = yolo_nas_pose_s.export("yolo_nas…
-
**Describe the bug**
Hello. I was following the steps from this guide: https://community.amd.com/t5/ai/how-to-running-optimized-llama2-with-microsoft-directml-on-amd/ba-p/645190
At the end of step …
-
1
-
I know you are busy with front-end refactoring, but I was excited about the possibility of doing inference on larger datasets in Lora and wanted to leave this in the queue. It may also be worth keeping in …
-
Originally reported by: **Claudiu Popa (BitBucket: [PCManticore](http://bitbucket.org/PCManticore), GitHub: @PCManticore?)**
---
We can do something similar for Generator nodes, as we do for Functio…
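For context, a quick and purely illustrative look at what astroid inference yields for a call to a generator function today (it does not reproduce the specific change proposed here):

```
import astroid

# Inferring a call to a generator function yields a Generator object,
# much as FunctionDef nodes yield inferred function objects.
node = astroid.extract_node("""
def gen():
    yield 1

gen()  #@
""")
print(node.inferred())
```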
-
When I'm using "trtexec" to run the engine, the throughput is about 6 qps, but when I'm using my own python script, the throughput goes down to 3 qps, here's my code, please advice.
```
import numpy…