-
### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no si…
-
Hey Andre, I had a doubt regarding model conversion. I am trying to use a MobileNet-trained model, which is already TensorFlow-based. Do I need to perform model conversion for that? I don't think I s…
-
Hello All,
I have been saving Llama 3 in GGUF format for weeks and it was working fine.
Only today I started getting the error. I tried everything, including the suggested git clone and make clean / make al…
-
### Search before asking
- [X] I have searched the HUB [issues](https://github.com/ultralytics/hub/issues) and found no similar bug report.
### HUB Component
Export
### Bug
I trained an Object D…
-
## ❓ Questions and Help
Hello,
Great paper! kudos!
After reading, I was wondering whether it is possible to use these quantization methods on a trained model from one of the Hugging Face Transformers, or shal…
-
Hello,
I am currently playing with the unsloth library and it's performing amazingly, even on my local machine. Unfortunately, I have an issue with the model kind of "forgetting" its generic purpose…
-
Hi,
I trained a YOLOv8 model and exported it to ONNX format using the quantization_recipe below. I set weight_bits=8 and activation_bits=8 to ensure the full-flow inference of the quantized model is …
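For readers unfamiliar with what `weight_bits=8` implies, the core operation is symmetric per-tensor int8 quantization: weights are mapped onto the integer range [-127, 127] with a single scale factor and dequantized back at (or before) inference. The sketch below is illustrative pure Python, not the recipe's actual implementation:

```python
# Minimal sketch of symmetric per-tensor int8 quantization, the idea
# behind weight_bits=8 / activation_bits=8 settings. Illustrative only;
# real toolchains quantize per-channel and fold scales into the graph.

def quantize_int8(values):
    """Map floats to int8 range [-127, 127] with one per-tensor scale."""
    scale = max(abs(v) for v in values) / 127.0
    q = [max(-127, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float values from the int8 codes."""
    return [v * scale for v in q]

weights = [0.5, -1.2, 0.03, 0.9]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(q, max_err)  # quantization error is bounded by the scale
```

The round-trip error per element is at most half the scale, which is why 8-bit weights usually cost little accuracy when the weight range is well-behaved.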
-
So, i was trying to run this in google colab:
```
!python /content/multi_token/scripts/serve_model.py \
--model_name_or_path mistralai/Mistral-7B-Instruct-v0.1 \
    --model_lora_path sshh12/M…
```
-
### Describe the issue
I have a pre-trained TensorFlow SavedModel CNN and I converted it to **.onnx form** as well as a **static quantized .onnx form**, and their inference latency at the…
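When comparing latency of an fp32 ONNX model against its statically quantized counterpart, measurement methodology matters (warmup runs, averaging over many iterations). A minimal sketch of such a harness, with dummy callables standing in for the real `session.run(...)` calls (which are an assumption here, not taken from the report above):

```python
# Hedged sketch: a wall-clock benchmark harness for comparing two
# inference callables. In a real comparison, each callable would wrap
# an onnxruntime InferenceSession's run() on fixed inputs.
import time

def mean_latency_ms(infer, warmup=5, runs=50):
    """Average wall-clock latency of infer() over `runs` calls, after warmup."""
    for _ in range(warmup):          # warmup: amortize lazy init / caches
        infer()
    start = time.perf_counter()
    for _ in range(runs):
        infer()
    return (time.perf_counter() - start) / runs * 1000.0

# Stand-in "models" (hypothetical workloads, just for demonstration).
fp32_model = lambda: sum(i * i for i in range(2000))
int8_model = lambda: sum(i * i for i in range(500))

print(f"fp32: {mean_latency_ms(fp32_model):.3f} ms")
print(f"int8: {mean_latency_ms(int8_model):.3f} ms")
```

Without warmup and averaging, a single timed call can easily make the quantized model look slower than the fp32 one due to first-run overheads.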
-
### Search before asking
- [X] I have searched the YOLOv8 [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/ultralytics/ultralytics/discussions) and fou…