-
### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no si…
-
With the release of the new [Mistral NeMo 12B model](https://mistral.ai/news/mistral-nemo/), we now have weights that were pre-trained with FP8. It would be great if Unsloth could support 8-bit as well …
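For background on what FP8 weights actually encode, here is a minimal pure-Python sketch of decoding the OCP FP8 E4M3 format (1 sign bit, 4 exponent bits, 3 mantissa bits, bias 7 — the variant typically used for pre-trained weights; the function name is illustrative, not any library's API):

```python
def fp8_e4m3_to_float(byte: int) -> float:
    """Decode one OCP FP8 E4M3 byte: 1 sign, 4 exponent, 3 mantissa bits, bias 7."""
    sign = -1.0 if byte & 0x80 else 1.0
    exp = (byte >> 3) & 0xF
    man = byte & 0x7
    if exp == 0xF and man == 0x7:
        return float("nan")  # E4M3 reserves this pattern for NaN; it has no infinities
    if exp == 0:
        return sign * (man / 8) * 2 ** -6  # subnormal range
    return sign * (1 + man / 8) * 2 ** (exp - 7)

# fp8_e4m3_to_float(0x38) → 1.0; the largest normal value, 0x7E, decodes to 448.0
```

The narrow range (max 448) is why frameworks typically keep per-tensor or per-block scale factors alongside FP8 weights.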
-
Hey Andre, I have a question about model conversion. I am trying to use a trained MobileNet model, which is already TensorFlow-based. Do I need to perform model conversion for that? I don't think I s…
-
Hello All,
I have been saving Llama 3 in GGUF for weeks and it was working fine.
Only today I started getting the error. I tried everything, including the suggested git clone and make clean / make al…
-
### Search before asking
- [X] I have searched the HUB [issues](https://github.com/ultralytics/hub/issues) and found no similar bug report.
### HUB Component
Export
### Bug
I trained an Object D…
-
## ❓ Questions and Help
Hello,
Great paper! Kudos!
After reading it, I was wondering whether it is possible to use these quantization methods on a trained model from Hugging Face transformers, or shal…
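For intuition about what post-training quantization does to a trained layer's weights, here is a minimal pure-Python sketch of symmetric per-tensor int8 quantization (the function names are illustrative, not the paper's or transformers' API):

```python
def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: w ≈ q * scale, with q in [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127 or 1.0  # fall back to 1.0 for all-zero tensors
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 codes and the shared scale."""
    return [v * scale for v in q]

w = [0.5, -1.27, 0.02]
q, s = quantize_int8(w)
w_hat = dequantize(q, s)  # close to w; per-weight error is at most scale / 2
```

Real libraries add per-channel scales, zero-points for asymmetric ranges, and calibration over activations, but the round-trip above is the core operation.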
-
Hello,
I am currently playing with the unsloth library and it's performing amazingly, even on my local machine. Unfortunately, I have an issue with the model kind of "forgetting" its generic purpose…
-
### Describe the issue
I have a pre-trained CNN TensorFlow SavedModel, and I converted it to **.onnx form** as well as a **static quantized .onnx form**, and their inference latency at the…
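When comparing latency between a float ONNX model and its statically quantized counterpart, warm-up runs and averaged timings matter, or the first-call session initialization dominates. A small generic helper (the function name is hypothetical, not part of onnxruntime):

```python
import time

def measure_latency(fn, warmup=5, runs=50):
    """Average wall-clock seconds per call of fn(), after discarding warm-up calls."""
    for _ in range(warmup):
        fn()  # warm up caches / lazy initialization before timing
    start = time.perf_counter()
    for _ in range(runs):
        fn()
    return (time.perf_counter() - start) / runs

# Usage sketch: measure_latency(lambda: session.run(None, inputs))
# for an onnxruntime InferenceSession, once per model variant.
```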
-
**Describe the bug**
Unable to load the saved model after applying quantization-aware training.
**System information**
TensorFlow version (installed from source or binary): 2.2
TensorFlow Mode…
-
Prior to filing: check that this should be a bug instead of a feature request. Everything supported, including the compatible versions of TensorFlow, is listed in the overview page of each technique. …