-
## Describe the bug
On a Mac M3 Max, I am getting the errors below when running Llama 3.2 Vision 11B.
**CPU:**
```
mistral.rs % cargo run --release -- -i --isq Q4K vision-plain -m lamm-mit/Cephalo-Llama-3…
```
-
### Model Requests
https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct
and the 11B variant
### Which formats?
- [X] GGUF (llama.cpp)
- [ ] TensorRT (TensorRT-LLM)
- [X] ONNX (Onnx Runtime)
-
I am getting this error when running it.
Has anyone run into this before?
```
MaraScottMcBoatyUpscalerRefiner_v5
Failed to import transformers.models.conditional_detr.configuration_conditional_detr because…
```
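Not a fix, but a quick way to surface the underlying exception that transformers' lazy-import wrapper hides is to import the module directly; the module path below is taken from the error message above, everything else is just a diagnostic sketch:

```python
# Import the failing module directly to see the real root cause
# (usually a missing or incompatible dependency).
import importlib
import traceback

try:
    importlib.import_module(
        "transformers.models.conditional_detr.configuration_conditional_detr"
    )
except Exception:
    traceback.print_exc()
```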
-
> Please 👍 this feature request if you want chatgpt-shell to support different models (see [parent feature request](https://github.com/xenodium/chatgpt-shell/issues/244)). Also consider [sponsoring](h…
-
### 🚀 The feature, motivation and pitch
`MllamaForConditionalGeneration` models (such as `meta-llama/Llama-3.2-90B-Vision-Instruct`, `meta-llama/Llama-3.2-11B-Vision`, etc.) are composed of `MllamaV…
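For context, a minimal sketch of how such a checkpoint is typically loaded and run with `transformers` (>= 4.45); the image file, prompt, and generation settings below are illustrative placeholders, not part of this request:

```python
# Minimal sketch: loading an MllamaForConditionalGeneration checkpoint.
import torch
from PIL import Image
from transformers import AutoProcessor, MllamaForConditionalGeneration

model_id = "meta-llama/Llama-3.2-11B-Vision-Instruct"
model = MllamaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open("example.jpg")  # hypothetical local image
messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "Describe this image."},
    ]}
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(image, prompt, add_special_tokens=False,
                   return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(processor.decode(output[0], skip_special_tokens=True))
```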
-
This ticket tracks a wishlist of features you would like LiteLLM to support.
# **COMMENT BELOW 👇**
### With your request 🔥 - if we have any questions, we'll follow up in comments / via DMs
Respond …
-
❯ git clone https://github.com/sigoden/llm-functions
Cloning into 'llm-functions'...
remote: Enumerating objects: 763, done.
remote: Counting objects: 100% (366/366), done.
remote: Compressing obj…
-
Hi, I am trying to fine-tune LLaVA-NeXT on my custom dataset using the "finetune_clip.sh" shell script.
I have made some edits to the script for my convenience and to fit my task so far, like this:
```
…
```
-
```
lm-format-enforcer==0.10.7
torch==2.4.1+cu121
transformers==4.45.0
```
When using the library together with the newly released Llama3.2-11B-Instruct, we get a CUDA error.
```
model_id = …
```
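In case it helps with reproduction, here is a minimal sketch of the usual lm-format-enforcer + transformers wiring; the model id, schema, and prompt below are placeholders, not the exact setup from this report:

```python
# Constrained JSON generation with lm-format-enforcer and transformers.
import torch
from pydantic import BaseModel
from transformers import AutoModelForCausalLM, AutoTokenizer
from lmformatenforcer import JsonSchemaParser
from lmformatenforcer.integrations.transformers import (
    build_transformers_prefix_allowed_tokens_fn,
)

class Answer(BaseModel):  # placeholder schema
    name: str
    score: int

model_id = "meta-llama/Llama-3.2-3B-Instruct"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Build the prefix function that restricts tokens to valid JSON per the schema.
parser = JsonSchemaParser(Answer.model_json_schema())
prefix_fn = build_transformers_prefix_allowed_tokens_fn(tokenizer, parser)

inputs = tokenizer("Return a JSON object: ", return_tensors="pt").to(model.device)
output = model.generate(
    **inputs, max_new_tokens=64, prefix_allowed_tokens_fn=prefix_fn
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```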
-
I was trying to generate an image from text with Ollama, but I couldn't find a tutorial for that.