-
Can you add Llama 3.2 9B to the model library?
-
## Basic information
- Board URL (official): https://www.apple.com/mac-mini/
- Board purchased from: Apple (di…
-
## Environment
- GPU: A100
- VRAM: 40 GB
- SWIFT version: v2.5.2
## Training script
```
CUDA_VISIBLE_DEVICES=0 PYTORCH_CUDA_ALLOC_CONF="expandable_segments:True" swift sft \
    --model_type llama3_2-11b-vision-instruct …
```
-
### What is the issue?
I was trying llama3.2 with images, and it seems it cannot access images from within the Docker container. A similar issue was reported in #1620, and I made sure I mounted the correct folder and…
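For context, a container can only see host files that are explicitly bind-mounted into it, and model requests must use the container-side path. A minimal sketch, assuming the Ollama image and hypothetical example paths:

```shell
# Bind-mount the host image folder into the container (read-only).
# Both paths here are hypothetical examples -- substitute your own.
docker run -d --name ollama \
  -v /home/user/images:/data/images:ro \
  -p 11434:11434 \
  ollama/ollama

# Any request sent to the model inside the container must then reference
# the container-side path (e.g. /data/images/photo.jpg), not the host path.
```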
-
**Is your feature request related to a problem? Please describe.**
While talking to our bot, the user is allowed to send an image. This image is sent to a vision-enabled LLM bot. Haystack ChatMessage c…
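Whatever wrapper Haystack ends up providing, most vision-enabled chat backends accept the OpenAI-style multi-part content layout, so the payload can be built by hand in the meantime. A minimal sketch (the `image_message` helper is hypothetical; only the message dict shape follows the OpenAI chat format):

```python
import base64

def image_message(text: str, image_bytes: bytes, mime: str = "image/jpeg") -> dict:
    """Build an OpenAI-style multimodal user message with an inline base64 image."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": text},
            {"type": "image_url", "image_url": {"url": f"data:{mime};base64,{b64}"}},
        ],
    }

# Dummy bytes stand in for real JPEG data in this sketch.
msg = image_message("What is in this picture?", b"\xff\xd8\xff")
```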
-
Hi dude, nice code base!
I have a few questions regarding the training time and want to double-check them with you.
I'm training the llama-3.2-vision-instruct-11B model on a customized dataset with fu…
-
Now that Llama 3.1 is out, it is sadly not loadable with the current text-generation-webui. I tried updating the transformers library, which makes the model loadable, but I then get an error when trying to use …
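Llama 3.1 introduced new rope-scaling config fields, so the loaded transformers build has to be new enough to parse them. As a rough guide (the exact 4.43.0 cutoff is an assumption worth checking against the transformers release notes), the version gate can be sketched as:

```python
def parse_version(v: str) -> tuple:
    """Parse a 'major.minor.patch' version string into a comparable tuple."""
    return tuple(int(part) for part in v.split(".")[:3])

# Assumption: Llama 3.1 rope-scaling support requires transformers >= 4.43.0.
MIN_TRANSFORMERS = "4.43.0"

def supports_llama_3_1(installed: str) -> bool:
    """Return True if the installed transformers version can load Llama 3.1."""
    return parse_version(installed) >= parse_version(MIN_TRANSFORMERS)

print(supports_llama_3_1("4.42.4"))  # False: too old, rope-scaling fields unknown
print(supports_llama_3_1("4.44.0"))  # True
```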
-
Trying to use `Llama3.2-11B-Vision-Instruct` with no PromptGuard / LlamaGuard, along with `llama-stack-apps/app/main.py`. Perhaps I can disable Safety with the Agents logic somehow?
Getting this er…
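One avenue worth trying is the agent configuration itself: the agent config in llama-stack carries shield lists, and an empty list is assumed here to mean no safety models are invoked. A hypothetical sketch (field names mirror the AgentConfig shape, but verify them against your installed llama-stack version):

```python
# Hypothetical agent config: empty shield lists are assumed to skip
# PromptGuard / LlamaGuard entirely. Verify field names against your
# llama-stack version before relying on this.
agent_config = {
    "model": "Llama3.2-11B-Vision-Instruct",
    "instructions": "You are a helpful assistant.",
    "input_shields": [],   # assumption: no PromptGuard on user inputs
    "output_shields": [],  # assumption: no LlamaGuard on model outputs
    "enable_session_persistence": False,
}
print(agent_config["input_shields"])  # []
```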