-
### What is the issue?
**Description:**
I encountered an issue where the **LLaMA 3.2 Vision 11b** model loads entirely into CPU RAM without utilizing GPU memory as expected. The issue occurs on m…
-
Hi, how do I get llama-3.2 to work with ipex_llm?
Here's my code.
```
import requests
import torch
from PIL import Image
from transformers import MllamaForConditionalGeneration, AutoProcessor
imp…
-
## Describe the bug
I'm trying to use a UQFF file in a local-only environment, but my sample code is still sending requests to Hugging Face.
I would like to know how to prevent these external requests…
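As a hedged sketch of one common fix: the Hugging Face Hub client (and libraries built on it) honors offline environment variables, so setting them before running the sample should stop it from reaching out to the network, assuming the files are already in the local cache or referenced by local path:

```shell
# Force offline / local-only operation for Hugging Face tooling.
export HF_HUB_OFFLINE=1        # hub client: never hit the network
export TRANSFORMERS_OFFLINE=1  # transformers: resolve from local cache only
echo "offline mode: HF_HUB_OFFLINE=$HF_HUB_OFFLINE"
```

Whether the UQFF loader respects these variables depends on it using the standard Hub client under the hood; if it takes an explicit local file path instead of a model ID, passing that path directly also avoids remote lookups.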
-
What are the minimum single-GPU requirements for fine-tuning? Does Unsloth support fine-tuning?
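A back-of-the-envelope estimate can bound the answer. The sketch below is a rough heuristic, not an official figure: it assumes QLoRA-style fine-tuning with 4-bit quantized base weights plus a flat overhead allowance (a hypothetical constant) for LoRA adapters, optimizer state, and activations, all of which vary with sequence length and batch size.

```python
def qlora_vram_gb(n_params_b: float, bits: int = 4, overhead_gb: float = 4.0) -> float:
    """Rough VRAM estimate for QLoRA fine-tuning.

    n_params_b  -- model size in billions of parameters
    bits        -- weight quantization width
    overhead_gb -- flat allowance for adapters, optimizer state, activations
    """
    weight_gb = n_params_b * bits / 8  # billions of params * bytes/param ~= GB
    return weight_gb + overhead_gb

# An 11B model in 4-bit: ~5.5 GB of weights, so roughly 9-10 GB total,
# which suggests a 12 GB card as a practical single-GPU floor.
print(round(qlora_vram_gb(11), 1))
```

Full-precision full fine-tuning is far heavier (weights, gradients, and Adam state at 16/32-bit each), which is why quantized LoRA is the usual single-GPU route.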
-
In the project https://github.com/InternLM/xtuner/tree/main/xtuner/configs/llava/llama3_8b_instruct_clip_vit_large_p14_336, there is an example of how to convert the llava-llama3 model to HF format:
`
…
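For reference, the general shape of that conversion uses xtuner's `convert pth_to_hf` subcommand; the checkpoint and output paths below are illustrative placeholders, not the project's actual file names:

```shell
# Illustrative invocation of xtuner's checkpoint conversion (echoed rather
# than executed, since the paths here are hypothetical).
CONFIG=llava_llama3_8b_instruct_clip_vit_large_p14_336  # config name (assumed)
PTH=./checkpoint.pth                                    # trained checkpoint (placeholder)
OUT=./llava_llama3_hf                                   # HF-format output dir (placeholder)
echo xtuner convert pth_to_hf "$CONFIG" "$PTH" "$OUT"
```

The exact config name and checkpoint path should be taken from the linked directory's README rather than from this sketch.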
-
# https://huggingface.co/tencent/Tencent-Hunyuan-Large
### Model Introduction
With the rapid development of artificial intelligence technology, large language models (LLMs) have made significant…
-
I love your project. I want to use it with a local ollama + llava setup, and I have tried many approaches, including asking ChatGPT.
I am on Windows 11. I tried Docker with no luck, and changed the API address in the settings from the front…
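One way to narrow this down is to confirm the Ollama server is reachable before debugging the app itself. The sketch below assumes a default Ollama install listening on its standard port:

```shell
# Default Ollama endpoint (assumption: stock install, default port).
OLLAMA_URL="http://localhost:11434"
# /api/tags lists installed models; a JSON response confirms the server is up.
curl -s "$OLLAMA_URL/api/tags" || echo "Ollama not reachable at $OLLAMA_URL"
```

Note that when the app runs inside Docker, `localhost` refers to the container itself, not the Windows host; with Docker Desktop the host is usually reachable as `http://host.docker.internal:11434` instead, which is a common fix for exactly this "works natively, fails in Docker" symptom.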
-
### Is there an existing issue / discussion for this?
- [X] I have searched the existing issues / discussions
### Is there an existing ans…
-
Dear all,
Thank you so much for sharing the llama3.2 vision model fine-tuning script so quickly!
I got the following error when running the demo:
```
The model weights are not tied. Please use t…
-
### Model description
MiniCPM-V is a series of vision-language models from OpenBMB.
We want to add support for MiniCPM-V-2 and later models.
### Open source status
- [x] The model implementation is av…