-
I would like to ask: since open-llava-next updates clip-vit during training, does the open-sourced weight (
https://huggingface.co/Lin-Chen/open-llava-next-vicuna-7b ) already inc…
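One way to check this yourself is to compare the vision-tower tensors in the released checkpoint against the original CLIP weights: if they agree elementwise, the tower was not updated (or was reset) before release. A minimal sketch of the comparison logic, shown with plain dicts of floats so it is self-contained (in practice you would load both checkpoints with `transformers` and flatten each tensor):

```python
# Hypothetical helper (names and tolerance are assumptions): compare the
# tensors two checkpoints share, elementwise, within a small tolerance.
def towers_match(released: dict, original: dict, tol: float = 1e-6) -> bool:
    """True if every tensor name shared by both checkpoints agrees elementwise."""
    shared = released.keys() & original.keys()
    for name in shared:
        a, b = released[name], original[name]
        # Shape mismatch or any element differing beyond tol means the
        # vision tower was changed during training.
        if len(a) != len(b) or any(abs(x - y) > tol for x, y in zip(a, b)):
            return False
    return bool(shared)
```

If this returns `False` for the vision-tower keys, the released weight already includes the updated clip-vit.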
-
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
model = AutoModelForCausalLM.from_pretrained("met…
```
-
If I set --api-key, the server always responds with "invalid api key".
e.g.
```
python -m sglang.launch_server --model-path lmms-lab/llama3-llava-next-8b --tokenizer-path lmms-lab/llama3-llava-next-8b…
```
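A hedged sketch of the client side, in case the key is set but not being sent: when the server is launched with `--api-key`, requests must carry the same key as a Bearer token. The port and payload below are assumptions for illustration, not values taken from the issue.

```python
import json
import urllib.request

API_KEY = "my-secret-key"            # must match the value passed to --api-key
BASE_URL = "http://localhost:30000"  # assumed port; use whatever --port you set

def build_request(prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style completion request carrying the Bearer header."""
    payload = json.dumps({
        "model": "lmms-lab/llama3-llava-next-8b",
        "prompt": prompt,
        "max_tokens": 32,
    }).encode()
    return urllib.request.Request(
        f"{BASE_URL}/v1/completions",
        data=payload,
        headers={
            "Content-Type": "application/json",
            # Omitting or mismatching this header is what typically
            # produces the "invalid api key" reply.
            "Authorization": f"Bearer {API_KEY}",
        },
    )

req = build_request("Describe the image.")
# urllib.request.urlopen(req) would send it to a running server.
```

If the header is present and correct and the error persists, that would point at a server-side bug rather than client usage.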
-
### feature
Thank you for the assistance you provided the other day; it was of immense help and I am truly appreciative.…
-
### System Info
transformers=4.44.0
python=3.11
cuda=12.4
### Who can help?
@zucchini-nlp
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Task…
-
### System Info
**Running in docker**
```
Target: x86_64-unknown-linux-gnu
Cargo version: 1.75.0
Commit sha: 00f365353ea5cf29438ba1d51baadaab79ae4674
Docker label: sha-00f3653
nvidia-smi:
…
```
-
### Your current environment
Docker image: `vllm/vllm-openai:v0.5.0.post1`
Running as part of a Docker Compose stack. Relevant sections of my `docker-compose.yaml` are bel…
-
llama.cpp supports AMD GPUs with ROCm, and OpenCL via CLBlast.
We should create an additional NuGet package for each of these backends and update the native loader to support them.
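To make the loader change concrete, here is a minimal sketch of the selection logic in Python (the library file names are illustrative assumptions, not the package's real artifacts):

```python
# Illustrative sketch only: pick a native library based on which
# acceleration stack is detected at startup. File names are assumptions.
def pick_native_library(has_cuda: bool, has_rocm: bool, has_opencl: bool) -> str:
    """Prefer CUDA, then ROCm, then OpenCL/CLBlast, falling back to CPU."""
    if has_cuda:
        return "libllama-cuda.so"
    if has_rocm:
        return "libllama-rocm.so"      # would ship in the new ROCm package
    if has_opencl:
        return "libllama-clblast.so"   # would ship in the new OpenCL package
    return "libllama-cpu.so"
```

The real loader would do this in the library's own language against the installed NuGet packages; the sketch only shows the precedence order.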
-
### Your current environment
```text
Collecting environment information...
PyTorch version: 2.3.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
…
```
-
Hi,
Niels here from the open-source team at Hugging Face. It's great to see you're releasing models on HF; I found your work through the paper page: https://huggingface.co/papers/2407.12580.
How…