-
### System Info
Unrelated to this issue
### Who can help?
@zucchini-nlp @Narsil
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially s…
-
Hi,
I followed your code to evaluate the `llava-next` model, but I encountered the following errors (see the 2nd figure).
-
### Model description
https://github.com/huggingface/text-generation-inference/pull/1709
Since TGI has already added LLaVA support, I would like to know if there is any timeline for the LLaVA support o…
-
I reproduced llava-next video in xtuner and trained on a mixed image-video dataset. With the same data, the loss in the official llava framework decreases smoothly, but in xtuner it follows the trend below:
I set different `modality_length` values for image and video, so a batch should be mixed. Why does this happen?
-
### System Info
NA
### Information
- [ ] Docker
- [X] The CLI directly
### Tasks
- [X] An officially supported command
- [ ] My own modifications
### Reproduction
https://huggingface.co/docs/t…
-
I tried to integrate this with a Chat Interface.
If I provide an image it works; if I don't provide an image it breaks.
Please make it work for both cases, so that it can easily be integrated wit…
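The fix being asked for can be sketched as a small guard that only attaches image inputs when one is actually present, so a text-only turn never passes an empty image to the multimodal path (a minimal sketch; `build_inputs` is a hypothetical helper for illustration, not part of any library):

```python
def build_inputs(prompt, image=None):
    """Build a request payload that works for both text-only and
    image+text chat turns.

    Hypothetical helper: when no image is supplied, return a
    text-only payload instead of forwarding image=None to a
    multimodal processor, which is the kind of call that breaks.
    """
    if image is None:
        return {"text": prompt}
    return {"text": prompt, "images": [image]}
```

A chat interface can then call `build_inputs` unconditionally and dispatch on whether the returned payload contains an `"images"` key.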
-
### System Info
TGI v2.2.0 with the official Docker image.
### Information
- [x] Docker
- [ ] The CLI directly
### Tasks
- [x] An officially supported command
- [ ] My own modifications
### Repr…
-
### System Info
- `transformers` version: 4.44.0
- Platform: Linux-6.8.0-39-generic-x86_64-with-glibc2.39
- Python version: 3.10.14
- Huggingface_hub version: 0.24.5
- Safetensors version: 0.4.…
-
### Motivation
A quantising server for LLaVA-NeXT would be very useful.
### Related resources
https://github.com/LLaVA-VL/LLaVA-NeXT
### Additional context
_No response_
-
Nice work! Is there any CLI interface for inference? Thanks!