-
https://huggingface.co/OpenGVLab/InternVL-Chat-V1-5
We introduce InternVL 1.5, an open-source multimodal large language model (MLLM) to bridge the capability gap between open-source and proprietary…
-
### 📦 Environment
Vercel
### 📌 Version
v1.26.11
### 💻 Operating System
Windows
### 🌐 Browser
Chrome
### 🐛 Bug Description
When "Get Model List" is pressed on GitHub, it reports "0 models avai…
-
**Paper**
Character-level Convolutional Networks for Text Classification
**Introduction**
In the realm of text classification, most models have treated words as the basic building blocks. This r…
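As a minimal sketch of the character-level alternative the paper explores, the input can be encoded as indices into a fixed alphabet rather than as word tokens. The alphabet and padding scheme below are illustrative assumptions, not the paper's exact 70-character setup:

```python
# Sketch: encoding text as character indices, treating characters
# (not words) as the building blocks of the input.
# This alphabet is illustrative, not the paper's exact character set.
ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789 .,;:!?'\""
CHAR_TO_IDX = {c: i for i, c in enumerate(ALPHABET)}

def encode(text: str, max_len: int = 16) -> list[int]:
    """Map each character to its alphabet index; unknown chars -> -1.
    Pad/truncate to a fixed length, as fixed-size CNN inputs require."""
    ids = [CHAR_TO_IDX.get(c, -1) for c in text.lower()[:max_len]]
    ids += [-1] * (max_len - len(ids))  # pad with a 'blank' index
    return ids

print(encode("Hi!", max_len=5))  # 'h' -> 7, 'i' -> 8, '!' -> 41, then padding
```

A convolutional model would then consume these fixed-length index (or one-hot) sequences directly, with no word-level vocabulary at all.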
-
In AutoAWQ, do we only quantize the LLM part of LLaVA, or do we also quantize the ViT? Could we add support for quantizing vision models such as ViT or SigLIP?
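For background on the question above: the weight-only quantization that AWQ-style methods apply to the LLM's linear layers can be sketched generically in plain Python. This is a simple group-wise symmetric round-trip for illustration, not AutoAWQ's actual implementation (which also does activation-aware scaling):

```python
# Generic group-wise int4 weight quantization sketch (not AutoAWQ's code).
# AWQ-style pipelines typically quantize the LLM's linear weights only;
# the vision tower (ViT / SigLIP) is usually left in higher precision,
# which is what the question above asks about.

def quantize_group(weights: list[float], n_bits: int = 4):
    """Symmetric round-to-nearest quantization of one weight group."""
    qmax = 2 ** (n_bits - 1) - 1            # e.g. 7 for int4
    scale = max(abs(w) for w in weights) / qmax or 1.0  # guard all-zero group
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize_group(q: list[int], scale: float) -> list[float]:
    return [v * scale for v in q]

w = [0.7, -0.35, 0.05, 0.21]
q, s = quantize_group(w)
w_hat = dequantize_group(q, s)
max_err = max(abs(a - b) for a, b in zip(w, w_hat))
print(q, s, max_err)
```

Round-to-nearest keeps the per-weight error within half a scale step; supporting the vision tower would mean running the same kind of pass over its linear layers as well.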
-
# Papers
- Sapiens: Foundation for Human Vision Models
- A human foundation model from Meta (impressive!)
- 2D pose estimation, body-part segmentation, depth prediction, and normal prediction all in a single model …
-
- [ ] [MoAI/README.md at master · ByungKwanLee/MoAI](https://github.com/ByungKwanLee/MoAI/blob/master/README.md?plain=1)
# MoAI/README.md at master · ByungKwanLee/MoAI
## Description
![MoAI: Mixture…
-
Hi,
Congrats on the impressive work! Our paper, FAITHSCORE: Evaluating Hallucinations in Large Vision-Language Models, is closely related to your work.
I wonder if you would mind adding our work to your…
-
Thanks for the repo and models! When trying to run demo.sh with the 34b model (commented and uncommented the relevant lines), I am getting nonsense output (with the example video and prompt):
```
##…
-
### Your current environment
Code:
!pip install vllm
from vllm import LLM, SamplingParams
# choosing the large language model
llm = LLM(model="AdaptLLM/finance-chat")
# setting the p…
-
You will see the problem in the text below. This is with gpt-4o and version 0.5 of Agent Zero, but I have similar issues with other models.
User message ('e' to leave):
> Write a college level …