-
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Modified `qa_automation.py` to use an Ollama model:
```
llm = Ollama(model="deepseek-coder-v2:16b", request_timeo…
```
-
I get the following error while running `llava/eval/run_vila.py` on an H100 GPU:
```
root@7513903dd8b0:/src/VILA# python -W ignore llava/eval/run_vila.py --model-path Efficient-Large-Model/VIL…
```
-
### Motivation
It outperforms existing open-source models such as Intern-VL-1.5.
https://llava-vl.github.io/blog/2024-05-10-llava-next-stronger-llms/#:~:text=Live%20Demo-,Benchmark%20Results,-Results…
-
![image](https://github.com/InternLM/xtuner/assets/145842232/c3e1ea26-e1d0-456c-95a7-e87d3d09079c)
When I use the chat template recommended for llava-5-7B (llama3) (https://huggingface.co/xtuner/llava-…
-
We can get model info using the following URL
```
curl http://localhost:11434/api/show -d '{
"name": "llama3"
}'
```
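The same lookup can be scripted. Below is a minimal stdlib-only Python sketch; the endpoint and payload come from the curl command above, while the function names (`build_show_request`, `fetch_model_info`) are just illustrative:

```python
import json
import urllib.request

# Endpoint and payload taken from the curl command above.
OLLAMA_SHOW_URL = "http://localhost:11434/api/show"

def build_show_request(name):
    """Build the same POST request the curl command sends."""
    payload = json.dumps({"name": name}).encode("utf-8")
    return urllib.request.Request(
        OLLAMA_SHOW_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )

def fetch_model_info(name):
    """Send the request; requires a running Ollama server.

    The parsed JSON response includes fields such as "modelfile".
    """
    with urllib.request.urlopen(build_show_request(name)) as resp:
        return json.loads(resp.read())

req = build_show_request("llama3")
print(req.full_url)       # http://localhost:11434/api/show
print(req.data.decode())  # {"name": "llama3"}
```

With an Ollama server listening on port 11434, `fetch_model_info("llama3")` returns the parsed JSON shown in the response below.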
Response:
```
{
"modelfile": "# Modelfile generated by \"ollama …
```
-
![image](https://github.com/stavsap/comfyui-ollama/assets/60506524/ded0ffd9-c01e-460d-a912-15cb4b0664cc)
If I try to use llava-llama:8b, or anything other than "llava", it gets stuck at the p…
-
Hi, I am using exactly the same code as yours in `run_sft.sh`:
```
#!/bin/bash
CUR_DIR=$(pwd)
ROOT=${CUR_DIR}
export PYTHONPATH=${ROOT}:${PYTHONPATH}
VISION_MODEL=openai/clip-vit-large-pa…
```
-
### Discussed in https://github.com/ggerganov/llama.cpp/discussions/4350
Originally posted by **cmp-nct** December 7, 2023
I've just seen CogVLM, which is a Vicuna 7B language model behind a 9…
-
### 📦 Environment
Docker
### 📌 Version
v1.3.5
### 💻 Operating System
Ubuntu
### 🌐 Browser
Edge
### 🐛 Bug Description
I configured the default model to `llama2-chinese` via an environment variable, but when I open a new default session in a fresh browser it still prompts me to enter an OpenAI API Key, and I have to manually…
-
I merged the LoRA weights and used them for inference with your infer script, but I encountered the error:
(llava) root@bj1oj9u6aucjn-0:/x/tsw/llavapp/LLaVA-pp/LLaVA# python run_llava.py
[2024-05-10 09:15…