-
### What is the issue?
I tried to start llava:1.6 (or any similar LLaVA-based model) and the llama server terminated.
Llama 3 models and other non-LLaVA models work just fine.
GPU is: NVIDIA Ge…
-
I've been trying to deploy the new LLaVA-NeXT with SGLang on Modal, but I'm not sure why I'm getting "Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tun…
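The quoted message is a standard Transformers warning rather than an SGLang-specific error. Here is a minimal sketch of where it comes from and how the embedding table is usually kept in sync with added special tokens; the checkpoint path and the `<image>` token below are illustrative assumptions, not the reporter's actual setup:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "your-org/llava-next-checkpoint"  # hypothetical placeholder

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(MODEL_PATH)

# Adding special tokens (e.g. an image placeholder) on top of the base
# vocabulary triggers the warning above: it only reminds you that the new
# ids need embedding rows that were actually trained. For inference with a
# checkpoint that already contains those rows, the warning is benign.
num_added = tokenizer.add_special_tokens({"additional_special_tokens": ["<image>"]})
if num_added > 0:
    model.resize_token_embeddings(len(tokenizer))
```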
-
There is a new version of the amazing LLaVA model that uses Llama 3 or Phi-3:
https://huggingface.co/collections/MBZUAI/llava-llama-3-and-phi-3-mini-662b38b972e3e3e4d8f821bb
https://github.com/m…
-
File "F:\vicuna\oobabooga_windows\text-generation-webui\modules\ui_model_menu.py", line 201, in load_model_wrapper
shared.model, shared.tokenizer = load_model(shared.model_name, loader)
File …
-
### Checklist
- [X] 1. I have searched related issues but cannot get the expected help.
- [X] 2. The bug has not been fixed in the latest version.
### Describe the bug
I have fine-tuned xcomp…
-
### Question
Hello, I want to change the LLaVA base model from llama2 to llama3, but I encountered an error while executing these lines:
input_ids = torch.nn.utils.rnn.pad_sequence(
input_ids,
bat…
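For reference, this is roughly how that padding step looks when written out in full: a minimal sketch assuming `input_ids` is a list of 1-D LongTensors and `tokenizer` is the Llama 3 tokenizer, which ships without a pad token (a common source of errors when swapping the base model):

```python
import torch

# Llama 3 defines no pad token by default, so tokenizer.pad_token_id is None
# unless one is set explicitly; fall back to EOS for padding in that case.
pad_id = tokenizer.pad_token_id
if pad_id is None:
    pad_id = tokenizer.eos_token_id

input_ids = torch.nn.utils.rnn.pad_sequence(
    input_ids,            # list of 1-D tensors of varying length
    batch_first=True,     # produce a (batch, seq_len) tensor
    padding_value=pad_id,
)
attention_mask = input_ids.ne(pad_id)  # mask out the padded positions
```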
-
What am I doing wrong here?! Why can't I get image-to-prompt to work? I also tried other models, but I can only get txt2image to work.
![image](https://github.com/if-ai/ComfyUI-IF_AI_tools/assets/41647…
-
I reproduced LLaVA v1.5-llama3 myself on an A800 with the original training hyperparameters and scripts; stage 1 took about 5 h and stage 2 about 20 h. Could you share how you shortened the two stages to 11 h in total?
-
This is a ticket to track a wishlist of items you wish LiteLLM had.
# **COMMENT BELOW 👇**
### With your request 🔥 - if we have any questions, we'll follow up in comments / via DMs
Respond …
-
frame #6: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x10c (0x7f3f32cae6fc in /opt/conda/envs/vrm/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #7: &lt;unknown function&gt; + 0xd3e95 (0x7f40271b5e95 in /o…
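These frames come from PyTorch's NCCL watchdog shutting the process group down after a collective timed out or failed on some rank. A hedged sketch of one common mitigation, raising the watchdog timeout when the process group is initialized (the 60-minute value is only an example; it widens the window but does not fix the underlying desync):

```python
import datetime
import torch.distributed as dist

# Give slow ranks (e.g. one still loading a large checkpoint) more time
# before ncclCommWatchdog aborts the job. The collective that actually hung
# still has to be diagnosed separately.
dist.init_process_group(
    backend="nccl",
    timeout=datetime.timedelta(minutes=60),
)
```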