-
When connected online the model runs just fine; however, as soon as I disconnect I get a "failed to resolve huggingface.co" error. I can run other models offline but not this newer one. Is there any way around th…
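If the model files are already cached locally, one workaround (a sketch, assuming the model is loaded through the Hugging Face `transformers` stack rather than some other runtime) is to force offline mode so nothing tries to resolve huggingface.co at load time:

```python
import os

# Tell the Hugging Face hub and transformers to use only the local cache.
# These must be set before the libraries are imported.
os.environ["HF_HUB_OFFLINE"] = "1"
os.environ["TRANSFORMERS_OFFLINE"] = "1"

from transformers import AutoModelForCausalLM, AutoTokenizer

# "model-name" is a placeholder for the model that fails to load offline.
tokenizer = AutoTokenizer.from_pretrained("model-name", local_files_only=True)
model = AutoModelForCausalLM.from_pretrained("model-name", local_files_only=True)
```

This only helps if every file the model needs was downloaded while online; newer models sometimes ship extra files (processor or chat-template configs) that older ones did not, so a single missing file will still fail offline.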
-
I started using this one and really like it:
https://github.com/InternLM/xtuner/blob/main/xtuner/configs/llava/llama3_8b_instruct_clip_vit_large_p14_336/README.md
If you decide to add it I really l…
-
I get the following consecutive warnings when I use llama-7b-hf as the pretrained model to fine-tune on my own data. Is this a problem? Could anyone please guide me on how to fix it?
WARNING: tokenization mismat…
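A minimal sketch of the kind of check that usually triggers this warning, assuming it comes from LLaVA-style preprocessing where the length of the tokenized full conversation is compared against the summed lengths of its tokenized pieces; a mismatch typically points to tokenizer differences (fast vs. slow, or legacy behavior) rather than the data itself:

```python
from transformers import AutoTokenizer

# Hypothetical prompt pieces; the real template comes from your training config.
tokenizer = AutoTokenizer.from_pretrained("path/to/llama-7b-hf", use_fast=False)

prompt = "USER: Hello ASSISTANT:"
answer = " Hi there!"

full_ids = tokenizer(prompt + answer).input_ids
prompt_ids = tokenizer(prompt).input_ids
answer_ids = tokenizer(answer, add_special_tokens=False).input_ids

# If these two lengths disagree, the label masking during training is off by the
# same amount, which is what a "tokenization mismatch" warning reports.
print(len(full_ids), len(prompt_ids) + len(answer_ids))
```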
-
Running python3 test_httpserver_llava.py raises:
    offset = input_ids.index(self.config.image_token_index)
ValueError: 64002 is not in list

def test_streaming(args):
    url = f"{args.host}:{args.port}"
    …
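The ValueError means the configured image token id (64002 here) never appears in the prompt's input_ids, so `list.index` has nothing to find. A hedged sketch of a defensive check (the function name is illustrative, not the project's actual code):

```python
def find_image_token_offset(input_ids, image_token_index):
    # Return the position of the image token, or None if the prompt was
    # tokenized without it (e.g. the image placeholder was never inserted,
    # or the tokenizer's image token id differs from config.image_token_index).
    if image_token_index in input_ids:
        return input_ids.index(image_token_index)
    return None
```

If the offset comes back None, the likely causes are a prompt template that never inserted the image placeholder, or a tokenizer whose special tokens do not line up with the model config.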
-
After starting pretraining, there is a bug:
Traceback (most recent call last):
File "/data2/LLaVA-pp/LLaVA/llava/train/train_mem.py", line 4, in
train(attn_implementation="flash_attention_2")
…
-
I tested it and it has very good quality for its size:
https://huggingface.co/openbmb/MiniCPM-V-2
-
### What is the issue?
I've just installed ollama-rocm on CachyOS (https://archlinux.org/packages/extra/x86_64/ollama-rocm/) and the required dependencies, but loading the llama3-chatqa or llava-llam…
-
I don't know if this is a bug or not, just checking. The original llava LLM-image model allows you to send images. llava with llama3, for example, does not allow sending images, while I do think it's t…
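For reference, a sketch of how one might test image input against a locally pulled model with the ollama Python client (the model tag and image path are placeholders; behavior depends on whether the tag was built with a vision projector):

```python
import ollama

# Placeholder model tag and image path; swap in the model you pulled locally.
response = ollama.chat(
    model="llava-llama3",
    messages=[
        {
            "role": "user",
            "content": "What is in this picture?",
            "images": ["./example.jpg"],  # file path or raw image bytes
        }
    ],
)
print(response["message"]["content"])
```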
-
Hello author, I've been trying out e5-v. In long-text retrieval scenarios it indeed performs well, but when I tried to reproduce the paper I found the results don't match.
The experiments were run on Flickr30K; the results are below.
### Released e5-v weights
Test results are as follows:
image_retrieval_recall@1 | image_retrieval_recall@5 | image_retrieval_recall@10
--- | --- | ---
…
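For comparison, a minimal sketch of how recall@k is typically computed for image retrieval from a text-to-image similarity matrix; this is the standard metric, not necessarily the exact evaluation script used in the paper, and it assumes a one-to-one query/image pairing:

```python
import numpy as np

def recall_at_k(sim: np.ndarray, k: int) -> float:
    """sim[i, j] = similarity between text query i and image j;
    the ground-truth image for query i is assumed to be image i."""
    # Rank images for each query by descending similarity.
    ranks = np.argsort(-sim, axis=1)
    # Hit if the ground-truth image is among the top-k candidates.
    hits = (ranks[:, :k] == np.arange(sim.shape[0])[:, None]).any(axis=1)
    return float(hits.mean())

# Random scores just to show the call shape; a real sim comes from e5-v embeddings.
sim = np.random.randn(1000, 1000)
for k in (1, 5, 10):
    print(f"image_retrieval_recall@{k} = {recall_at_k(sim, k):.3f}")
```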
-
Problem encountered:
AppData\Local\Programs\Python\Python311\Lib\site-packages\ollama\_client.py", line 85, in _stream
raise ResponseError(e.response.text, e.response.status_code) from None
…
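The _stream helper in the ollama Python client re-raises HTTP errors from the server as ollama.ResponseError, so wrapping the call lets you see the server's status code and message, which usually points at the real cause (model not pulled, out of memory, malformed request, ...). A sketch with a placeholder model name:

```python
import ollama

try:
    # Placeholder model name; use whichever model triggered the error.
    for chunk in ollama.generate(model="llava", prompt="Describe a cat.", stream=True):
        print(chunk["response"], end="", flush=True)
except ollama.ResponseError as err:
    # error and status_code carry the HTTP response body and code from the server.
    print(f"\nserver returned {err.status_code}: {err.error}")
```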