-
My installation path is E:\ComfyUI-aki-v1.3, but the models download to E:\comfyui instead.
-----------------------------------------------------------------
# ComfyUI Error Report
## Error Details
- **Node Type:** Joy_caption
- **Exception T…
-
Hi @dusty-nv, thanks for this amazing library! We're using it in a cool art project for Burning Man :-)
I tested the new LLaVA 1.6 (specifically https://huggingface.co/lmms-lab/llama3-llava-next-8b…
-
Hi! I would like to ask a few questions regarding the visual encoder part.
1. How does the SpatialBot model load the SigLIP pre-trained model? I have downloaded the `siglip-so400m-patch14-384` model…
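A minimal sketch (not SpatialBot's actual loading code, which is what the question asks about) of how a locally downloaded `siglip-so400m-patch14-384` checkpoint is typically loaded with the Hugging Face transformers API; the local directory path is a placeholder:

```python
# Sketch: load a locally downloaded SigLIP vision encoder with transformers.
from transformers import SiglipVisionModel, SiglipImageProcessor

local_path = "./siglip-so400m-patch14-384"  # hypothetical local directory
vision_tower = SiglipVisionModel.from_pretrained(local_path)
processor = SiglipImageProcessor.from_pretrained(local_path)
```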
-
```
got prompt
D:\Comfy_UI\ComfyUI\models\clip\siglip-so400m-patch14-384
!!! Exception during processing !!!
Traceback (most recent call last):
  File "d:\Comfy_UI\ComfyUI\execution.py", line 323, in …
```
-
Hi!! We were trying to LoRA-finetune LLaVA-Interleave for an autocompletion task on a dataset (DialogCC) that might contain many images (>10) per conversation.
- Is it possible to reduce the number …
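One way to approach the truncated question above (a hedged preprocessing sketch, not DialogCC's or LLaVA-Interleave's actual tooling; the conversation schema and the `MAX_IMAGES` cap are assumptions) is to cap the number of image turns per conversation before fine-tuning:

```python
# Hypothetical preprocessing: keep at most MAX_IMAGES image turns per
# conversation so sequence lengths stay tractable during LoRA fine-tuning.
MAX_IMAGES = 4  # assumed cap; tune to fit GPU memory

def cap_images(conversation, max_images=MAX_IMAGES):
    """Drop image turns beyond the cap; keep all text turns."""
    kept, n_images = [], 0
    for turn in conversation:  # turn: {"type": "image"|"text", ...} (assumed schema)
        if turn.get("type") == "image":
            if n_images >= max_images:
                continue  # skip surplus images
            n_images += 1
        kept.append(turn)
    return kept
```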
-
When loading the visual encoder (SigLIP model), I got the error below:
`ValueError: SiglipVisionModel does not support Flash Attention 2.0 yet.`
Is there a way to fix this?
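One common workaround (a sketch using the `attn_implementation` argument of transformers' `from_pretrained`): request SDPA or eager attention instead of Flash Attention 2.0, since the installed transformers release does not implement FA2 for SigLIP:

```python
# Avoid the Flash Attention 2.0 code path that SiglipVisionModel rejects.
from transformers import SiglipVisionModel

model = SiglipVisionModel.from_pretrained(
    "google/siglip-so400m-patch14-384",
    attn_implementation="sdpa",  # or "eager"
)
```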
-
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Is your feature request related to a problem? Please describe.
See https://github.com/milvus-io/milvus/discuss…
-
Hi,
Thank you for sharing the Mantis source code.
I trained your LLaMA3 model with SigLIP on my dataset. The model saves a checkpoint every 500 steps. I would like to merge the LoRA weights from…
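A hedged sketch of the merge step using the peft library (not Mantis-specific code; the base-model class, paths, and checkpoint directory are assumptions for illustration):

```python
# Load the LoRA adapter saved at a step checkpoint and fold it into the base weights.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("path/to/base-model")  # hypothetical path
model = PeftModel.from_pretrained(base, "output/checkpoint-500")   # hypothetical checkpoint dir
merged = model.merge_and_unload()  # merges LoRA deltas into the base weights
merged.save_pretrained("output/merged-500")
```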
-
With all the growing activity and focus on multimodal models, is this library restricted to tuning text-only LLMs?
Do we plan to add vision or, more generally, multimodal model tuning support?
-
```
INFO:mteb.cli:Running with parameters: Namespace(model='laion/CLIP-ViT-B-16-DataComp.XL-s13B-b90K', task_types=None, categories=None, tasks=['BLINKIT2IRetrieval'], languages=None, device=None, ou…
```
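For reference, roughly the same run can be expressed through mteb's Python API (a sketch assuming a recent mteb release where `mteb.get_model` and `mteb.get_tasks` are available, as the CLI log above suggests):

```python
import mteb

model = mteb.get_model("laion/CLIP-ViT-B-16-DataComp.XL-s13B-b90K")
tasks = mteb.get_tasks(tasks=["BLINKIT2IRetrieval"])
evaluation = mteb.MTEB(tasks=tasks)
results = evaluation.run(model)
```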