-
![image](https://github.com/kijai/ComfyUI-moondream/assets/64715158/bfd128b5-97ec-4801-aca4-a0c1207ae79f)
```
Prompt executed in 3.83 seconds
got prompt
Failed to validate prompt for output 17:
* (…
```
-
### Describe the issue
Issue:
In pretraining or fine-tuning, training always gets stuck after the log "Formatting inputs...Skip in lazy mode". Every time, I need to force a shutdown of my GPU server b…
-
Hi folks,
As there are multiple issues here regarding fine-tuning DINOv2 on custom data, as well as questions about semantic segmentation/depth estimation, image similarity, feature extraction, etc., th…
-
Hello,
This is great work! I have several questions:
1. In the technical report you mentioned
> We find that LoRA empirically leads to better performance than fully tuning across all c…
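The LoRA-vs-full-tuning comparison quoted above comes down to a low-rank reparameterization. As a minimal NumPy sketch (all dimensions here are illustrative, not from the report), the effective weight is the frozen matrix plus a scaled low-rank product, W' = W + (α/r)·BA:

```python
import numpy as np

# LoRA sketch: frozen weight W plus a trainable low-rank update (alpha / r) * B @ A.
# Shapes and hyperparameters below are illustrative placeholders.
d_out, d_in, r, alpha = 8, 6, 2, 4
rng = np.random.default_rng(0)

W = rng.normal(size=(d_out, d_in))  # frozen pretrained weight
A = rng.normal(size=(r, d_in))      # trainable down-projection
B = np.zeros((d_out, r))            # trainable up-projection, zero-initialized

# Effective weight used at inference; with B = 0 this starts exactly at W.
W_eff = W + (alpha / r) * (B @ A)
```

Because `B` starts at zero, the model's behavior is unchanged at initialization, and only the small `A`/`B` matrices are updated during training.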
-
I want to change the image encoder to OpenAI's clip-vit-large-patch14-336. I directly replaced the mm_vision_tower entry in the config.json file of Bunny's model with openai/clip-vit-large-patch14-33…
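For reference, a minimal sketch of that kind of edit (the surrounding keys and file location are placeholders, not Bunny's actual config):

```python
import json
import os
import tempfile

# Hypothetical config.json fragment; only the mm_vision_tower entry matters here.
cfg = {"model_type": "bunny", "mm_vision_tower": "old/vision-tower"}

path = os.path.join(tempfile.mkdtemp(), "config.json")
with open(path, "w") as f:
    json.dump(cfg, f)

# Rewrite the vision-tower entry to point at the CLIP checkpoint.
with open(path) as f:
    cfg = json.load(f)
cfg["mm_vision_tower"] = "openai/clip-vit-large-patch14-336"
with open(path, "w") as f:
    json.dump(cfg, f, indent=2)
```

Note that in LLaVA-style models the multimodal projector is trained against one specific encoder's features, so swapping the vision tower in the config alone generally produces mismatched features unless the projector is retrained.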
-
I tried fine-tuning my model after stage 1. Apparently, there are tokenization mismatches and the loss is 0.
Do you have any idea what the problem might be?
Thanks!
sh finetune_full.sh
```
WARNIN…
```
-
Model: https://modelscope.cn/models/OpenBMB/MiniCPM-V-2_6
Typically, fine-tuning a multimodal LLM uses a custom dataset. Here, we present a demo that can be run directly.
Before starting fine-tuning, please make sure your environment is properly set up.
```bash
git clone https://github.com/modelscope/swift.git
cd swift
…
```
-
Hi, I ran into a problem when I tried to run inference.py. It seems that the vocab_file fails to load; how can I fix it?
By the way, I see the parameter VOCAB_FILES_NAMES = {"vocab_file": "xmodel_…
-
I'm trying to run inference on https://huggingface.co/bczhou/TinyLLaVA-2.0B. I've downloaded the model files and am trying to load them locally. When I run the builder file, it attempts to download bc…
-
Hi, I'm playing with the OpenVLA model and I want to evaluate it on [SimplerEnv](https://github.com/simpler-env/SimplerEnv/tree/main)'s Google robot tasks. Since there is an `unnorm_key` argume…
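For context, a hedged sketch of what a per-dataset `unnorm_key` typically does — select the statistics used to map normalized model outputs back to raw action units. The key names and stats layout below are illustrative placeholders, not OpenVLA's actual format:

```python
# Illustrative table of per-dataset action-normalization statistics.
norm_stats = {
    "bridge_orig": {"action_mean": [0.0] * 7, "action_std": [1.0] * 7},
    "fractal20220817_data": {"action_mean": [0.1] * 7, "action_std": [0.5] * 7},
}

def unnormalize(action, unnorm_key):
    """Map a normalized action back to raw units using the chosen dataset's stats."""
    if unnorm_key not in norm_stats:
        raise KeyError(
            f"unknown unnorm_key {unnorm_key!r}; available: {sorted(norm_stats)}"
        )
    stats = norm_stats[unnorm_key]
    return [
        a * s + m
        for a, s, m in zip(action, stats["action_std"], stats["action_mean"])
    ]
```

The practical question for SimplerEnv evaluation is which dataset key matches the target robot's action convention; an invalid key should fail loudly rather than silently un-normalize with the wrong statistics.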