-
Hi, I have looked at the model and it is really powerful.
But the problem is with merging the models together; this consumes a lot of GPU power.
If it is possible to separate the models fro…
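If the goal is just to avoid needing a large GPU for the merge step, here is a minimal generic sketch of doing it entirely on CPU; the checkpoint paths and the simple additive merge rule are illustrative assumptions, not this project's actual merge script.
```python
# Hypothetical CPU-only merge sketch; file names and the additive rule are assumptions.
import torch

base = torch.load("base_model.bin", map_location="cpu")    # hypothetical base checkpoint
delta = torch.load("delta_model.bin", map_location="cpu")  # hypothetical delta checkpoint

merged = {}
for name, tensor in base.items():
    # add the delta weights where they exist, otherwise keep the base weights
    merged[name] = tensor + delta[name] if name in delta else tensor

torch.save(merged, "merged_model.bin")  # no GPU memory is touched at any point
```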
-
I'm trying to get fine-tuning working through the 3_sft.sh script but am encountering an error:
```
Traceback (most recent call last):
File "/root/VILA/llava/train/train_mem.py", line 36, in
…
```
-
```
======================================================================
ERROR: test_shape_0 (tests.test_transchex.TestTranschex)
-----------------------------------------------------------------…
```
-
I am trying to follow the tutorial here - https://www.jetson-ai-lab.com/openvla.html - but I get the attached error message.
I am able to run the nanoLLM demo here - https://www.jetson-ai-lab.com/tutor…
-
```
[2024-03-20 16:15:45,873] [INFO] [real_accelerator.py:110:get_accelerator] Setting ds_accelerator to cuda (auto detect)
config.json: 100%|████████████████████████████████████████████████████████…
```
-
First of all, thank you for your work. I have a question for you.
I want to fine-tune the complete text encoder model, but it seems that the model trained by ft-B-train-OpenAI-CLIP-ViT-L-14.py is a vis…
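In case it helps, here is a minimal sketch of the general idea of unfreezing only the text tower, written against the Hugging Face `CLIPModel` API rather than the repo's script; the model ID, learning rate, and the choice of what to unfreeze are assumptions for illustration.
```python
# Sketch: fine-tune only the text encoder of OpenAI CLIP ViT-L/14 (transformers API).
# Not the repo's ft-B-train-OpenAI-CLIP-ViT-L-14.py; hyperparameters are illustrative.
import torch
from transformers import CLIPModel

model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14")

# freeze everything, then unfreeze the text tower and its projection head
for p in model.parameters():
    p.requires_grad = False
for p in model.text_model.parameters():
    p.requires_grad = True
for p in model.text_projection.parameters():
    p.requires_grad = True

trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-5)
print(f"trainable params: {sum(p.numel() for p in trainable):,}")
```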
-
[[Open issues - help wanted!]](https://github.com/vllm-project/vllm/issues/4194#issuecomment-2102487467)
**Update [11/18] - In the upcoming months, we will focus on performance optimization for mul…
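For anyone landing here from search, recent vLLM releases already expose offline multi-modal inference roughly like the sketch below; the model ID and prompt template follow the LLaVA-1.5 example in the docs, and exact details depend on your vLLM version and model.
```python
# Rough sketch of vLLM offline inference with an image input (recent vLLM versions).
# The prompt template is model-specific; this one follows the LLaVA-1.5 convention.
from vllm import LLM
from PIL import Image

llm = LLM(model="llava-hf/llava-1.5-7b-hf")
image = Image.open("example.jpg")
prompt = "USER: <image>\nWhat is shown in this image? ASSISTANT:"

outputs = llm.generate({"prompt": prompt, "multi_modal_data": {"image": image}})
print(outputs[0].outputs[0].text)
```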
-
# Summary
Existing VLP models were trained from scratch, but this makes the pre-training cost too high and makes it difficult to leverage models that are already well trained (especially LLMs). Therefore, the approach is to bridge a frozen vision encoder and a frozen LLM through a Q-Former (Querying Transformer)…
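A rough conceptual sketch of that bridging idea (not the paper's actual architecture or sizes): a small set of learnable query tokens cross-attends to frozen image features, and the outputs are projected into the frozen LLM's embedding space as soft visual prompts.
```python
# Conceptual Q-Former-style bridge; dimensions and module choices are illustrative only.
import torch
import torch.nn as nn

class TinyQFormer(nn.Module):
    def __init__(self, num_queries=32, q_dim=768, vision_dim=1024, llm_dim=4096):
        super().__init__()
        # learnable query tokens: the only trainable interface between the frozen towers
        self.queries = nn.Parameter(torch.randn(num_queries, q_dim) * 0.02)
        self.cross_attn = nn.MultiheadAttention(q_dim, num_heads=8, kdim=vision_dim,
                                                vdim=vision_dim, batch_first=True)
        self.proj_to_llm = nn.Linear(q_dim, llm_dim)

    def forward(self, frozen_image_feats):            # (B, num_patches, vision_dim)
        B = frozen_image_feats.size(0)
        q = self.queries.unsqueeze(0).expand(B, -1, -1)
        # queries attend to the frozen vision features
        out, _ = self.cross_attn(q, frozen_image_feats, frozen_image_feats)
        # project into the frozen LLM's embedding space (soft visual prompts)
        return self.proj_to_llm(out)                   # (B, num_queries, llm_dim)

feats = torch.randn(2, 257, 1024)                      # dummy frozen ViT patch features
print(TinyQFormer()(feats).shape)                      # torch.Size([2, 32, 4096])
```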
-
Hello, I downloaded the APE-D model and configuration files from the paths provided in the README.
The script I ran is as follows:
`python demo/demo_lazy.py \
--config-file configs/LVISCOCOCOCOSTUFF_O365_OID_VGR_SA1B_REFCOCO_GQA_PhraseCut_Flickr30k/ape_deta/ape_deta_vitl_eva02_clip_vlf_…
-
I am trying to use this project with a vision-language model like https://huggingface.co/docs/transformers/en/model_doc/llava_next, but currently this repo does not support the vision part of the model. I …
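As a reference for what the vision path would need to handle, here is a minimal sketch using the Hugging Face LLaVA-NeXT classes from the linked docs; the model ID and prompt template follow those docs, and this is not an integration with this repo.
```python
# Sketch: LLaVA-NeXT inference via transformers, showing the image + text inputs involved.
# Not this repo's code; model ID and chat template follow the linked transformers docs.
import torch
from PIL import Image
from transformers import LlavaNextProcessor, LlavaNextForConditionalGeneration

model_id = "llava-hf/llava-v1.6-mistral-7b-hf"
processor = LlavaNextProcessor.from_pretrained(model_id)
model = LlavaNextForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

image = Image.open("example.jpg")
prompt = "[INST] <image>\nWhat is shown in this image? [/INST]"

inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(processor.decode(output[0], skip_special_tokens=True))
```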