Murkyy opened 4 months ago
I just noticed this is the same bug encountered in #10.
I changed builder.py line 128 to `if 'geochat-7b' in model_name.lower():` and geochat_arch.py line 33 to `self.vision_tower = build_vision_tower(config, delay_load=False)`, but it didn't work. The error was:
RuntimeError: CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling cublasLtMatmulAlgoGetHeuristic( ltHandle, computeDesc.descriptor(), Adesc.descriptor(), Bdesc.descriptor(), Cdesc.descriptor(), Cdesc.descriptor(), preference.descriptor(), 1, &heuristicResult, &returnedResult)
Just like yours, after adding the 2 lines of code, the demo worked.
Could you be more specific about "i found a workaround by renaming the model downloaded from HF 'llava' (instead of geochat-7B)"?
The authors are using the LLaVA repository, so there are hardcoded strings in the code that require your weights to be named "llava-XXX". When following the demo installation steps, you download weights from HuggingFace named "geochat-7B"; thus, renaming these weights is a straightforward solution.
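To illustrate the kind of hardcoded name check involved (a sketch, not the actual GeoChat source; the function name and condition are assumptions modeled on LLaVA's builder.py):

```python
def is_llava_model(model_path: str) -> bool:
    # LLaVA-style loaders branch on the model directory name, so weights
    # saved under a folder like "geochat-7B" never take the multimodal
    # code path and the image_processor is never initialized.
    name = model_path.rstrip("/").split("/")[-1].lower()
    return "llava" in name

# A folder named geochat-7B fails the check...
assert not is_llava_model("models_geo/geochat-7B")
# ...while renaming the folder makes the loader treat it as a LLaVA model.
assert is_llava_model("models_geo/llava-geochat-7B")
```

This is why simply renaming the downloaded directory can be enough to unblock the demo.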
Could you please specify the location of that code?
https://github.com/mbzuai-oryx/GeoChat?tab=readme-ov-file#geochat-weights-and-demo
I downloaded those models from https://huggingface.co/MBZUAI/geochat-7B/tree/main into my "models_geo" directory. Which files from the image do I need to rename?
Thank you for your reply. I ran the demo successfully, but it doesn't work well. The screenshot below is the fourth demo question on the web page, and the chat's answer is bad. I have tried many questions, but most of the time GeoChat does not answer me correctly. I wonder if you could fix this bug next time you update the code on GitHub?
------------------ Original email ------------------ From: "mbzuai-oryx/GeoChat"; Sent: Wednesday, March 20, 2024, 5:02 PM; Subject: Re: [mbzuai-oryx/GeoChat] launching demo (Issue #13)
Hi @derowtosky, you have to change the name in config.json file from llava to geochat.
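For reference, a plausible place for this change is the `model_type` field near the top of config.json (the exact key is an assumption based on how LLaVA-style Hugging Face configs are dispatched; verify it against your downloaded file):

```json
{
  "model_type": "geochat"
}
```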
Hi @Murkyy, thank you for looking out. I have updated the code. It should work now without changing the model name. Let me know if you face any other issues.
The output is all , why?
I still have this error after following the workarounds mentioned above: RuntimeError: Internal: could not parse ModelProto from llava/tokenizer.model. Could anyone kindly advise how to resolve this? Thanks!
I believe the error should be fixed with the latest code update. Please ensure that the downloaded model path is set to geochat.
Hi, thanks for your great work!
I had some issues when launching the demo, as no image_processor was loaded by default (same bug as a comment mentioned in the YouTube demo video, iirc).
I found a workaround by renaming the model downloaded from HF to "llava" (instead of geochat-7B) and by adding 2 lines of code to the clip_encoder.py file, at line 86.
There is maybe a simpler fix, idk, but it worked for me and I could play with the demo.
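For anyone hitting the same missing image_processor, here is a guess at what a fix along those lines does, sketched as a minimal stand-in (the class and attribute names are borrowed from LLaVA's clip_encoder.py, but the actual two lines were not posted above, so this is an assumption, not the real patch; in the real code the processor would come from `CLIPImageProcessor.from_pretrained`):

```python
class CLIPVisionTower:
    """Minimal stand-in mimicking LLaVA's lazy-loading vision tower."""

    def __init__(self, vision_tower_name: str, delay_load: bool = False):
        self.vision_tower_name = vision_tower_name
        self.is_loaded = False
        self.image_processor = None
        if not delay_load:
            self.load_model()
        else:
            # The added lines likely ensure the processor exists even when
            # model loading is delayed, so the demo can preprocess images:
            self.load_processor()

    def load_processor(self):
        # Placeholder for CLIPImageProcessor.from_pretrained(self.vision_tower_name)
        self.image_processor = f"processor({self.vision_tower_name})"

    def load_model(self):
        self.load_processor()
        self.is_loaded = True

# With delay_load=True the processor is still initialized, avoiding the
# "no image_processor loaded" failure at demo startup.
tower = CLIPVisionTower("openai/clip-vit-large-patch14-336", delay_load=True)
assert tower.image_processor is not None
```

The design point is simply that the image preprocessor is cheap to build and needed at inference time, so it should not be gated behind the delayed weight load.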