ghazalsaheb opened 1 week ago
Hey!
CogVLM uses custom code from the Hub when you set trust_remote_code=True, and the model is not yet added to transformers. There is an open PR here to port the model to transformers, which is in progress afaik, cc @NielsRogge.
For the device mismatch issue, please open an issue in the THUDM/cogvlm2-llama3-chat-19B hub repo.
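As a hedged sketch of what loading such a Hub-hosted custom model looks like (the kwargs below are assumptions, not taken from the repo's README; check the model card for the recommended dtype and tokenizer options):

```python
# A minimal sketch, assuming transformers + accelerate are installed.
# trust_remote_code=True makes from_pretrained execute the modeling
# files stored in the Hub repo, since CogVLM2 has no native
# transformers implementation yet.

MODEL_ID = "THUDM/cogvlm2-llama3-chat-19B"

def load_kwargs() -> dict:
    """Keyword arguments for AutoModelForCausalLM.from_pretrained."""
    return {
        "trust_remote_code": True,   # run the repo's custom modeling code
        "device_map": "auto",        # let accelerate shard across visible GPUs
        "torch_dtype": "bfloat16",   # assumption: bf16 fits the 19B weights best
    }

# Actual loading (needs GPUs and a transformers/accelerate install):
# from transformers import AutoModelForCausalLM, AutoTokenizer
# tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
# model = AutoModelForCausalLM.from_pretrained(MODEL_ID, **load_kwargs())
```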
@zucchini-nlp I see, thanks. By THUDM/cogvlm2-llama3-chat-19B you mean here?
System Info
Who can help?
@ArthurZucker @amyeroberts @Narsil @muellerzr @SunMarc
Information
Tasks
examples
folder (such as GLUE/SQuAD, ...)
Reproduction
Human input when running the code: "please describe this image"
Expected behavior
The model should be distributed across multiple GPU cards and run inference with the data on a single card, generating a caption for each human prompt. Instead, I get the following error. (I also tried defining my own device map instead of using 'auto', similar to here, but it gives the same error.)
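For what it's worth, a common cause of this class of device-mismatch error is that the input tensors stay on one device while the sharded model expects them on the device of its first shard. A hedged sketch of the usual workaround (the helper name and the dict-of-tensors assumption are mine, not part of transformers or the CogVLM2 repo):

```python
# Hedged sketch: with device_map="auto", transformers exposes the device
# holding the first model shard as model.device; inputs should be moved
# there before calling generate(). move_to_device is a hypothetical
# helper, not a transformers API.

def move_to_device(inputs: dict, device) -> dict:
    """Move every tensor-like value (anything with a .to method) to `device`."""
    return {
        key: value.to(device) if hasattr(value, "to") else value
        for key, value in inputs.items()
    }

# Usage (assuming model/tokenizer were loaded with device_map="auto"):
# inputs = tokenizer("please describe this image", return_tensors="pt")
# inputs = move_to_device(dict(inputs), model.device)
# output = model.generate(**inputs, max_new_tokens=128)
```

This does not rule out a bug inside the repo's custom modeling code (in which case an issue on the Hub repo is the right place, as noted above), but it is worth checking before filing one.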