Open ControlNet opened 3 months ago
Try putting `CUDA_VISIBLE_DEVICES=xx` before the command. :)
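For illustration, the suggestion is to restrict which physical GPU the process can see, so that GPU becomes the process's `cuda:0`. A small sketch of the mechanism (stdlib only; the child command here is just a stand-in for the real launch):

```python
import os
import subprocess
import sys

# Setting CUDA_VISIBLE_DEVICES=1 when launching makes physical GPU 1 the
# child process's cuda:0. Equivalent shell form: CUDA_VISIBLE_DEVICES=1 python demo.py
# (demo.py is a placeholder for the real entry point.)
env = dict(os.environ, CUDA_VISIBLE_DEVICES="1")
out = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['CUDA_VISIBLE_DEVICES'])"],
    env=env, capture_output=True, text=True,
).stdout.strip()
print(out)  # -> 1
```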
Hi hshjerry, thank you for your reply. Yes, if only one GPU is exposed to the system, it works. However, we're working on a compositional system that loads multiple models onto multiple GPUs, so we have to move each model to its corresponding `cuda:x`, since the environment variable `CUDA_VISIBLE_DEVICES` is shared within the process.
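For illustration, the placement we need looks roughly like this (`place` is a hypothetical helper, not InternVL code, and small modules stand in for the real models; it falls back to CPU so the sketch runs anywhere):

```python
import torch

# CUDA_VISIBLE_DEVICES is a single setting shared by the whole process, so it
# cannot route different models to different GPUs. Each model must instead be
# moved explicitly to its own device.
def place(model: torch.nn.Module, index: int):
    """Move `model` to cuda:<index>, falling back to CPU when that GPU is absent."""
    if torch.cuda.device_count() > index:
        device = torch.device(f"cuda:{index}")
    else:
        device = torch.device("cpu")  # fallback so this sketch runs without GPUs
    return model.to(device), device

# Stand-ins for the real models of the compositional system.
vision_model, dev_a = place(torch.nn.Linear(4, 4), 0)
language_model, dev_b = place(torch.nn.Linear(4, 4), 1)
```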
Checklist
Describe the bug
Using the minimal reproduction from the documentation, but loading the model on a GPU other than `cuda:0`, such as `cuda:1`, the `chat` method fails to generate a response. In the documentation, the model is loaded as follows.
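(The original code block was lost in extraction; a documentation-style single-GPU load looks roughly like this — the checkpoint name, dtype, and flags are assumptions, not the verbatim snippet, and the load is guarded so the sketch is safe without a GPU:)

```python
import torch

path = "OpenGVLab/InternVL-Chat-V1-5"  # assumed checkpoint name, not verbatim
if torch.cuda.is_available():  # guard: only attempt the load when a GPU exists
    from transformers import AutoModel  # assumption: loaded via transformers
    model = AutoModel.from_pretrained(
        path, torch_dtype=torch.bfloat16, trust_remote_code=True
    ).eval().cuda()  # a bare .cuda() always targets the default device, cuda:0
```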
It works.
But with the following code:
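(This code block was lost in extraction; based on the description it differs from the documentation version only in the placement call — a guarded sketch, with the checkpoint name assumed:)

```python
import torch

path = "OpenGVLab/InternVL-Chat-V1-5"  # assumed checkpoint name, not verbatim
device = torch.device("cuda:1")        # any GPU other than the default cuda:0
if torch.cuda.device_count() > 1:      # guard: only load when a second GPU exists
    from transformers import AutoModel  # assumption: loaded via transformers
    model = AutoModel.from_pretrained(
        path, torch_dtype=torch.bfloat16, trust_remote_code=True
    ).eval().to(device)  # weights land on cuda:1, but chat() later calls .cuda()
```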
The model can be loaded onto the correct device, but the `chat` method fails to run. I think it is due to these lines: https://github.com/OpenGVLab/InternVL/blob/6a230b34cc04eb2ee51c3ea013362a57ab6a6dc9/internvl_chat/internvl/model/internvl_chat/modeling_internvl_chat.py#L288-L289
It should be `...to("<DEVICE>")` rather than just `.cuda()`.
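A minimal sketch of that fix, assuming the lines in question move input tensors with bare `.cuda()` calls (the tensor names here are hypothetical):

```python
import torch

def move_inputs(input_ids, attention_mask, device):
    # A bare .cuda() always targets cuda:0; .to(device) follows the model
    # instead, so the inputs end up on cuda:1 when the model was loaded there.
    return input_ids.to(device), attention_mask.to(device)

ids = torch.tensor([[1, 2, 3]])
mask = torch.ones_like(ids)
# CPU stand-in: with the model on cuda:1, pass torch.device("cuda:1") instead.
ids, mask = move_inputs(ids, mask, torch.device("cpu"))
```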
Reproduction
Environment
Error traceback