Closed dawn-ech closed 4 months ago
Hi @dawn-ech,
This is caused by the auto device-map setting in the weight-loading code, which follows LLaVA. Setting CUDA_VISIBLE_DEVICES will solve this, like:
CUDA_VISIBLE_DEVICES=0 bash demo/demo.sh demo/1vn.jpg "In this picture, identify and locate all the people in the front."
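For reference, a device-agnostic alternative is to move the input tensor onto whatever device the vision tower's weights live on before the forward call. This is only a sketch with a stand-in module, not the repo's actual code (the real tower is a transformers CLIPVisionModel):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the vision tower; the real code uses
# transformers.models.clip.CLIPVisionModel, possibly sharded by device_map="auto".
vision_tower = nn.Linear(4, 4)

def encode(images: torch.Tensor) -> torch.Tensor:
    # Move the input to the device the tower's weights are on, so a
    # cuda:0 tensor does not hit a module that landed on cuda:1.
    tower_device = next(vision_tower.parameters()).device
    return vision_tower(images.to(tower_device))

out = encode(torch.randn(2, 4))
```

This sidesteps the mismatch without restricting the process to a single GPU.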
This problem is solved. Thanks!
Thanks for your great work! I am trying to run demo.sh on 2x 3090, but I get this error. The problem is located in llava/model/multimodal_encoder.py, at the line
image_forward_outs = self.vision_tower(images, output_hidden_states=True)
Here, images is on cuda:0, but when I print its device inside the forward function (transformers.models.clip.CLIPVisionModel.forward), I get cuda:1. Could you give me some suggestions about it?
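A minimal way to check where a module's weights and its inputs land, without editing the library's forward function, is a forward pre-hook. This is a generic PyTorch sketch with a stand-in module, not the repo's code:

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 4)  # stand-in for the CLIP vision tower

def report_devices(module, inputs):
    # Print input device vs. weight device to spot mismatches like a
    # cuda:0 input reaching a module placed on cuda:1.
    print("input on", inputs[0].device, "| weights on", module.weight.device)

model.register_forward_pre_hook(report_devices)
model(torch.randn(2, 4))
```

On a multi-GPU setup with device_map="auto", the two printed devices can differ, which is exactly the mismatch described above.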