Closed RohanChacko closed 6 days ago
Hello! You can specify which GPU runs inference.sh by setting CUDA_VISIBLE_DEVICES, e.g. CUDA_VISIBLE_DEVICES=0:
CUDA_VISIBLE_DEVICES=0 python llava/eval/run_llava_3d.py \
--model-path ChaimZhu/LLaVA-3D-7B \
--video-path playground/data/LLaVA-3D-Pretrain/scannet/scene0382_01 \
--query "The related object is located at [-0.085,1.598,1.310,0.159,2.089,1.627,-2.097,0.026,0.1000]. What state is this object?"
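To illustrate why this works: CUDA_VISIBLE_DEVICES restricts which physical GPUs the process can see, and the visible ones are renumbered from 0, so every tensor lands on the same device and cross-GPU mismatches are avoided. A minimal sketch (pin_gpu is a hypothetical helper, not part of LLaVA-3D; the variable must be set before any CUDA library is imported):

```python
import os

def pin_gpu(physical_index: int) -> None:
    # Expose only the chosen physical GPU to this process.
    # Inside the process it will appear as device 0.
    os.environ["CUDA_VISIBLE_DEVICES"] = str(physical_index)

pin_gpu(1)
print(os.environ["CUDA_VISIBLE_DEVICES"])  # "1"
```

Equivalently, prefix the launch command with the variable, as in the example above.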
Hi, I am trying to run the demo on two RTX 3090 GPUs. I run into an error when running run_llava_3d.py. The error occurs at
image_features = self.get_model().get_vision_tower()(images.flatten(0, 1))
in llava/model/llava_arch.py.