NJU-LHRS / LHRS-Bot

VGI-Enhanced multimodal large language model for remote sensing images.
Apache License 2.0

Is a GPU necessary? #22

Closed yousofaly closed 1 month ago

yousofaly commented 1 month ago

I am trying to run lhrs_webui.py with the --cpu-only flag but am running into a CUDA issue. This is running locally on my MacBook Pro:

(lhrs) LHRS-Bot % python lhrs_webui.py -c Config/multi_modal_eval.yaml --cpu-only  
[2024-08-07 09:33:54,229] [INFO] [real_accelerator.py:203:get_accelerator] Setting ds_accelerator to mps (auto detect)
[2024-08-07 09:33:54,358] torch.distributed.elastic.multiprocessing.redirects: [WARNING] NOTE: Redirects are currently not supported in Windows or MacOs.
--------- error here --------
cpu
/opt/anaconda3/envs/lhrs/lib/python3.10/site-packages/huggingface_hub/file_download.py:1150: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
Loading checkpoint shards:   0%|                                                                                                                                  | 0/2 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/Users//Desktop/projects/LHRS-Bot/lhrs_webui.py", line 918, in <module>
    model, tokenizer = _load_model_tokenizer(args)
  File "/Users//Desktop/projects/LHRS-Bot/lhrs_webui.py", line 80, in _load_model_tokenizer
    model = build_model(config, activate_modal=("rgb", "text"))
  File "/Users//Desktop/projects/LHRS-Bot/lhrs/models/build.py", line 23, in build_model
    model = build_vlm_model(config, activate_modal=activate_modal)
  File "/Users//Desktop/projects/LHRS-Bot/lhrs/models/build.py", line 14, in build_vlm_model
    model = UniBind(activate_modal, config)
  File "/Users//Desktop/projects/LHRS-Bot/lhrs/models/UniBind.py", line 44, in __init__
    self.add_module(modal, MODAL_MAPPING[modal](config))
  File "/User/Desktop/projects/LHRS-Bot/lhrs/models/text_modal.py", line 115, in __init__
    self.text_encoder = CustomLlamaForCausalLM.from_pretrained(
  File "/opt/anaconda3/envs/lhrs/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3694, in from_pretrained
    ) = cls._load_pretrained_model(
  File "/opt/anaconda3/envs/lhrs/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4104, in _load_pretrained_model
    new_error_msgs, offload_index, state_dict_index = _load_state_dict_into_meta_model(
  File "/opt/anaconda3/envs/lhrs/lib/python3.10/site-packages/transformers/modeling_utils.py", line 778, in _load_state_dict_into_meta_model
    set_module_tensor_to_device(model, param_name, param_device, **set_module_kwargs)
  File "/opt/anaconda3/envs/lhrs/lib/python3.10/site-packages/accelerate/utils/modeling.py", line 317, in set_module_tensor_to_device
    new_value = value.to(device)
  File "/opt/anaconda3/envs/lhrs/lib/python3.10/site-packages/torch/cuda/__init__.py", line 289, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled

The error looks like it's coming from build_vlm_model in build.py, which calls UniBind. But shouldn't passing the --cpu-only arg on the command line set the device to 'cpu'?

python lhrs_webui.py -c Config/multi_modal_eval.yaml --cpu-only
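
My understanding is that the flag should end up doing something like the following. This is only a hypothetical sketch of the usual pattern, not the actual lhrs_webui.py code:

import torch

def pick_device(cpu_only: bool) -> torch.device:
    # With --cpu-only set, never initialize CUDA or MPS.
    if cpu_only:
        return torch.device("cpu")
    if torch.cuda.is_available():
        return torch.device("cuda")
    return torch.device("cpu")

The cpu printed in the output above suggests the device itself is resolved correctly, so whatever device_map from_pretrained receives may be what still points at CUDA.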

pUmpKin-Co commented 1 month ago

Hi~

We haven't tested on CPU with Apple Silicon. It seems the error happens while loading the LLM. You can try changing this line to cpu.
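
Something along these lines should keep all of the weights on CPU. This is only a sketch, assuming the loading call accepts the standard transformers from_pretrained keyword arguments; the checkpoint path is a placeholder:

import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "path/to/llama-checkpoint",  # placeholder; use your local LLM path
    device_map="cpu",            # place every weight on CPU, no CUDA init
    torch_dtype=torch.float32,   # full precision; fp16 is poorly supported on CPU
    low_cpu_mem_usage=True,      # stream shards instead of materializing twice
)

In LHRS-Bot the corresponding call is CustomLlamaForCausalLM.from_pretrained in lhrs/models/text_modal.py, which should accept the same keyword arguments.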