sled-group / chat-with-nerf

Chat with NeRF enables users to interact with a NeRF model by typing in natural language.
https://chat-with-nerf.github.io
Apache License 2.0

ImportError: cannot import name 'KeywordsStoppingCriteria' from 'llava.model.utils' #19

Closed Orwlit closed 1 year ago

Orwlit commented 1 year ago

Thanks for your repo!

I noticed that you have integrated LLaVA into this project, so I cloned LLaVA v1.0.2 into the project root dir and installed it by running cd LLaVA; pip install -e . I am sure that all dependencies are installed properly. I am using Python 3.10 and CUDA v11.7.

However, when I ran export $(cat .env | xargs); gradio chat_with_nerf/app.py, I encountered an ImportError in chat-with-nerf/chat_with_nerf/visual_grounder/captioner.py. Specifically, this line: from llava.model.utils import KeywordsStoppingCriteria

I then searched for KeywordsStoppingCriteria in chat-with-nerf/LLaVA/llava/model/utils.py and found nothing related to it. The full content of chat-with-nerf/LLaVA/llava/model/utils.py is:

from transformers import AutoConfig

def auto_upgrade(config):
    cfg = AutoConfig.from_pretrained(config)
    if 'llava' in config and 'llava' not in cfg.model_type:
        assert cfg.model_type == 'llama'
        print("You are using newer LLaVA code base, while the checkpoint of v0 is from older code base.")
        print("You must upgrade the checkpoint to the new code base (this can be done automatically).")
        confirm = input("Please confirm that you want to upgrade the checkpoint. [Y/N]")
        if confirm.lower() in ["y", "yes"]:
            print("Upgrading checkpoint...")
            assert len(cfg.architectures) == 1
            setattr(cfg.__class__, "model_type", "llava")
            cfg.architectures[0] = 'LlavaLlamaForCausalLM'
            cfg.save_pretrained(config)
            print("Checkpoint upgraded.")
        else:
            print("Checkpoint upgrade aborted.")
            exit(1)

It seems this is a version mismatch caused by a newer version of LLaVA. Could you please fix this bug? Thanks!
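(For reference: in more recent LLaVA releases this class appears to have moved to llava.mm_utils, so updating the import may be enough. Its essential behavior is just checking whether a stop keyword appears in the newly decoded output each generation step. A minimal, dependency-free sketch of that check; contains_stop_keyword is a hypothetical helper for illustration, not the upstream class:)

```python
def contains_stop_keyword(generated_text: str, keywords: list[str]) -> bool:
    # Mirrors the core check inside LLaVA's KeywordsStoppingCriteria:
    # generation stops as soon as any stop keyword (e.g. "###") appears
    # in the text decoded from the newly generated tokens.
    return any(keyword in generated_text for keyword in keywords)
```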

Full error message:

Launching in *reload mode* on: http://127.0.0.1:7860 (Press CTRL+C to quit)

Watching: '/home/uestc/.conda/envs/nerfstudio/lib/python3.10/site-packages/gradio', '/home/uestc/wyz/chat-with-nerf/chat_with_nerf'

[2023-09-12 15:54:42,870] INFO torch.distributed.nn.jit.instantiator [<module>] [instantiator.py:21] - Created a temporary directory at /tmp/tmpd5tj8jvr
[2023-09-12 15:54:42,870] INFO torch.distributed.nn.jit.instantiator [_write] [instantiator.py:76] - Writing /tmp/tmpd5tj8jvr/_remote_module_non_scriptable.py
[2023-09-12 15:54:42,900] [INFO] [real_accelerator.py:110:get_accelerator] Setting ds_accelerator to cuda (auto detect)
Process SpawnProcess-1:
Traceback (most recent call last):
  File "/home/uestc/.conda/envs/nerfstudio/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/home/uestc/.conda/envs/nerfstudio/lib/python3.10/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/home/uestc/.conda/envs/nerfstudio/lib/python3.10/site-packages/uvicorn/_subprocess.py", line 76, in subprocess_started
    target(sockets=sockets)
  File "/home/uestc/.conda/envs/nerfstudio/lib/python3.10/site-packages/uvicorn/server.py", line 61, in run
    return asyncio.run(self.serve(sockets=sockets))
  File "/home/uestc/.conda/envs/nerfstudio/lib/python3.10/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/home/uestc/.conda/envs/nerfstudio/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
    return future.result()
  File "/home/uestc/.conda/envs/nerfstudio/lib/python3.10/site-packages/uvicorn/server.py", line 68, in serve
    config.load()
  File "/home/uestc/.conda/envs/nerfstudio/lib/python3.10/site-packages/uvicorn/config.py", line 467, in load
    self.loaded_app = import_from_string(self.app)
  File "/home/uestc/.conda/envs/nerfstudio/lib/python3.10/site-packages/uvicorn/importer.py", line 21, in import_from_string
    module = importlib.import_module(module_str)
  File "/home/uestc/.conda/envs/nerfstudio/lib/python3.10/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/home/uestc/wyz/chat-with-nerf/chat_with_nerf/app.py", line 10, in <module>
    from chat_with_nerf.chat import agent
  File "/home/uestc/wyz/chat-with-nerf/chat_with_nerf/chat/agent.py", line 12, in <module>
    from chat_with_nerf.chat.grounder import ground_with_callback
  File "/home/uestc/wyz/chat-with-nerf/chat_with_nerf/chat/grounder.py", line 6, in <module>
    from chat_with_nerf.visual_grounder.captioner import BaseCaptioner
  File "/home/uestc/wyz/chat-with-nerf/chat_with_nerf/visual_grounder/captioner.py", line 10, in <module>
    from llava.model.utils import KeywordsStoppingCriteria
ImportError: cannot import name 'KeywordsStoppingCriteria' from 'llava.model.utils' (/home/uestc/wyz/chat-with-nerf/LLaVA/llava/model/utils.py)
XuweiyiChen commented 1 year ago

Hello,

Thank you for your interest in our project. While we greatly appreciate the active LLaVA community, please be advised that we may not be able to immediately accommodate every future change made by LLaVA. For reference, we worked with the LLaVA version associated with commit hash 8b21169, which is specified in our Dockerfile.
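(If you are installing LLaVA manually rather than via Docker, pinning to that commit might look like the following sketch. The commit hash 8b21169 comes from the comment above; the repo URL assumed here is the public LLaVA repository.)

```shell
# Clone LLaVA and pin it to the commit chat-with-nerf was developed against
git clone https://github.com/haotian-liu/LLaVA.git
cd LLaVA
git checkout 8b21169   # version referenced in the project's Dockerfile
pip install -e .       # editable install into the current environment
```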

Additionally, for enhanced performance, we've modified a few functions within NeRFStudio and LeRF to optimize VRAM usage and speed. To ensure a seamless installation experience, we recommend using the Docker image we provide.

Thanks again for reaching out, and let us know if you have any further questions!

barshag commented 1 year ago

Same here. I used the Docker image and reinstalled the specified version, but still got an error:

Launching in *reload mode* on: http://127.0.0.1:7860 (Press CTRL+C to quit)

Watching: '/home/user/.local/lib/python3.10/site-packages/gradio', '/workspace/chat-with-nerf/chat_with_nerf'

[2023-09-19 18:42:12,044] INFO torch.distributed.nn.jit.instantiator [<module>] [instantiator.py:21] - Created a temporary directory at /tmp/tmp_2oi7dxq
[2023-09-19 18:42:12,044] INFO torch.distributed.nn.jit.instantiator [_write] [instantiator.py:76] - Writing /tmp/tmp_2oi7dxq/_remote_module_non_scriptable.py
[2023-09-19 18:42:12,419] INFO chat_with_nerf [initialize_model_context] [model_context.py:52] - Search for all Scenes and Set the current Scene
[2023-09-19 18:42:12,419] INFO chat_with_nerf [initialize_model_context] [model_context.py:55] - Initialize Captioner
Process SpawnProcess-1:
Traceback (most recent call last):
  File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/home/user/.local/lib/python3.10/site-packages/uvicorn/_subprocess.py", line 76, in subprocess_started
    target(sockets=sockets)
  File "/home/user/.local/lib/python3.10/site-packages/uvicorn/server.py", line 61, in run
    return asyncio.run(self.serve(sockets=sockets))
  File "/usr/lib/python3.10/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/lib/python3.10/asyncio/base_events.py", line 646, in run_until_complete
    return future.result()
  File "/home/user/.local/lib/python3.10/site-packages/uvicorn/server.py", line 68, in serve
    config.load()
  File "/home/user/.local/lib/python3.10/site-packages/uvicorn/config.py", line 467, in load
    self.loaded_app = import_from_string(self.app)
  File "/home/user/.local/lib/python3.10/site-packages/uvicorn/importer.py", line 21, in import_from_string
    module = importlib.import_module(module_str)
  File "/usr/lib/python3.10/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/workspace/chat-with-nerf/chat_with_nerf/app.py", line 10, in <module>
    from chat_with_nerf.chat import agent
  File "/workspace/chat-with-nerf/chat_with_nerf/chat/agent.py", line 19, in <module>
    model_context: ModelContext = ModelContextManager.get_model_context()
  File "/workspace/chat-with-nerf/chat_with_nerf/model/model_context.py", line 41, in get_model_context
    cls.model_context = ModelContextManager.initialize_model_context()
  File "/workspace/chat-with-nerf/chat_with_nerf/model/model_context.py", line 56, in initialize_model_context
    captioner = ModelContextManager.initiaze_llava_captioner()
  File "/workspace/chat-with-nerf/chat_with_nerf/model/model_context.py", line 88, in initiaze_llava_captioner
    tokenizer = AutoTokenizer.from_pretrained(model_name)
  File "/home/user/.local/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 622, in from_pretrained
    tokenizer_config = get_tokenizer_config(pretrained_model_name_or_path, **kwargs)
  File "/home/user/.local/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 466, in get_tokenizer_config
    resolved_config_file = cached_file(
  File "/home/user/.local/lib/python3.10/site-packages/transformers/utils/hub.py", line 409, in cached_file
    resolved_file = hf_hub_download(
  File "/home/user/.local/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 110, in _inner_fn
    validate_repo_id(arg_value)
  File "/home/user/.local/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 158, in validate_repo_id
    raise HFValidationError(
huggingface_hub.utils._validators.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': '/workspace/pre-trained-weights/LLaVA/LLaVA-13B-v0'. Use `repo_type` argument if needed.
jedyang97 commented 1 year ago

@barshag your error is caused by not having downloaded the LLaVA checkpoint into /workspace, so when the ModelContextManager tries to load LLaVA, it fails.
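(Background: when AutoTokenizer.from_pretrained is given an absolute path that doesn't exist on disk, transformers falls back to treating it as a Hub repo id, which is what triggers the HFValidationError above. One way to fail faster with a clearer message is to check the checkpoint directory first; this is a sketch, and resolve_model_path and its error text are made up for illustration, not part of the repo.)

```python
import os

def resolve_model_path(model_name: str) -> str:
    # An absolute path that doesn't exist locally would be misinterpreted
    # by transformers as a Hub repo id ("namespace/repo_name"), producing a
    # confusing HFValidationError. Raise a clearer error up front instead.
    if os.path.isabs(model_name) and not os.path.isdir(model_name):
        raise FileNotFoundError(
            f"Expected LLaVA checkpoint directory at {model_name}; "
            "download the weights first (see README)."
        )
    return model_name
```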

I just updated the README to include instructions on how to construct the LLaVA checkpoint. Please try these steps and let us know if it doesn't work!