Hello,
Thank you for your interest in our project. While we greatly appreciate the active LLaVA community, please be advised that we may not be able to immediately accommodate every future change to LLaVA. For reference, we worked with the LLaVA version at commit hash 8b21169, which is pinned in our Dockerfile.
Additionally, we've modified a few functions within Nerfstudio and LERF to reduce VRAM usage and improve speed. To ensure a seamless installation experience, we recommend using the Docker image we provide.
Thanks again for reaching out, and let us know if you have any further questions!
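For anyone verifying their setup against that commit, below is a minimal sketch (not part of the project) that checks which commit an editable LLaVA install points at. It assumes LLaVA was installed from a git clone with `pip install -e .`, so the package resolves back into the checkout; the expected hash comes from the comment above.

```python
# A sketch: warn if the editable LLaVA install is not at the commit
# this project was developed against (8b21169, per the Dockerfile).
import subprocess
from pathlib import Path

import llava

# For an editable install, llava/__init__.py lives inside the git clone.
repo_dir = Path(llava.__file__).resolve().parents[1]
head = subprocess.run(
    ["git", "rev-parse", "--short", "HEAD"],
    cwd=repo_dir,
    capture_output=True,
    text=True,
    check=True,
).stdout.strip()
if not head.startswith("8b21169"):
    print(f"Warning: LLaVA checkout is at {head}, expected 8b21169")
```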
Same here. I used the Docker image and reinstalled the specified version, but got this error:
Launching in *reload mode* on: http://127.0.0.1:7860 (Press CTRL+C to quit)
Watching: '/home/user/.local/lib/python3.10/site-packages/gradio', '/workspace/chat-with-nerf/chat_with_nerf'
[2023-09-19 18:42:12,044] INFO torch.distributed.nn.jit.instantiator [<module>] [instantiator.py:21] - Created a temporary directory at /tmp/tmp_2oi7dxq
[2023-09-19 18:42:12,044] INFO torch.distributed.nn.jit.instantiator [_write] [instantiator.py:76] - Writing /tmp/tmp_2oi7dxq/_remote_module_non_scriptable.py
[2023-09-19 18:42:12,419] INFO chat_with_nerf [initialize_model_context] [model_context.py:52] - Search for all Scenes and Set the current Scene
[2023-09-19 18:42:12,419] INFO chat_with_nerf [initialize_model_context] [model_context.py:55] - Initialize Captioner
Process SpawnProcess-1:
Traceback (most recent call last):
File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/home/user/.local/lib/python3.10/site-packages/uvicorn/_subprocess.py", line 76, in subprocess_started
target(sockets=sockets)
File "/home/user/.local/lib/python3.10/site-packages/uvicorn/server.py", line 61, in run
return asyncio.run(self.serve(sockets=sockets))
File "/usr/lib/python3.10/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "/usr/lib/python3.10/asyncio/base_events.py", line 646, in run_until_complete
return future.result()
File "/home/user/.local/lib/python3.10/site-packages/uvicorn/server.py", line 68, in serve
config.load()
File "/home/user/.local/lib/python3.10/site-packages/uvicorn/config.py", line 467, in load
self.loaded_app = import_from_string(self.app)
File "/home/user/.local/lib/python3.10/site-packages/uvicorn/importer.py", line 21, in import_from_string
module = importlib.import_module(module_str)
File "/usr/lib/python3.10/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/workspace/chat-with-nerf/chat_with_nerf/app.py", line 10, in <module>
from chat_with_nerf.chat import agent
File "/workspace/chat-with-nerf/chat_with_nerf/chat/agent.py", line 19, in <module>
model_context: ModelContext = ModelContextManager.get_model_context()
File "/workspace/chat-with-nerf/chat_with_nerf/model/model_context.py", line 41, in get_model_context
cls.model_context = ModelContextManager.initialize_model_context()
File "/workspace/chat-with-nerf/chat_with_nerf/model/model_context.py", line 56, in initialize_model_context
captioner = ModelContextManager.initiaze_llava_captioner()
File "/workspace/chat-with-nerf/chat_with_nerf/model/model_context.py", line 88, in initiaze_llava_captioner
tokenizer = AutoTokenizer.from_pretrained(model_name)
File "/home/user/.local/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 622, in from_pretrained
tokenizer_config = get_tokenizer_config(pretrained_model_name_or_path, **kwargs)
File "/home/user/.local/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 466, in get_tokenizer_config
resolved_config_file = cached_file(
File "/home/user/.local/lib/python3.10/site-packages/transformers/utils/hub.py", line 409, in cached_file
resolved_file = hf_hub_download(
File "/home/user/.local/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 110, in _inner_fn
validate_repo_id(arg_value)
File "/home/user/.local/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 158, in validate_repo_id
raise HFValidationError(
huggingface_hub.utils._validators.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': '/workspace/pre-trained-weights/LLaVA/LLaVA-13B-v0'. Use `repo_type` argument if needed.
File "/workspace/chat-with-nerf/chat_with_nerf/chat/agent.py", line 19, in <module>
model_context: ModelContext = ModelContextManager.get_model_context()
File "/workspace/chat-with-nerf/chat_with_nerf/model/model_context.py", line 41, in get_model_context
cls.model_context = ModelContextManager.initialize_model_context()
File "/workspace/chat-with-nerf/chat_with_nerf/model/model_context.py", line 56, in initialize_model_context
captioner = ModelContextManager.initiaze_llava_captioner()
File "/workspace/chat-with-nerf/chat_with_nerf/model/model_context.py", line 88, in initiaze_llava_captioner
tokenizer = AutoTokenizer.from_pretrained(model_name)
File "/home/user/.local/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 622, in from_pretrained
tokenizer_config = get_tokenizer_config(pretrained_model_name_or_path, **kwargs)
File "/home/user/.local/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 466, in get_tokenizer_config
resolved_config_file = cached_file(
File "/home/user/.local/lib/python3.10/site-packages/transformers/utils/hub.py", line 409, in cached_file
resolved_file = hf_hub_download(
File "/home/user/.local/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 110, in _inner_fn
validate_repo_id(arg_value)
File "/home/user/.local/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 158, in validate_repo_id
raise HFValidationError(
huggingface_hub.utils._validators.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': '/workspace/pre-trained-weights/LLaVA/LLaVA-13B-v0'. Use `repo_type` argument if needed
@barshag your error is caused by not having downloaded the LLaVA checkpoint into /workspace, so when the ModelContextManager tries to load LLaVA, it fails.
I just updated the README to include instructions on how to construct the LLaVA checkpoint. Please try those steps and let us know if they don't work!
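For context on the traceback above: when the string passed to `AutoTokenizer.from_pretrained` does not point at an existing local directory, transformers falls back to treating it as a Hugging Face Hub repo id, and `huggingface_hub` then rejects the absolute path as an invalid id, which is exactly the `HFValidationError` shown. A minimal sketch of a pre-flight guard (the path comes from the log; the error wording is illustrative):

```python
# Sketch of a pre-flight check: fail with a clear message if the local
# checkpoint directory is missing, instead of letting transformers fall
# back to interpreting the path as a Hub repo id.
from pathlib import Path

from transformers import AutoTokenizer

model_name = "/workspace/pre-trained-weights/LLaVA/LLaVA-13B-v0"  # path from the log
if not Path(model_name).is_dir():
    raise FileNotFoundError(
        f"LLaVA checkpoint not found at {model_name}; "
        "follow the README steps to construct it first."
    )
tokenizer = AutoTokenizer.from_pretrained(model_name)
```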
Thanks for your repo!
I noticed that you have integrated LLaVA into this project, so I cloned llava v1.0.2 into the project root dir and installed it using `cd LLaVA; pip install -e .`. I am sure that all dependencies are installed properly. I am using Python 3.10 and CUDA 11.7.
However, when I ran `export $(cat .env | xargs); gradio chat_with_nerf/app.py`, I encountered an ImportError in chat-with-nerf/chat_with_nerf/visual_grounder/captioner.py, specifically on this line: `from llava.model.utils import KeywordsStoppingCriteria`.
I then searched the full contents of chat-with-nerf/LLaVA/llava/model/utils.py and found nothing related to KeywordsStoppingCriteria.
This seems to be a version error caused by a newer version of llava. Can you please fix this bug? Thanks!
Full error message:
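In the meantime, a minimal compatibility shim may unblock the import. This is a sketch that assumes newer LLaVA releases moved the class to `llava.mm_utils`, while the commit this project pins (8b21169) still exposes it from `llava.model.utils`:

```python
# Compatibility shim (a sketch, not an official fix): import
# KeywordsStoppingCriteria from whichever module the installed
# LLaVA version provides it in.
try:
    # Location at the pinned commit 8b21169.
    from llava.model.utils import KeywordsStoppingCriteria
except ImportError:
    # Assumed location in newer LLaVA releases.
    from llava.mm_utils import KeywordsStoppingCriteria
```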