Leon-Sander / local_multimodal_ai_chat

GNU General Public License v3.0

OSError: [WinError -529697949] Windows Error 0xe06d7363 while implementing llava after inserting image and pressing submit button #4

Closed AyushPathak2610 closed 5 months ago

AyushPathak2610 commented 5 months ago

    OSError: [WinError -529697949] Windows Error 0xe06d7363

    Traceback:
    File "D:\projects\ml ai\LOCAL_MULTIMODAL_AI_CHAT\chat_venv\Lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 535, in _run_script
        exec(code, module.__dict__)
    File "D:\projects\ml ai\LOCAL_MULTIMODAL_AI_CHAT\app.py", line 118, in <module>
        main()
    File "D:\projects\ml ai\LOCAL_MULTIMODAL_AI_CHAT\app.py", line 94, in main
        llm_answer = handle_image(uploaded_image.getvalue(), st.session_state.user_question)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "D:\projects\ml ai\LOCAL_MULTIMODAL_AI_CHAT\image_handler.py", line 11, in handle_image
        chat_handler = Llava15ChatHandler(clip_model_path="./models/llava/ggml-model-q5_k.gguf")
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "D:\projects\ml ai\LOCAL_MULTIMODAL_AI_CHAT\chat_venv\Lib\site-packages\llama_cpp\llama_chat_format.py", line 1235, in __init__
        self.clip_ctx = self._llava_cpp.clip_model_load(
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "D:\projects\ml ai\LOCAL_MULTIMODAL_AI_CHAT\chat_venv\Lib\site-packages\llama_cpp\llava_cpp.py", line 174, in clip_model_load
        return _libllava.clip_model_load(fname, verbosity)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

All file paths and code are exactly as stated in the video. I tried searching on Stack Overflow, but found no proper solution.

Leon-Sander commented 5 months ago

The error is in this line:

    chat_handler = Llava15ChatHandler(clip_model_path="./models/llava/ggml-model-q5_k.gguf")

The ggml model is the large language model; on this line you must instead pass the mmproj file, which is the CLIP model:

    chat_handler = Llava15ChatHandler(clip_model_path="./models/llava/mmproj-model-f16.gguf")
    llm = Llama(
    model_path="./models/llava/llava_ggml-model-q5_k.gguf",
    chat_handler=chat_handler,
    logits_all=True,
    n_ctx=1024 # n_ctx should be increased to accomodate the image embedding
    )
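
For reference, here is a minimal, self-contained sketch of how the corrected handler can answer a question about an uploaded image. It assumes llama-cpp-python's `create_chat_completion` API with the image passed as a base64 data URI; the `handle_image` signature mirrors the traceback above, but the body (including the system prompt and the assumed JPEG data URI) is illustrative rather than the repository's exact `image_handler.py`:

    import base64

    from llama_cpp import Llama
    from llama_cpp.llama_chat_format import Llava15ChatHandler

    # Load the CLIP projector (mmproj) and the LLaVA language model once at startup.
    chat_handler = Llava15ChatHandler(clip_model_path="./models/llava/mmproj-model-f16.gguf")
    llm = Llama(
        model_path="./models/llava/llava_ggml-model-q5_k.gguf",
        chat_handler=chat_handler,
        logits_all=True,
        n_ctx=1024,  # increased to accommodate the image embedding
    )

    def handle_image(image_bytes: bytes, user_question: str) -> str:
        # The chat handler expects images as data URIs, so encode the raw bytes.
        image_uri = "data:image/jpeg;base64," + base64.b64encode(image_bytes).decode("utf-8")
        response = llm.create_chat_completion(
            messages=[
                {"role": "system", "content": "You are an assistant that answers questions about images."},
                {
                    "role": "user",
                    "content": [
                        {"type": "image_url", "image_url": {"url": image_uri}},
                        {"type": "text", "text": user_question},
                    ],
                },
            ]
        )
        return response["choices"][0]["message"]["content"]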