rsaryev / talk-codebase

Tool for chatting with your codebase and docs using OpenAI, LlamaCpp, and GPT-4-All
MIT License
490 stars 41 forks

Can't seem to get it to work. #8

Closed nemesis911 closed 1 year ago

nemesis911 commented 1 year ago

tyron@Tyrons-Air ~ % talk-codebase chat .
🤖 Config path: /Users/tyron/.talk_codebase_config.yaml
Found model file at /Users/tyron/.cache/gpt4all/ggml-wizardLM-7B.q4_2.bin
llama.cpp: loading model from /Users/tyron/.cache/gpt4all/ggml-wizardLM-7B.q4_2.bin
error loading model: unrecognized tensor type 4

llama_load_model_from_file: failed to load model
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.10/bin/talk-codebase", line 8, in <module>
    sys.exit(main())
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/talk_codebase/cli.py", line 55, in main
    raise e
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/talk_codebase/cli.py", line 48, in main
    fire.Fire({
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/fire/core.py", line 141, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/fire/core.py", line 475, in _Fire
    component, remaining_args = _CallAndUpdateTrace(
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/fire/core.py", line 691, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/talk_codebase/cli.py", line 41, in chat
    llm = factory_llm(root_dir, config)
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/talk_codebase/llm.py", line 118, in factory_llm
    return LocalLLM(root_dir, config)
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/talk_codebase/llm.py", line 23, in __init__
    self.llm = self._create_model()
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/talk_codebase/llm.py", line 96, in _create_model
    llm = LlamaCpp(model_path=model_path, n_ctx=model_n_ctx, n_batch=model_n_batch, callbacks=callbacks, verbose=False)
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/load/serializable.py", line 74, in __init__
    super().__init__(**kwargs)
  File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for LlamaCpp
root
  Could not load Llama model from path: /Users/tyron/.cache/gpt4all/ggml-wizardLM-7B.q4_2.bin. Received error  (type=value_error)
Exception ignored in: <function Llama.__del__ at 0x11d5db9a0>
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/llama_cpp/llama.py", line 1445, in __del__
    if self.ctx is not None:
AttributeError: 'Llama' object has no attribute 'ctx'

rsaryev commented 1 year ago

Hello! Please change the Python version from /opt/homebrew/Cellar/python@3.11 to one matching python = ">=3.8.1,<4.0"
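The constraint quoted above reads like the `python` requirement from a Poetry `pyproject.toml`. A sketch of where it would live (section contents assumed for illustration):

```toml
[tool.poetry.dependencies]
python = ">=3.8.1,<4.0"
```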

rsaryev commented 1 year ago

Please update talk-codebase: pip install --upgrade talk-codebase==0.1.46