khoj-ai / khoj

Your AI second brain. Self-hostable. Get answers from the web or your docs. Build custom agents, schedule automations, do deep research. Turn any online or local LLM into your personal, autonomous AI (e.g gpt, claude, gemini, llama, qwen, mistral).
https://khoj.dev
GNU Affero General Public License v3.0
14.19k stars 705 forks

Problem initiating model #610

Closed edbock closed 9 months ago

edbock commented 10 months ago

I'm trying out the local option of Khoj. I have the Python packages, Postgres, and pgvector all installed and working correctly. Both the web and desktop interfaces work. When I query Khoj, it successfully downloads the gpt4all model I chose. However, after that Khoj gives me the error:

LLModel.list_gpu() missing 1 required positional argument: 'n_ctx'

I don't have a GPU on this machine. Could that have something to do with it?

    /home/ed/.local/lib/python3.10/site-packages/khoj/processor/conversation/offline/utils.py:26 in download_model

      23 │   │   # 2. Machine has GPU
      24 │   │   # 3. GPU has enough free memory to load the chat model
      25 │   │   device = (
    ❱ 26 │   │   │   "gpu" if state.chat_on_gpu and
              gpt4all.pyllmodel.LLModel().list_gpu(chat_model_config["path"]) else "cpu"
      27 │   │   )
      28 │   except ValueError:
      29 │   │   device = "cpu"

    TypeError: LLModel.list_gpu() missing 1 required positional argument: 'n_ctx'
    INFO     127.0.0.1:42044 - "GET /api/chat?q=where%20is%20Waldo%3F&n=5&client=web&stream=true HTTP/1.1" 500
chrisBinder commented 10 months ago

Hey, I ran into the same problem. Try running Khoj with the --disable-chat-on-gpu option: khoj --anonymous-mode --disable-chat-on-gpu

edbock commented 9 months ago

That worked! Thank you very much 🙏️

debanjum commented 9 months ago

@edbock thanks for raising this issue, and @chrisBinder thanks for providing the mitigation! I've also pushed a fix for the issue in #611, so it no longer fails where it did for @edbock. The failure was due to a change in the signature of GPT4All's list_gpu method in the latest release of that dependency. But you'd still need to pass --disable-chat-on-gpu if you're running on a machine without a GPU.
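
For anyone curious what a robust version of the failing device-selection logic looks like: the traceback shows Khoj catching only ValueError around the GPU probe, so the TypeError from the changed list_gpu signature escaped. A minimal sketch of a more defensive fallback is below. Note this is an illustration, not the actual patch in #611; pick_device and the list_gpus callable are hypothetical stand-ins for Khoj's internal logic and gpt4all's GPU enumeration.

```python
def pick_device(chat_on_gpu: bool, list_gpus) -> str:
    """Choose "gpu" only when GPU chat is enabled and a GPU is detected.

    `list_gpus` is a hypothetical callable standing in for gpt4all's GPU
    enumeration (e.g. LLModel().list_gpu(...)). Any probe failure, including
    a TypeError from an upstream signature change like the missing `n_ctx`
    argument seen in this issue, falls back to CPU instead of crashing.
    """
    try:
        # Short-circuits: the GPU probe never runs when GPU chat is disabled.
        return "gpu" if chat_on_gpu and list_gpus() else "cpu"
    except (ValueError, TypeError):
        # Treat probe errors (no GPU, or incompatible dependency API)
        # the same as "no usable GPU".
        return "cpu"
```

The design point is simply to widen the except clause (or pin the dependency version) so that an upstream API change degrades to CPU inference rather than a 500 on every chat request.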