jlonge4 / local_llama

This repo showcases how to run a model locally and offline, free of OpenAI dependencies.
Apache License 2.0

problem with ollama #14

Open odevroed opened 6 months ago

odevroed commented 6 months ago

Hi,

First, this is a great project. I love it!

I tried to run v3, since I have a few LLMs installed with ollama (which works fine), but I keep hitting this error: ValueError: The number of documents in the SQL database (229) doesn't match the number of embeddings in FAISS (0). Make sure your FAISS configuration file points to the same database that you used when you saved the original index.

This happens when I ask any question, whether or not I upload a document first; both give the same error. I checked, and ollama is running on port 11434 (the default).
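A quick way to confirm the server is reachable (a minimal sketch; Ollama's root endpoint on the default port replies with a short status string):

```python
import requests

# The Ollama server answers GET / on its default port with a short
# status string, so a 200 here means the server itself is up.
resp = requests.get("http://localhost:11434")
print(resp.status_code, resp.text)  # expect: 200 Ollama is running
```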

For info, I'm on Fedora with Python 3.10.13 in a venv.

jlonge4 commented 6 months ago

@odevroed Thank you for your kind words, I'm glad you're enjoying it! That one is my fault: I need to push a fix that deletes the existing indexes each time you run the program. In the meantime, if you delete both .db files as well as the FAISS config files/json, it will work again.
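For anyone else hitting this, a stopgap cleanup along those lines might look like the sketch below. The glob patterns are illustrative; tighten them so they only match the index artifacts in your setup:

```python
from pathlib import Path

# Stopgap cleanup: remove the stale SQLite stores and FAISS index files
# from the app directory before re-running. Adjust the patterns so they
# only hit index artifacts, not unrelated files.
for pattern in ("*.db", "*.faiss", "*.pkl", "*.json"):
    for f in Path(".").glob(pattern):
        print(f"removing {f}")
        f.unlink()
```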

odevroed commented 6 months ago

This works perfectly! Thanks a lot.

I wonder, will you implement a more straightforward way to change the model than editing it in the code? Also, I tried gemma, and the results are not good. Which types of models work well with your v3?

jlonge4 commented 6 months ago

@odevroed That is also on my list haha. The plan is to include a dropdown that lets you select whichever model you like, dynamically swapping the prompt as well so it fits each model for the same task.
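Roughly something like the sketch below (assuming a Streamlit UI; the model names and templates here are illustrative, not the actual planned mapping):

```python
import streamlit as st

# Illustrative per-model prompt templates; the real mapping would live
# alongside the app's existing prompts.
PROMPT_TEMPLATES = {
    "llama2": "[INST] {context}\n\nQuestion: {question} [/INST]",
    "mistral": "<s>[INST] {context}\n\nQuestion: {question} [/INST]",
    "gemma": "<start_of_turn>user\n{context}\n\nQuestion: {question}<end_of_turn>\n<start_of_turn>model\n",
}

# The dropdown drives both the model name sent to ollama and the
# prompt format used for that model.
model = st.selectbox("Model", list(PROMPT_TEMPLATES.keys()))
prompt = PROMPT_TEMPLATES[model].format(context="...", question="...")
```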