Closed: cicero34 closed 3 days ago
This looks like an interesting package. We have it installed and can start the script running, but then I get the following error message:

System initialization failed: LLM test failed: Ollama API request failed with status 404: {"error":"model 'custom-phi3-32k-Q4_K_M' not found"}

The model we are running is phi3:medium-128k, and that is entered in the Modelfile as "FROM phi3:medium-128k". Do we need to change the model or the Modelfile in some way?
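A quick way to see which model names the local Ollama server actually knows is its /api/tags endpoint. A minimal sketch, assuming the default localhost:11434 address:

```python
import requests

# Ask the local Ollama server which models it has installed
# (default address assumed; adjust if your server runs elsewhere).
resp = requests.get("http://localhost:11434/api/tags", timeout=10)
resp.raise_for_status()

# Each entry's "name" is the exact string the config must use.
for model in resp.json().get("models", []):
    print(model["name"])  # e.g. "phi3:medium-128k"
```

If custom-phi3-32k-Q4_K_M does not appear in that list, the 404 is expected: the script is asking Ollama for a model that was never created.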
When you created the model with Ollama ("ollama create model-name -f model-file"), you specified a model name. That model name is what needs to be entered into llm_config.py. Specifically, change these two lines and you should be good to go!
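As a hedged sketch of what those two lines might look like, assuming llm_config.py keeps its settings in plain module-level variables (the names below are illustrative, not necessarily the package's real ones):

```python
# llm_config.py -- illustrative excerpt; these variable names are assumptions,
# not necessarily the package's actual settings.

# Must exactly match a name the Ollama server reports: either a tag you
# pulled (e.g. "phi3:medium-128k") or a name you passed to `ollama create`.
LLM_MODEL = "phi3:medium-128k"

# Keep the context window consistent with whichever model is set above.
CONTEXT_WINDOW = 128000
```

The other route is to leave the config as shipped and create the model name it expects from your Modelfile: ollama create custom-phi3-32k-Q4_K_M -f Modelfile.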
Thanks for the quick responses. I had not been looking at the llm_config.py file. Now fixed!