Open Sjdavis25 opened 2 months ago
Has your problem been solved? I tried Gemma2 and it doesn't work either.
No, unfortunately I'm still having the same problem, still waiting for some sort of response
Hey, sorry for the late reply. Can you share the logs, if possible?
Are you using admin/api/chat or the playground (UI)?
I'm using the playground UI
My problem is solved. You should pick the model that was uploaded, not the one you named. For me, I hid all the unused models (e.g. from OpenAI or GitHub), so I could get a clean list from Ollama.
When I try to pick a model I only get the ones I named. Am I uploading my models wrong?
Hey, delete the current model, then go to Admin > Application and turn on the "fetch Ollama model dynamically" option.
Got it to work thank you!!
I've seen online that Ollama works when using Gemma2, but nothing mentioning any Ollama version of llama3. I want to create a chatbot based on my local version of llama3, but every time I upload my Ollama version of llama3, no matter what the embedding method is, I can never receive a proper response. Whenever I try to use the playground chat I get the message "There was an error processing your request", and whether or not I have data sources uploaded does not seem to affect this issue either.
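For anyone hitting the same error: it may help to first confirm that Ollama itself can serve the model, independent of the app, before troubleshooting the playground. A minimal sketch using Ollama's standard CLI and REST endpoint (the model tag `llama3` is an assumption; substitute whatever `ollama list` actually shows, and note this requires a running local Ollama server on the default port):

```shell
# List the models Ollama actually has pulled locally.
# The tag shown here (e.g. "llama3:latest") is the name the app must use.
ollama list

# Send a one-off chat request directly to the local Ollama API.
# If this fails, the problem is with Ollama/the model, not the app.
curl http://localhost:11434/api/chat -d '{
  "model": "llama3",
  "messages": [{"role": "user", "content": "Hello"}],
  "stream": false
}'
```

If the curl call returns a proper JSON response but the playground still errors out, the issue is likely the model selection in the app, which is where the "fetch Ollama model dynamically" fix above comes in.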