coleam00 / bolt.new-any-llm

Prompt, run, edit, and deploy full-stack web applications using any LLM you want!
https://bolt.new
MIT License

Bugfix Issue 259 - WIP HARD Fix using local Ollama usage #306

Open dctfor opened 5 days ago

dctfor commented 5 days ago

Fixes the issue about not being able to use local Ollama. It is a hard fix for now: there is still work to be done so the model can be chosen dynamically from the dropdown. Previously Claude was assumed; at the moment the model is hardcoded to "llama3.1:8b" and the Ollama base URL to 127.0.0.1:11434.
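For context, here is a minimal sketch of the kind of change being discussed: reading the Ollama base URL and model from the environment instead of hardcoding them. The variable names (`OLLAMA_API_BASE_URL`, `OLLAMA_MODEL`) and the `getOllamaConfig` helper are illustrative assumptions, not the actual code in this PR.

```typescript
// Illustrative sketch only -- not the PR's actual implementation.
// Env var names OLLAMA_API_BASE_URL and OLLAMA_MODEL are assumptions.
interface OllamaConfig {
  baseUrl: string;
  model: string;
}

export function getOllamaConfig(selectedModel?: string): OllamaConfig {
  // Fall back to the previously hardcoded defaults when nothing is configured.
  const baseUrl = process.env.OLLAMA_API_BASE_URL ?? 'http://127.0.0.1:11434';
  const model = selectedModel ?? process.env.OLLAMA_MODEL ?? 'llama3.1:8b';
  return { baseUrl, model };
}
```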

kekePower commented 5 days ago

It's a nice thought, but I run Ollama on another machine so it'd be better to keep it the way it is.

dctfor commented 5 days ago

Here are my recent comments on the fix for the model with Ollama. I will now double-check the faulty base URL, which might be irrelevant if the only issue was the actual model selection.

https://github.com/coleam00/bolt.new-any-llm/issues/259#issuecomment-2481266958

dctfor commented 5 days ago

It should now be good with whatever IP address and model you want to use.

chrismahoney commented 3 days ago

Taking a look at this today

mroxso commented 3 days ago

Works for me in combination with removing the .env and .env.local entries from the .dockerignore file. That's needed because otherwise Ollama doesn't get called correctly, I think. (Or you can define the variables directly in the docker-compose.yaml file, as sketched below.)
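For reference, a hedged example of defining the variable directly in docker-compose.yaml instead of relying on .env files; the service name and the OLLAMA_API_BASE_URL variable are assumptions to be adapted to the project's actual compose file.

```yaml
# Illustrative snippet only -- service and variable names are assumptions.
services:
  bolt:
    build: .
    environment:
      # Point the app at an Ollama instance reachable from inside the container.
      - OLLAMA_API_BASE_URL=http://host.docker.internal:11434
```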

chrismahoney commented 3 days ago

You identified that *.local in .dockerignore is likely causing some Docker-related issues. Please see #329 for details, and feel free to provide feedback there.