danielgross / localpilot

MIT License
3.32k stars · 141 forks

If the default model is local, the llama.cpp server doesn't run #28

Closed · limdingwen closed this 5 months ago

limdingwen commented 5 months ago

Seems to be because run_server() is only called from set_target(), which isn't called on startup, so a local default model never gets a running llama.cpp server. Slightly related to #5.
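A minimal sketch of the bug pattern described above and one possible fix. The names run_server() and set_target() come from the issue; the App class, DEFAULT_MODEL, and the "local:" prefix check are hypothetical stand-ins, not localpilot's actual code.

```python
DEFAULT_MODEL = "local:codellama"  # hypothetical default model setting


class App:
    def __init__(self):
        self.target = None
        self.server_running = False

    def run_server(self):
        # In localpilot this would launch the llama.cpp server process;
        # here we just record that it started.
        self.server_running = True

    def set_target(self, model):
        self.target = model
        if model.startswith("local:"):
            self.run_server()

    def startup_buggy(self):
        # Bug: the default target is assigned directly, bypassing
        # set_target(), so run_server() never fires for a local model.
        self.target = DEFAULT_MODEL

    def startup_fixed(self):
        # Fix: route startup through set_target() so the server starts
        # whenever the default model is local.
        self.set_target(DEFAULT_MODEL)
```

With startup_buggy() the server stays down until the user manually switches models; startup_fixed() routes the default through the same code path as a manual switch.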

limdingwen commented 5 months ago

Resolved via #16