continuedev / ggml-server-example

An example of running local models with GGML

unpin dependency versions #2

Closed · lun-4 closed this 1 year ago

lun-4 commented 1 year ago

Reasons:

All of these reasons would "fall apart" once a v1 of llama-cpp-python is declared, but I have no idea whether that will happen or not.
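For context (not part of the original PR text), here is a minimal sketch of the pin-vs-unpin tradeoff in a `requirements.txt`; the version numbers are hypothetical, chosen only for illustration:

```text
# requirements.txt

# Pinned to an exact release (the state before this PR):
# upstream bug fixes and new GGML format support are not picked up
# until someone bumps the pin by hand.
llama-cpp-python==0.1.50      # hypothetical version number

# Unpinned (what this PR proposes): pip resolves the newest release.
llama-cpp-python

# If a v1 were ever declared, a compatible-release range would be
# the usual middle ground under semantic versioning:
llama-cpp-python>=1.0,<2.0
```

Under semantic versioning, a package below 1.0 makes no API-stability promise between minor releases, which is presumably why the reasoning here hinges on whether a v1 ever ships: once it does, a bounded range recovers most of the safety that exact pins provided.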

sestinj commented 1 year ago

@lun-4 Just saw this, but definitely the right thing to do