yoheinakajima / babyagi

MIT License

Fix freeze with threads in local model #253

Closed jmtatsch closed 1 year ago

jmtatsch commented 1 year ago

Currently there is a bug (https://github.com/abetlen/llama-cpp-python/issues/110) that freezes llama_cpp when n_threads exceeds the number of (hyper)threads on the machine. This PR therefore sets the thread count to a lower value that all users will certainly have.
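A minimal sketch of the safer approach, clamping the requested thread count to the machine's logical core count instead of hardcoding it (the helper name and default are hypothetical, not from this PR):

```python
import multiprocessing


def safe_thread_count(requested: int = 8) -> int:
    """Clamp a requested n_threads value so it never exceeds the number of
    logical (hyper) threads, which triggers the llama_cpp freeze."""
    # multiprocessing.cpu_count() reports logical cores, i.e. hyperthreads
    return max(1, min(requested, multiprocessing.cpu_count()))
```

This value could then be passed as `n_threads` when constructing the llama-cpp-python model, so the cap adapts to each user's hardware rather than assuming a fixed minimum.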