Closed — ebdavison closed this issue 1 year ago
Same here!
Thanks for the issue. This might be an issue in llama.cpp. Will have a look.
Same happening on my MacBook M1 here.
Fixed it by using a GGUF model.
@ebdavison @jamartinh @wtryc Hi, the new release llama2-wrapper==0.1.13 will lock llama-cpp-python to "0.1.77" to support old GGML models. A later release will support GGUF models.
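For anyone hitting this, a minimal install sketch based on the pins mentioned above (the exact versions come from the maintainer's comment; treat them as an assumption if you are on a newer release):

```shell
# Pin llama2-wrapper to the release that locks llama-cpp-python at 0.1.77,
# which keeps compatibility with old GGML model files.
pip install "llama2-wrapper==0.1.13" "llama-cpp-python==0.1.77"
```

If you have already converted your model to GGUF, you should instead be able to use the newer releases without these pins.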
Trying to run this with CPU only; I followed the instructions to install and run it on Linux.
Here is what I get:
My environment: