ugmqu opened 1 year ago
Can you confirm that the project works via Docker? If yes, what steps did you take to solve that issue?
Same for me, I just tried it for the first time and it seems super slow.
Same here on my MacBook Pro (Apple M1 Pro) with macOS Monterey (12.6.3).
I selected the "ai-dialog" template from the web UI and clicked "Go". It has now been running for hours.
With debug on:
main: seed = 1687792550
llama_model_load: loading model from 'models/7B/ggml-model-q4_0.bin' - please wait ...
llama_model_load: invalid model file 'models/7B/ggml-model-q4_0.bin' (bad magic)
main: failed to load model from 'models/7B/ggml-model-q4_0.bin'
root@506e82aeadad:~/dalai/alpaca# exit
exit
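For what it's worth, a "bad magic" at load time usually means the model file is truncated or in a different ggml format version than the bundled binary expects. Here is a quick sanity check I ran from inside the container; it is only a sketch, and the expected magic values are what I remember from the llama.cpp sources, so treat them as assumptions:

```bash
# Dump the first 4 bytes of the model file as one little-endian 32-bit word.
# An old unversioned ggml file should show 67676d6c ("ggml"); later formats
# use 67676d66 ("ggmf") or 67676a74 ("ggjt"). Anything else points to a
# truncated download or a converter/binary version mismatch.
od -A n -t x4 -N 4 models/7B/ggml-model-q4_0.bin

# Compare the on-disk size against whatever the download source reports;
# a partial download is the most common cause of this error.
ls -l models/7B/ggml-model-q4_0.bin
```

If the magic looks wrong, re-downloading or re-converting the model is probably the first thing to try.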
Hi,
I installed the project via Docker.
A simple prompt does not complete even after an hour. I tried Alpaca 7B and 13B on both Windows and Ubuntu via Docker. My assumption is that the project is running on very limited resources for some reason.
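In case it helps with debugging the slowness: on Docker Desktop (Windows/Mac) a container only gets the CPU and RAM allocated to the Docker Desktop VM, so the allocation is worth checking first. Here is roughly what I ran; the image name "dalai", the port, and the resource numbers are just placeholders for illustration:

```bash
# How much CPU/RAM the Docker engine itself can see
# (on Docker Desktop this is the VM allocation, not the host total).
docker info --format 'CPUs: {{.NCPU}}  Memory: {{.MemTotal}}'

# One-shot snapshot of the running container's actual usage.
docker stats --no-stream

# Start the container with explicit limits so it is not starved.
docker run --cpus="8" --memory="16g" -p 3000:3000 dalai
```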
"docker stats" gives me the following output with the container running and an active prompt beeing processed: