Closed: corndog2000 closed this issue 1 year ago.
Here is my container config.
I used this as the Model Download URL: https://huggingface.co/EleutherAI/gpt-j-6b/resolve/main/pytorch_model.bin
Here is the huggingface.co URL for the model: https://huggingface.co/EleutherAI/gpt-j-6b
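As an aside, Hugging Face direct-download URLs like the one above follow a fixed `repo/resolve/revision/filename` pattern. A minimal helper (hypothetical, just for illustration) that builds such a URL:

```python
def hf_resolve_url(repo: str, filename: str, revision: str = "main") -> str:
    """Build a Hugging Face direct-download ("resolve") URL.

    Assumes the standard huggingface.co URL layout:
    https://huggingface.co/<repo>/resolve/<revision>/<filename>
    """
    return f"https://huggingface.co/{repo}/resolve/{revision}/{filename}"


# Reproduces the download URL used in this issue:
print(hf_resolve_url("EleutherAI/gpt-j-6b", "pytorch_model.bin"))
# https://huggingface.co/EleutherAI/gpt-j-6b/resolve/main/pytorch_model.bin
```

Note that a URL built this way points at whatever file the repo actually hosts; it does not guarantee the file is in a format the container can load.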
Hi, we are using llama.cpp, which currently supports only GGML versions of models: https://github.com/abetlen/llama-cpp-python. Is there a GGML version of that model?
Hi, is there a way to run a custom model that is available on Hugging Face? I tried adding the download link in the Docker setup, and it gave an error when running.