Closed: semsion closed this issue 7 months ago
Don't use the model file directly unless you want to handle the prompt template yourself. Just use the model names as you would with OpenAI. For instance, gpt-4-vision-preview and gpt-4 are already present in the AIO images.
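As a minimal sketch of the suggestion above: call the OpenAI-compatible chat/completions endpoint with a model name rather than a model file, and let the AIO image apply the prompt template. This assumes the instance listens on localhost port 8080 (the AIO image's default); adjust the URL for your deployment.

```python
import json
import urllib.request

# Assumed local endpoint; LocalAI's AIO image exposes an
# OpenAI-compatible API, by default on port 8080.
BASE_URL = "http://localhost:8080/v1/chat/completions"

def build_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat completion payload.

    Pass the model *name* (e.g. "gpt-4"), not a model file --
    the AIO image maps these names to its bundled models and
    handles the prompt template for you.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def chat(model: str, prompt: str) -> str:
    """Send the request; requires a running LocalAI instance."""
    payload = json.dumps(build_request(model, prompt)).encode()
    req = urllib.request.Request(
        BASE_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

if __name__ == "__main__":
    # Print the payload so the shape is visible even without a server.
    print(json.dumps(build_request("gpt-4", "Hello!"), indent=2))
```

If the response is still unexpected with a bare model name, the prompt template is likely not the culprit and the model/backend configuration is the next thing to check.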
OS: Ubuntu 23.04. CPU: Intel i7-11370H 4.8 GHz (x8). RAM: 32 GB.
On a local deployment, when calling the chat/completions endpoint via llama.cpp with the Docker AIO image and a basic prompt, an unexpected response is being received. This has happened repeatedly over multiple tries.
Does anyone have any information on why this could be happening?