Hi @shuxueshuxue, it should technically be possible by first setting OpenAI Proxy Base URL to https://api.openai.com/v1 and LocalAI model to ft:gpt-3.5-turbo:my-org:custom_suffix:id. You can try this for now, but it isn't ideal. It might be good to implement a "Custom Model" command that lets you specify settings like the model, base URL, temperature, token limit, etc., and then choose your custom model in the chat drop-down.
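To make the idea concrete, here is a rough sketch of what such "Custom Model" settings could look like. All of the names below are hypothetical, they are not an existing API in the plugin:

```typescript
// Hypothetical settings shape for a proposed "Custom Model" command.
// None of these identifiers exist in the plugin today; this only
// illustrates which fields the command might expose.
interface CustomModelSettings {
  model: string;      // e.g. a fine-tuned model id
  baseUrl: string;    // any OpenAI-compatible endpoint
  temperature: number;
  maxTokens: number;
}

// Example: pointing at the OpenAI API with a fine-tuned model id
// (the id here is the placeholder format from OpenAI's docs).
const myCustomModel: CustomModelSettings = {
  model: "ft:gpt-3.5-turbo:my-org:custom_suffix:id",
  baseUrl: "https://api.openai.com/v1",
  temperature: 0.7,
  maxTokens: 2048,
};
```

The chat drop-down could then list `myCustomModel` alongside the built-in providers.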
Since I don't have any fine-tuned models, below is an example where I set LocalAI model to gpt-3.5-turbo instead (notice how I selected LocalAI):