Open sebaxzero opened 4 months ago
Support for a custom OpenAI-compatible API, using the default llama_index.llms.openai (no requirement changes)
backend:
- created a "CUSTOM" model mapping that reports "gpt-4" as its model name so llama_index uses an 8k context window.
- added an elif branch for ChatModel.CUSTOM to the chat, related-queries, and validator code paths.
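The backend change can be sketched roughly as below. This is a hypothetical illustration based on the description above: `ChatModel`, `MODEL_NAME_MAP`, and `resolve_llm_kwargs` are assumed names, not the repo's actual code.

```python
import os
from enum import Enum


class ChatModel(str, Enum):
    GPT_4 = "gpt-4"
    CUSTOM = "custom"


# Hypothetical mapping: CUSTOM reports "gpt-4" as its model name so that
# llama_index selects the 8k context window for the local model.
MODEL_NAME_MAP = {
    ChatModel.GPT_4: "gpt-4",
    ChatModel.CUSTOM: "gpt-4",
}


def resolve_llm_kwargs(model: ChatModel) -> dict:
    """Return the kwargs that would be handed to llama_index's OpenAI LLM."""
    if model == ChatModel.GPT_4:
        return {"model": MODEL_NAME_MAP[model]}
    elif model == ChatModel.CUSTOM:
        # Point the OpenAI-compatible client at the custom host instead of
        # the cloud API; defaults match an LM Studio local server.
        return {
            "model": MODEL_NAME_MAP[model],
            "api_base": os.environ.get("CUSTOM_HOST", "http://localhost:1234/v1"),
            "api_key": os.environ.get("CUSTOM_API_KEY", "local"),
        }
    raise ValueError(f"unsupported model: {model}")
```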
frontend:
- added "Custom API" as a selectable model.
docker:
- added CUSTOM_HOST and CUSTOM_API_KEY env variables, with the LM Studio URL set by default (http://localhost:1234/v1)

environment variables:
CUSTOM_HOST=your-custom-host
CUSTOM_API_KEY=your-custom-api-key
.env example for lm-studio server:
CUSTOM_HOST=http://localhost:1234/v1
CUSTOM_API_KEY=local
The API key is not needed in most cases.
why use CUSTOM instead of OPENAI naming?
llama_index reads OPENAI_API_BASE while the openai client reads OPENAI_BASE_URL; adding dedicated variables makes sure the custom endpoint does not conflict with cloud OpenAI usage.
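A minimal sketch of why the separate variable names avoid the conflict; `base_url_for` is a hypothetical helper, and the precedence shown is an assumption based on the rationale above:

```python
import os

# If CUSTOM reused OPENAI_API_BASE, llama_index would also apply it to
# genuine cloud OpenAI calls. A dedicated CUSTOM_HOST keeps the two apart.
os.environ["OPENAI_API_BASE"] = "https://api.openai.com/v1"  # cloud usage
os.environ["CUSTOM_HOST"] = "http://localhost:1234/v1"       # local server


def base_url_for(model: str) -> str:
    # Hypothetical helper: only the CUSTOM model reads CUSTOM_HOST,
    # so every other model keeps hitting the cloud endpoint.
    if model == "custom":
        return os.environ.get("CUSTOM_HOST", "http://localhost:1234/v1")
    return os.environ.get("OPENAI_API_BASE", "https://api.openai.com/v1")
```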
tested with lm-studio local server using:
- llama3 8B (also some fine-tuned versions like Hermes 2 Theta)
- mistral v0.3 7B
- phi 3 mini
preview:
Can you also add Cohere support for search?
https://github.com/rashadphz/farfalle/issues/1#issue-2302369769