Closed: sebaxzero closed this 1 month ago
Hey @sebaxzero this is amazing thank you very much for the effort. I've added a new documentation page specifically for this here: https://github.com/assafelovic/gpt-researcher/blob/master/docs/docs/gpt-researcher/llms.md Please merge it to this branch and feel free to update it with all information to make it run. Thank you!
Also @sebaxzero, in your PR you removed the default embeddings. Would you mind adding another param for custom embeddings, and support for custom endpoints?
@sebaxzero and @assafelovic wouldn't it be clearer to name the environment variable OPENAI_EMBEDDING_MODEL, to be consistent with AZURE_EMBEDDING_MODEL and the OLLAMA_EMBEDDING_MODEL I have used for Ollama?
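Under the naming suggested above, each provider's model variable follows the same PROVIDER_EMBEDDING_MODEL pattern. A minimal sketch of what that could look like (the mapping and helper are hypothetical, not the repository's actual config code):

```python
import os

# Hypothetical mapping of embedding provider to its model env var,
# following the PROVIDER_EMBEDDING_MODEL naming suggested above.
EMBEDDING_MODEL_ENV = {
    "openai": "OPENAI_EMBEDDING_MODEL",
    "azure_openai": "AZURE_EMBEDDING_MODEL",
    "ollama": "OLLAMA_EMBEDDING_MODEL",
}

def embedding_model_for(provider, default=None):
    # Read the model name for a provider from its environment variable,
    # falling back to a caller-supplied default.
    return os.getenv(EMBEDDING_MODEL_ENV[provider], default)
```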
I have added the custom case to EMBEDDING_PROVIDER and set the environment variable for the model name to OPENAI_EMBEDDING_MODEL. Default values have been configured for LM Studio:
EMBEDDING_PROVIDER="custom"
OPENAI_BASE_URL="http://localhost:1234/v1"
OPENAI_API_KEY="custom"
OPENAI_EMBEDDING_MODEL="custom"
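A minimal sketch (an assumption on my part, not the merged implementation) of how the "custom" EMBEDDING_PROVIDER case could resolve these settings; the variable names match the .env above, and the fallbacks are the LM Studio defaults mentioned in the discussion:

```python
import os

def resolve_custom_embedding_config():
    # Read the custom-provider settings from the environment, falling
    # back to the LM Studio defaults shown in the .env example above.
    return {
        "base_url": os.getenv("OPENAI_BASE_URL", "http://localhost:1234/v1"),
        "api_key": os.getenv("OPENAI_API_KEY", "custom"),
        "model": os.getenv("OPENAI_EMBEDDING_MODEL", "custom"),
    }
```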
Thank you @sebaxzero !! Can you confirm you've tested it and that I can merge?
@assafelovic yes
Example of .env for LM Studio: the key does not matter but is required to load; the model name does not matter and is not required.
I just tested researching with Mistral v2 7B Q8 and nomic-embed-text f16, and it worked better than expected.
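The test above talks to LM Studio's OpenAI-compatible local server. A minimal sketch of the embeddings request that setup implies (the helper name is mine; the endpoint path and payload follow the OpenAI embeddings API, and the defaults are the .env values above):

```python
import json
import urllib.request

def build_embedding_request(text, base_url="http://localhost:1234/v1",
                            api_key="custom", model="custom"):
    # Build (without sending) an OpenAI-compatible /embeddings request
    # of the kind LM Studio serves at OPENAI_BASE_URL.
    payload = json.dumps({"model": model, "input": text}).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/embeddings",
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
```

Sending it with `urllib.request.urlopen` against a running LM Studio instance should return the usual `data[0].embedding` vector.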