GregorBiswanger opened this issue 1 month ago
@GregorBiswanger I recently added support for this... You can use embeddings hosted by any OpenAI-compliant server, like llama.cpp. When you create the `OpenAIEmbeddings` class, just pass in the `ossModel` you're using and the `ossEndpoint` of your server.
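For illustration, a minimal sketch of what that might look like, assuming the `ossModel`/`ossEndpoint` options described above; the model name, port, and the exact shape of the `createEmbeddings` call are assumptions, not documented vectra defaults:

```typescript
import { OpenAIEmbeddings } from "vectra";

async function main(): Promise<void> {
    // Point vectra at a local OpenAI-compliant server (llama.cpp shown here;
    // the model name and port are placeholders for illustration).
    const embeddings = new OpenAIEmbeddings({
        ossModel: "nomic-embed-text-v1.5",
        ossEndpoint: "http://localhost:8080",
    });

    // Request an embedding from the local server.
    const response = await embeddings.createEmbeddings("hello world");
    console.log(response);
}

main().catch(console.error);
```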
The idea was to be able to configure a local LLM via the CLI. A profile would be defined with a URI, for example http://localhost:1234 for LM Studio, which exposes an OpenAI-compatible API. You could then index any website and search it semantically via the CLI. A nice-to-have with vectra... :)
I would like to request a feature enhancement for the vectra CLI. Specifically, I would like the option to use local Large Language Models (LLMs) instead of relying solely on OpenAI's API.
Feature Details:

- **Local LLM Integration:** allow the CLI to use a locally hosted LLM that exposes an OpenAI-compatible API (e.g. LM Studio) instead of OpenAI itself.
- **Global Configuration:** define a profile with the local server's URI (e.g. http://localhost:1234 for LM Studio) once, so all CLI commands use it.
- **CLI Commands:** index any website and search it semantically via the CLI against the configured local endpoint (see the sketch after this list).
- **Benefits:** independence from OpenAI's API when indexing and searching content.
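To make the proposal concrete, here is a minimal sketch of how the CLI could resolve such a profile. Everything in it is hypothetical: the `~/.vectra/profile.json` path, the profile fields, and the `vectra profile set` command in the comment are illustrations of the request, not existing vectra behavior; only the `ossModel`/`ossEndpoint` option names come from the comment above.

```typescript
import { OpenAIEmbeddings } from "vectra";
import * as fs from "fs/promises";
import * as os from "os";
import * as path from "path";

// Hypothetical profile format, e.g. written once via something like:
//   vectra profile set lmstudio --endpoint http://localhost:1234/v1 --model <model>
interface LocalLlmProfile {
    endpoint: string; // e.g. "http://localhost:1234/v1" for LM Studio
    model: string;    // embedding model served by the local endpoint
}

// Read the hypothetical global profile from the user's home directory.
async function loadProfile(): Promise<LocalLlmProfile> {
    const file = path.join(os.homedir(), ".vectra", "profile.json");
    return JSON.parse(await fs.readFile(file, "utf8")) as LocalLlmProfile;
}

// The CLI could then build its embeddings client from the profile instead
// of defaulting to OpenAI (option names follow the maintainer's comment).
async function createEmbeddingsFromProfile(): Promise<OpenAIEmbeddings> {
    const profile = await loadProfile();
    return new OpenAIEmbeddings({
        ossModel: profile.model,
        ossEndpoint: profile.endpoint,
    });
}
```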
Thank you for considering this feature request. I believe it will greatly enhance the functionality and usability of the vectra CLI.
Cheers, Gregor