Open budiony opened 3 months ago
On the way
One important thing when developing the feature: the document embeddings must be model-based in order to match the model's context window size, otherwise they will be useless. That is, when generating embeddings, the platform should take the chosen model into consideration (or prompt the user to select the model the embeddings will be used with). This way, if a user has 5 downloaded models, there should be 5 separate embedding databases: one per model.
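A minimal sketch of the per-model idea described above, using only the standard library. The names `PerModelEmbeddingStore` and `toy_embed` are hypothetical, and the hash-based embedder is only a stand-in for calling each model's real embedding pipeline; the point is just that vectors are stored keyed by model id, so one model never reads another model's embeddings:

```python
import hashlib
from collections import defaultdict

def toy_embed(text: str, model_id: str, dim: int = 8) -> list[float]:
    # Placeholder embedder: a real implementation would run the text
    # through the chosen model. Hashing model_id + text just makes the
    # resulting vectors model-specific for this sketch.
    digest = hashlib.sha256(f"{model_id}:{text}".encode()).digest()
    return [b / 255 for b in digest[:dim]]

class PerModelEmbeddingStore:
    """One embedding database per model, keyed by model id."""

    def __init__(self):
        # model_id -> {doc_id: vector}
        self._stores = defaultdict(dict)

    def add(self, model_id: str, doc_id: str, text: str) -> None:
        self._stores[model_id][doc_id] = toy_embed(text, model_id)

    def vectors_for(self, model_id: str) -> dict:
        return self._stores[model_id]

store = PerModelEmbeddingStore()
for model in ["llama-2-7b", "mistral-7b"]:
    store.add(model, "doc1", "hello world")

# Same document, two models: each store holds its own vector.
assert store.vectors_for("llama-2-7b")["doc1"] != store.vectors_for("mistral-7b")["doc1"]
```

In a real implementation the store would also record which model produced each database, so stale embeddings can be detected and regenerated when the user switches models.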
Implement document embedding (.txt, .csv, etc.) in LM Studio, so that users will be able to use the power of AI for real. GPT4All already has this feature implemented, but it is buggy.