Closed — taranjeet closed this issue 3 months ago
I second this ask. Unfortunately, all-MiniLM-L6-v2 can only handle 256 tokens, which is fairly limiting. GPT4All's default embedding model (ggml-all-MiniLM-L6-v2-f16) can handle several thousand tokens, so it would be amazing to use it.
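For what it's worth, the 256-token cap can be worked around by chunking long passages before embedding and embedding each window separately. A minimal sketch (`chunk_tokens` is a hypothetical helper, and the input is assumed to be an already-tokenized list):

```python
def chunk_tokens(tokens, max_len=256, overlap=32):
    """Split a token list into overlapping windows of at most max_len tokens.

    overlap keeps some shared context between adjacent windows so that
    sentences cut at a boundary still appear intact in one chunk.
    """
    if max_len <= overlap:
        raise ValueError("max_len must be greater than overlap")
    step = max_len - overlap
    chunks = []
    for start in range(0, len(tokens), step):
        chunks.append(tokens[start:start + max_len])
        if start + max_len >= len(tokens):
            break
    return chunks
```

Each chunk can then be embedded on its own (and, e.g., averaged or stored separately), which is roughly what retrieval libraries do under the hood anyway.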
```
raise RepositoryNotFoundError(message, response) from e
huggingface_hub.utils._errors.RepositoryNotFoundError: 401 Client Error. (Request ID: ...)
Repository Not Found for url: https://huggingface.co/api/models/sentence-transformers/ggml-all-MiniLM-L6-v2-f16.
```
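If I read the traceback right, the 401 happens because the GGML file name is being passed to `huggingface_hub` as a repo id: `sentence-transformers/ggml-all-MiniLM-L6-v2-f16` does not exist on the Hub, since that model ships as a local GPT4All weights file. A tiny (purely illustrative) routing check shows the distinction; the function name and heuristic are my own, not part of any library:

```python
def resolve_model_source(model_name):
    """Hypothetical heuristic: GGML/GGUF file names refer to local GPT4All
    weights files, while other names are treated as Hugging Face repo ids."""
    if model_name.startswith("ggml-") or model_name.endswith((".bin", ".gguf")):
        return "gpt4all-local"
    return "huggingface-hub"
```

So the GGML name should be handed to the GPT4All loader, and only genuine repo ids (like `sentence-transformers/all-MiniLM-L6-v2`) to the Hub client.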
Using embedChain with GPT4All is something I'm looking forward to as well.
Closing this issue, as GPT4All functionality is more robust now and can handle different models. Please feel free to reopen if there's still a problem.
Hey @taranjeet, I love embedchain and want to use it in one of my projects. I'd like to use my own custom model or GPT4All models; do you have any documentation for that? The current default embedding model is all-MiniLM-L6-v2.
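In case it helps while waiting for official docs: newer embedchain versions support a provider-style config that can switch both the LLM and the embedder. A hedged sketch follows; the exact schema, provider names, and model file name are assumptions that should be checked against the embedchain docs for your installed version:

```python
# Hypothetical sketch of embedchain's config-driven setup; the exact
# schema depends on the embedchain version -- verify against the docs.
config = {
    "llm": {
        "provider": "gpt4all",
        "config": {"model": "orca-mini-3b-gguf2-q4_0.gguf"},  # example file name
    },
    "embedder": {
        "provider": "gpt4all",  # replaces the default all-MiniLM-L6-v2 embedder
    },
}

# Then (embedchain must be installed):
# from embedchain import App
# app = App.from_config(config=config)
```

If your version only accepts YAML config files, the same keys should translate directly into a YAML document passed to `App.from_config`.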