Closed: humcqc closed this issue 6 months ago
And you want to inject the Ollama model or the AllMiniLmL6V2EmbeddingModel here? If the latter, you could just change the injection point to request AllMiniLmL6V2EmbeddingModel instead of the generic EmbeddingModel.
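A minimal sketch of what that injection point could look like. The model class name comes from the thread; the surrounding consumer bean is hypothetical:

```java
import dev.langchain4j.model.embedding.AllMiniLmL6V2EmbeddingModel;
import jakarta.inject.Inject;
import jakarta.inject.Singleton;

// Hypothetical consumer bean: requesting the concrete in-process type
// instead of the generic EmbeddingModel interface avoids the ambiguity
// between the two synthetic EmbeddingModel beans.
@Singleton
public class EmbeddingModelTextClassifier {

    @Inject
    AllMiniLmL6V2EmbeddingModel embeddingModel;
}
```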
I think it's a bug that the Ollama embedding model bean is registered only with types=[dev.langchain4j.model.embedding.EmbeddingModel, java.lang.Object], so you can't qualify it by type in that case; it should also be registered as io.quarkiverse.langchain4j.ollama.OllamaEmbeddingModel.
@geoand TBH I'm not sure having a quarkus.langchain4j.embedding-model.provider property (and the same for other model types) and filtering models through it is such a good thing. Why shouldn't we simply register everything and let users qualify the provider by declaring the concrete type on the injection point? This makes it very hard to use multiple providers in a single app, which, it seems, might be more common than we thought.
I want to inject Ollama models, and just use the constructor of AllMiniLmL6V2EmbeddingModel as a parameter of my EmbeddingModelTextClassifier. But there will certainly be use cases where we would like to use multiple providers.
I think it's linked to the fact that quarkus-langchain4j automatically registers beans via the corresponding factory.
Yeah, I think we will have to rework the CDI integration somewhat: register the Ollama model with its concrete type as well, and probably get rid of the *.provider properties, because injecting a particular model class seems to mess with that quite badly too :/
Could it be linked to https://github.com/quarkiverse/quarkus-langchain4j/blob/main/core/deployment/src/main/java/io/quarkiverse/langchain4j/deployment/BeansProcessor.java#L169 ?
I'm not sure I had this behavior before I upgraded recently.
Unless you depend on the Easy RAG extension, this branch shouldn't execute.
If you're looking for a workaround that works now, I'd suggest using a builder instead of injecting via CDI.
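A sketch of that workaround under the assumption that the upstream langchain4j classes are on the classpath; the Ollama base URL and model name below are placeholders, not values from this thread:

```java
import dev.langchain4j.model.embedding.AllMiniLmL6V2EmbeddingModel;
import dev.langchain4j.model.embedding.EmbeddingModel;
import dev.langchain4j.model.ollama.OllamaEmbeddingModel;

public class ManualModels {

    // In-process model: plain constructor, no CDI involved
    static EmbeddingModel allMiniLm() {
        return new AllMiniLmL6V2EmbeddingModel();
    }

    // Ollama model: built directly; base URL and model name are illustrative
    static EmbeddingModel ollama() {
        return OllamaEmbeddingModel.builder()
                .baseUrl("http://localhost:11434")
                .modelName("nomic-embed-text")
                .build();
    }
}
```

Building the models yourself sidesteps the synthetic-bean ambiguity entirely, at the cost of losing the configuration-driven wiring.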
I'm trying to understand what deploys a synthetic EmbeddingModel bean when I add langchain4j-embeddings-all-minilm-l6-v2, because multiple embedding providers seem to work, judging by MultipleEmbeddingModelsTest.
But when I just add the langchain4j-embeddings-all-minilm-l6-v2 dependency to this test, all tests fail with this issue.
Ahaaa, so it's a bit different than I thought: we do support multiple providers if you create separate named configurations to distinguish them (I wrongly thought the model names were just to distinguish multiple configurations of one provider). Try looking at https://github.com/quarkiverse/quarkus-langchain4j/blob/main/integration-tests/multiple-providers/src/main/resources/application.properties and do something similar in your app?
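For reference, a hypothetical application.properties along those lines, with made-up configuration names e1 and e2 (the property-key pattern and the FQCN-as-provider convention both appear elsewhere in this thread):

```properties
# Named configuration "e1" backed by the in-process model
quarkus.langchain4j.e1.embedding-model.provider=dev.langchain4j.model.embedding.AllMiniLmL6V2EmbeddingModel
# Named configuration "e2" backed by the Ollama extension
quarkus.langchain4j.e2.embedding-model.provider=ollama
```

Injection points would then be qualified with @ModelName("e1") or @ModelName("e2") to select a configuration.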
In progress. It seems something is different for AllMiniLmL6V2EmbeddingModel: it can only be used as the default provider because it is not deployed with the correct qualifier. I tried with a model name e3:

```java
@Inject
@ModelName("e3")
EmbeddingModel fifthNamedModel;
```

```properties
quarkus.langchain4j.e3.embedding-model.provider=dev.langchain4j.model.embedding.AllMiniLmL6V2EmbeddingModel
```

but it did not work.
And with the default configuration:

```java
@Inject
EmbeddingModel fifthNamedModel;
```

```properties
quarkus.langchain4j.embedding-model.provider=dev.langchain4j.model.embedding.AllMiniLmL6V2EmbeddingModel
```

it worked.
It seems some qualifier management is missing from the AllMiniLmL6V2EmbeddingModel bean deployment.
Do you know which part of the code deploys the AllMiniLmL6V2EmbeddingModel automagically with the default qualifier?
That is in https://github.com/quarkiverse/quarkus-langchain4j/blob/main/core/deployment/src/main/java/io/quarkiverse/langchain4j/deployment/InProcessEmbeddingProcessor.java#L121; we probably need to attach the model name qualifier there.
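A hedged sketch of that idea using the Quarkus ArC synthetic-bean API. The constant names (EMBEDDING_MODEL_CLASS, MODEL_NAME, configName, modelSupplier) and the surrounding wiring are assumptions for illustration, not the actual InProcessEmbeddingProcessor code:

```java
// Build-step fragment: when registering the in-process embedding model,
// also attach the @ModelName qualifier so that named injection points
// (e.g. @ModelName("e3")) can resolve the bean.
SyntheticBeanBuildItem bean = SyntheticBeanBuildItem
        .configure(EMBEDDING_MODEL_CLASS)     // e.g. AllMiniLmL6V2EmbeddingModel
        .addType(EMBEDDING_MODEL_INTERFACE)   // dev.langchain4j.model.embedding.EmbeddingModel
        .addQualifier()
            .annotation(MODEL_NAME)           // io.quarkiverse.langchain4j.ModelName
            .addValue("value", configName)    // e.g. "e3"
            .done()
        .setRuntimeInit()
        .supplier(modelSupplier)              // assumed: existing supplier for the model instance
        .done();
```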
OK, I have a fix, but I already have a fork for another PR; I will open a PR after it.
@humcqc @jmartisk IIUC you guys have identified the root cause and have a fix in the works, right? Asking so I don't duplicate your work.
Hi @geoand, I have a fix, but I already have a PR open for PG Vector. Do you know how I can do two PRs at the same time for this repo?
> Do you know how I can do two PRs at the same time for this repo?

You certainly can :)
@geoand you're right :) https://github.com/quarkiverse/quarkus-langchain4j/pull/488
In one Quarkus application I want to use 2 features: when I add the quarkus-langchain4j-ollama and langchain4j-embeddings-all-minilm-l6-v2 dependencies in my Gradle file, it produces an error during initialization: 2 synthetic beans are deployed for the EmbeddingModel type.
Is there a way to avoid the automatic deployment of the langchain4j-embeddings-all-minilm-l6-v2 EmbeddingModel bean?