Open shubham-dayma opened 5 months ago
🤖
Hello @shubham-dayma! 👋 I'm Dosu, a friendly bot here to help while a human maintainer gets to your issue. I'm great at solving bugs, answering questions, and helping folks like you become contributors.
Consider me a friendly helper while we wait for a human touch. I'm currently diving into the issue you've posted and will have a full response for you shortly. Hang tight!
ollama pull dolphin-phi
my friend ;)
@StanGirard Thanks for looking into this. I already had dolphin-phi in ollama. I checked the ollama logs and it turned out that, while initializing langchain.embeddings.ollama.OllamaEmbeddings, we are not passing the model param, so by default it uses llama2. I changed this code as below and it worked.
@property
def embeddings(self):
    if self.brain_settings.ollama_api_base_url:
        return OllamaEmbeddings(
            base_url=self.brain_settings.ollama_api_base_url,
            model=self.model.replace("ollama/", ""),
        )  # pyright: ignore reportPrivateUsage=none
    else:
        return OpenAIEmbeddings()
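For clarity, the .replace() call strips Quivr's provider prefix so ollama receives the bare model name (a small illustrative check):

# Quivr stores model names as "ollama/<name>"; ollama itself expects "<name>".
assert "ollama/dolphin-phi".replace("ollama/", "") == "dolphin-phi"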
Now I am wondering how I should get the brain's model in backend/models/settings.py and update the initialization of OllamaEmbeddings here.
🤖
Based on the context provided, it seems like you've already found a solution by modifying the OllamaEmbeddings initialization in quivr_rag.py. To apply a similar fix in backend/models/settings.py, you would need access to the model name in the get_embeddings function.
One way to achieve this could be to modify the BrainSettings class to include the model name, and then use it when initializing OllamaEmbeddings. Here's an example of how you could do this:
class BrainSettings(BaseSettings):
    ...
    model: str = "llama2"  # Add this line

    class Config:
        env_file = ".env"

def get_embeddings():
    settings = BrainSettings()
    if settings.ollama_api_base_url:
        embeddings = OllamaEmbeddings(
            base_url=settings.ollama_api_base_url,
            model=settings.model,  # Use the model from settings here
        )
    else:
        embeddings = OpenAIEmbeddings()
    return embeddings
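As a quick sanity check, here's a hypothetical usage sketch. The MODEL and OLLAMA_API_BASE_URL environment variable names follow pydantic's default field-to-env mapping and are assumptions, not confirmed Quivr settings:

import os

# BaseSettings reads fields from the environment (or the .env file above),
# so the embedding model can be switched without code changes.
os.environ["MODEL"] = "dolphin-phi"
os.environ["OLLAMA_API_BASE_URL"] = "http://localhost:11434"

settings = BrainSettings()
print(settings.model)  # -> "dolphin-phi"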
This solution assumes that the OllamaEmbeddings class accepts a model parameter during initialization. If it doesn't, you might need to modify the OllamaEmbeddings class to accept and use this parameter.
Please note that this is a potential solution based on the information provided and the context of the Quivr repository. It might not be the best or the only solution. If you have any further questions or need more help, feel free to ask.
For more information, you can refer to the Quivr repository and the FastAPI documentation on settings management.
embeddings = OllamaEmbeddings(model="XXX")
Make it your own model, like "qwen:7b", "llama2" and so on, and that will solve it.
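To make that concrete, a minimal sketch (hypothetical values; base_url is ollama's standard local endpoint, and the prefix-stripping mirrors the fix shown earlier in this thread):

from langchain.embeddings.ollama import OllamaEmbeddings

# Strip Quivr's "ollama/" provider prefix so ollama gets the bare model name.
model_name = "ollama/dolphin-phi".replace("ollama/", "")

embeddings = OllamaEmbeddings(
    base_url="http://localhost:11434",  # ollama's default local endpoint
    model=model_name,
)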
Exactly!
How can I set up a local embedding model if the latest version does not have this file?
Came up with a fix: #2690
What happened?
I am trying to run the ollama/dolphin-phi model on ollama, but /chat/{chat_id}/question throws a {"error":"model 'llama2' not found, try pulling it first"} error. I don't want to load llama2 on my ollama due to the limited resources available on my system.
Relevant log output
Twitter / LinkedIn details
No response