iocron opened 5 months ago (status: Open)
Hi @iocron, it seems you are using a custom assistant implementation. Did you experience the same behaviour with lingoose's own assistant? Could you provide an example so that I can try to replicate it?
Hi @henomis, the custom assistant implementation is only a wrapper/helper function for creating a new lingoose Ollama assistant faster and more easily (by using an options type). I've created a slim version of my code to provide a simple reproduction of the issue, hope that helps: https://github.com/iocron/lingoose-issue-208
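For context, the kind of wrapper/helper described above can be sketched as an options struct passed to a single constructor. This is a minimal, self-contained sketch; the type and function names are hypothetical and are not lingoose's actual API:

```go
package main

import "fmt"

// AssistantOptions bundles the settings the wrapper accepts.
// These field names are illustrative assumptions, not lingoose types.
type AssistantOptions struct {
	Model         string
	SystemMessage string
}

// Assistant is a stand-in for the wrapped lingoose assistant.
type Assistant struct {
	opts AssistantOptions
}

// NewOllamaAssistant sketches the helper: one options struct instead of
// chained builder calls, with an assumed default model when none is set.
func NewOllamaAssistant(opts AssistantOptions) *Assistant {
	if opts.Model == "" {
		opts.Model = "llama3" // assumed default, purely illustrative
	}
	return &Assistant{opts: opts}
}

func main() {
	a := NewOllamaAssistant(AssistantOptions{SystemMessage: "You are a helpful assistant."})
	fmt.Println(a.opts.Model, "|", a.opts.SystemMessage)
}
```

The options-struct shape makes call sites shorter than repeated `.WithX()` builder calls, which is presumably why the wrapper exists.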
btw, thanks for creating such a good library with lingoose, I really appreciate the work :)
Describe the bug
The LLM produces no answer, sometimes weird answers, and at other times the system message seems to get fully ignored when combining Ollama with RAG + a custom system message. It generates a thread with two system messages (one from the RAG implementation). Maybe the system messages get mixed up somehow (I haven't gone through the lingoose implementation yet to check).
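The "thread with two system messages" state described above can be illustrated, and one possible normalization shown, with a simplified message model. This is a sketch only; `Message` and `CollapseSystemMessages` are assumed names and not lingoose's real types or behaviour:

```go
package main

import "fmt"

// Message is a simplified stand-in for a thread message.
type Message struct {
	Role    string
	Content string
}

// CollapseSystemMessages merges all system messages into a single leading
// one. This is one way the duplicated-system-message thread could be
// normalized before being sent to the model; it is not what lingoose does.
func CollapseSystemMessages(msgs []Message) []Message {
	var system string
	out := make([]Message, 0, len(msgs))
	for _, m := range msgs {
		if m.Role == "system" {
			if system != "" {
				system += "\n"
			}
			system += m.Content
			continue
		}
		out = append(out, m)
	}
	if system != "" {
		out = append([]Message{{Role: "system", Content: system}}, out...)
	}
	return out
}

func main() {
	thread := []Message{
		{Role: "system", Content: "Custom instructions."},
		{Role: "system", Content: "Use the retrieved context."}, // e.g. added by RAG
		{Role: "user", Content: "What does the document say?"},
	}
	fmt.Println(CollapseSystemMessages(thread))
}
```

Whether merging, overriding, or rejecting the second system message is correct is exactly the design question this issue raises.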
Another observed problem is that using .WithModel() on an embedder for RAG does not work: it retrieves only the first document. Removing .WithModel() makes retrieval work as expected.
To Reproduce
See the slim reproduction repository: https://github.com/iocron/lingoose-issue-208
Expected behavior
There should be at least some answer/output generated, but it is empty most of the time when using RAG with my own system message. Sometimes it produces partly random output. Here is the thread I get from it (on rare occasions it also gave me a somewhat correct answer):
Desktop (please complete the following information):
Additional information
It would be nice to be able to override, set, or clear the system message(s) in general, instead of only being able to add new ones.
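The override/clear behaviour requested above could look like the following sketch on a simplified message slice. The `Message` type and both helper names are assumptions for illustration, not part of lingoose:

```go
package main

import "fmt"

// Message mirrors a minimal chat message; not lingoose's actual type.
type Message struct {
	Role    string
	Content string
}

// ClearSystemMessages removes every system message from the thread.
func ClearSystemMessages(msgs []Message) []Message {
	out := make([]Message, 0, len(msgs))
	for _, m := range msgs {
		if m.Role != "system" {
			out = append(out, m)
		}
	}
	return out
}

// SetSystemMessage replaces any existing system messages with a single
// new one at the front of the thread, i.e. "override" semantics.
func SetSystemMessage(msgs []Message, content string) []Message {
	return append(
		[]Message{{Role: "system", Content: content}},
		ClearSystemMessages(msgs)...,
	)
}

func main() {
	thread := []Message{
		{Role: "system", Content: "old instructions"},
		{Role: "user", Content: "hello"},
	}
	fmt.Println(SetSystemMessage(thread, "new instructions"))
}
```

With helpers like these, a RAG component could set its context prompt without silently stacking a second system message on top of the user's own.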