Open InAnYan opened 1 month ago
Comments on Kernel Memory and/or Semantic Kernel:
However, they still offer interesting features, such as functions and planning
About LlamaIndex: it's nearly the same as LangChain. LangChain provides all the tools that LlamaIndex provides (though LlamaIndex may have better support for RAG)
So, in conclusion about RAG frameworks:
- I didn't like that it runs as a standalone app. (And I remember I found the Java API crappy)
When thinking in microservices (see https://12factor.net/ for a short introduction to a variant of this idea), it is good that there is no monolith. Think of a research group of 10 researchers sharing their library and working in an open, collaborative way. Then it makes sense to run a server. Semantic Kernel "just" needs a docker command to run. -- This is similar to our GROBID service... (which still misses a how-to https://github.com/JabRef/user-documentation/issues/495)
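For illustration, starting such a sidecar service really is a one-liner. A sketch for GROBID (image name and version tag are assumptions; check the GROBID documentation for the current tag):

```shell
# Start GROBID as a local REST service on port 8070
# (image/tag assumed -- verify against the GROBID docs before use).
docker run --rm --init -p 8070:8070 grobid/grobid:0.8.0
```

A Semantic Kernel / Kernel Memory deployment would look analogous: one container, one published port, and JabRef talks to it over HTTP.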
While the current implementation with langchain4j works, the limited number of available embedding models and the slow inference (CPU only, via Microsoft's ONNX runtime) raise doubts about whether the existing implementation should change in the future. If so, we want frameworks that fulfill as many of the key criteria as possible:
I am still of the opinion that we should make do with llama.cpp.
In my PR of JabRef we implemented RAG manually:
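To make "implemented RAG manually" concrete, here is a minimal, self-contained sketch of the retrieval step: stored chunks are kept as plain embedding vectors, and the top-k most similar chunks are found by a cosine-similarity scan. All names and the toy vectors are hypothetical, not taken from the actual PR; in the real implementation the vectors would come from the embedding model rather than being hard-coded.

```java
import java.util.*;
import java.util.stream.*;

// Minimal manual-RAG retrieval sketch (hypothetical, not the PR's code):
// chunks are stored with their embedding vectors, and retrieval is a
// cosine-similarity top-k scan. The retrieved chunk texts would then be
// pasted into the LLM prompt as context.
public class ManualRagSketch {

    static double cosine(double[] a, double[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            na += a[i] * a[i];
            nb += b[i] * b[i];
        }
        return dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    // Return the k chunk texts most similar to the query embedding.
    static List<String> topK(double[] query, Map<String, double[]> store, int k) {
        return store.entrySet().stream()
                .sorted(Comparator.comparingDouble(
                        (Map.Entry<String, double[]> e) -> -cosine(query, e.getValue())))
                .limit(k)
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // Toy "embeddings" for three document chunks (made-up values).
        Map<String, double[]> store = new LinkedHashMap<>();
        store.put("chunk about BibTeX fields", new double[]{0.9, 0.1, 0.0});
        store.put("chunk about PDF parsing",   new double[]{0.1, 0.9, 0.0});
        store.put("chunk about groups",        new double[]{0.0, 0.2, 0.9});

        double[] queryEmbedding = {0.85, 0.15, 0.05}; // pretend-embedded user question
        System.out.println(topK(queryEmbedding, store, 2));
    }
}
```

The point of doing this by hand is that the pipeline stays trivial to inspect and swap out; a framework mostly replaces `topK` with a vector store and adds chunking/prompting glue around it.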
Initially, I didn't think that there were special, separate RAG frameworks, and I thought LangChain provides everything we need (and that's true).
What I found while googling RAG frameworks: