Closed vkehfdl1 closed 11 months ago
I think we can plug our retrieval into LCEL easily with a Passage => text step, i.e., extract the contents from the List of Passages.
Then you can use your own PromptTemplate and RunnableMap with the retrieval results. The pipeline can be really simple.
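A minimal, dependency-free sketch of the idea above: convert a list of Passage objects to plain text, then feed the result into a prompt. `Passage`, `retrieve`, and `build_prompt` here are hypothetical stand-ins, not the project's real API; in actual LCEL code the same steps would be Runnables (a RunnableMap building the prompt inputs, piped into a PromptTemplate).

```python
# Sketch: Passage => text extraction feeding a prompt template.
# Passage / retrieve / build_prompt are hypothetical stand-ins;
# in real LCEL these would be composed with RunnableMap and PromptTemplate.
from dataclasses import dataclass
from typing import List


@dataclass
class Passage:  # stand-in for the project's Passage type
    content: str


def retrieve(query: str) -> List[Passage]:
    # placeholder retrieval step
    return [
        Passage("AutoRAG evaluates RAG pipelines."),
        Passage("LCEL composes runnables with the | operator."),
    ]


def passages_to_text(passages: List[Passage]) -> str:
    # Passage => text: extract contents from the List of Passages
    return "\n\n".join(p.content for p in passages)


PROMPT = "Answer using the context.\n\nContext:\n{context}\n\nQuestion: {question}"


def build_prompt(query: str) -> str:
    # LCEL equivalent: {"context": retrieve | passages_to_text,
    #                   "question": RunnablePassthrough()} | PromptTemplate
    context = passages_to_text(retrieve(query))
    return PROMPT.format(context=context, question=query)


print(build_prompt("What is LCEL?"))
```

The composition mirrors the LCEL pattern of building a dict of prompt inputs in parallel and piping it into a template.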
The pipeline will stay the same when we switch to LCEL: its internals will change, but the outside usage must remain the same, because the pipeline is like a cookbook for RAG workflows.
We now use only the official openai library, and rely on external services like vLLM or LocalAI for running custom models. But that is kind of hard for beginners to set up. Plus, many services are hard to use with the openai library (like the PaLM API or Hugging Face inference endpoints). So I think being compatible with LangChain is better. (Also, I think LangChain's new LCEL is kind of cool.)