Marker-Inc-Korea / RAGchain

Extension of Langchain for RAG. Easy benchmarking, multiple retrievals, reranker, time-aware RAG, and so on...
Apache License 2.0

Implement Langchain's LLM LCEL feature to LLMs. #300

Closed vkehfdl1 closed 11 months ago

vkehfdl1 commented 1 year ago

We currently use only the official openai library, relying on external services like vLLM or LocalAI to run custom models. But that is kind of hard for beginners to set up. Plus, many services are hard to use through the openai library (like the PaLM API or Hugging Face inference endpoints). So I think just being compatible with Langchain is better. (Also, I think Langchain's new LCEL is kind of cool.)

vkehfdl1 commented 11 months ago

great blog

I think we can adapt our retrieval to LCEL easily with a Passage => text step: just extract the contents from a List of Passages.

Then you can use your own PromptTemplate and RunnableMap to build on the retrievals. The pipeline can be really simple.
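As a rough sketch of that idea: the `Passage` dataclass and the mini `Runnable` below are stand-ins (not actual Langchain or RAGchain imports) that mimic how LCEL composes steps with `|`, just to show the Passage => text extraction flowing into a prompt step and an LLM step. All names here are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class Passage:
    # Minimal stand-in for RAGchain's Passage; only `content` is used here.
    content: str


class Runnable:
    # Tiny stand-in for LCEL's Runnable: `|` chains steps left to right,
    # like Langchain's `retriever | prompt | llm` pipelines.
    def __init__(self, func):
        self.func = func

    def invoke(self, x):
        return self.func(x)

    def __or__(self, other):
        return Runnable(lambda x: other.invoke(self.invoke(x)))


# Passage => text: extract contents from a List of Passages.
passages_to_text = Runnable(
    lambda passages: "\n\n".join(p.content for p in passages)
)

# A PromptTemplate-like step and a fake LLM, both hypothetical.
prompt = Runnable(lambda ctx: f"Answer using this context:\n{ctx}\nQuestion: What is RAG?")
fake_llm = Runnable(lambda p: f"[LLM answer for a {len(p)}-char prompt]")

# Compose the whole pipeline with `|`, LCEL-style.
chain = passages_to_text | prompt | fake_llm
answer = chain.invoke([Passage("RAG retrieves passages."), Passage("Then an LLM answers.")])
print(answer)
```

With real LCEL, the `Runnable` stand-in would be replaced by Langchain's own runnables, but the shape of the composition stays the same.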

The pipeline interface will stay the same when we change to LCEL: the pipeline's internals will change, but its outside usage must stay the same, because a pipeline is like a cookbook for RAG workflows.
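That stability constraint can be sketched like this (a minimal hypothetical illustration, not RAGchain's actual pipeline class): the public `run()` signature is the "cookbook" interface users call, and it must not change even when the internals are swapped out for an LCEL chain.

```python
class RAGPipeline:
    # Hypothetical sketch: callers only see run(question) -> answer.
    # The retrieve/generate internals can later be replaced by an LCEL
    # chain without touching this outside interface.
    def __init__(self, retrieve, generate):
        self._retrieve = retrieve    # internals, free to change
        self._generate = generate    # internals, free to change

    def run(self, question: str) -> str:
        # Outside usage stays identical regardless of the wiring inside.
        passages = self._retrieve(question)
        return self._generate(question, passages)


# Two different internals, same outside usage for the caller.
pipeline = RAGPipeline(
    lambda q: ["passage-1"],
    lambda q, ps: f"answer({q}, {len(ps)} passages)",
)
print(pipeline.run("What is RAG?"))
```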