cohere-ai / notebooks

Code examples and jupyter notebooks for the Cohere Platform
MIT License

adding multi purpose agentic RAG example #194

Closed co-jason closed 2 months ago

CLAassistant commented 3 months ago

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.

justin-lee-cohere commented 3 months ago

@co-jason As we discussed, I think it'd be nice to show readers the power of scaling tool use broadly. I propose the below edit to the Motivation section. Let me know your thoughts!

Tool use enhances the capabilities of LLMs by enabling them to offload tasks that are ill-suited to next-token-prediction language modeling. Tools allow models to (indirectly) perform mathematical or other deterministic operations, run code, and search a wide array of data sources. In this notebook, we demonstrate that Command R+ and LangChain can extend this paradigm quite far, encompassing diverse use cases that rely on information retrieval, programming, and human input.

In an enterprise setting, information is often distributed across a wide range of knowledge bases - for instance: cloud-based document repositories such as Notion, Jira, Google Drive, or Microsoft SharePoint; chat logs from Slack, Microsoft Teams, and others; or meeting transcripts and internal documents. By building a bespoke tool for each knowledge base, we allow Command R+ to access whichever sources of information a use case requires.
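As a rough illustration of the "one tool per knowledge base" pattern, here is a minimal sketch in plain Python. The connector functions and tool names below are hypothetical stand-ins for real Notion/Slack clients; in the notebook itself, each entry would be wrapped as a LangChain tool with a name and description that Command R+ can select from.

```python
# Hedged sketch: each knowledge base gets its own retrieval function.
# These are hypothetical stand-ins for real connectors, not actual APIs.

def search_notion(query: str) -> list[str]:
    """Stand-in for a Notion document-repository connector."""
    return [f"[notion] result for: {query}"]

def search_slack(query: str) -> list[str]:
    """Stand-in for a Slack chat-log search connector."""
    return [f"[slack] result for: {query}"]

# Registry of tools the model can choose between; with LangChain each
# entry would carry a description so the model knows when to call it.
TOOLS = {
    "search_notion": search_notion,
    "search_slack": search_slack,
}

def run_tool(name: str, query: str) -> list[str]:
    """Dispatch a model-selected tool call to the matching connector."""
    return TOOLS[name](query)
```

The point of the registry is that adding a new knowledge base is just one more entry; the model-facing interface stays the same.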

Given the retrieved results, we can then let Command R+ determine whether the information is sufficient to answer the query, or whether a web search is needed. Here, this is done via a modified version of the Correctness evaluator template from LlamaIndex. If the score is low, Command R+ asks the user for permission to run a web search to find the answer.
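The gating logic above can be sketched as a small decision function. The threshold value and function names are illustrative assumptions (LlamaIndex's Correctness evaluator scores on a 1-5 scale); the actual notebook uses the model and evaluator template directly.

```python
# Hedged sketch of the fallback flow: evaluate retrieved context,
# and only web-search if the score is low AND the user consents.

SCORE_THRESHOLD = 4.0  # assumption: scores follow LlamaIndex's 1-5 scale

def needs_web_search(score: float, threshold: float = SCORE_THRESHOLD) -> bool:
    """A low correctness score means retrieval was insufficient."""
    return score < threshold

def answer_or_escalate(score: float, ask_user) -> str:
    """Decide the next step given the evaluator score.

    ask_user is a callable returning True/False, standing in for the
    human-in-the-loop permission prompt.
    """
    if not needs_web_search(score):
        return "answer_from_internal_sources"
    if ask_user("Retrieved context looks insufficient. Run a web search?"):
        return "run_web_search"
    return "decline_to_answer"
```

Keeping the human-permission step explicit is the point: the agent never reaches outside internal knowledge bases without consent.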