
https://chat.langchain.com
MIT License

🦜️🔗 Chat LangChain

This repo is an implementation of a chatbot specifically focused on question answering over the LangChain documentation. Built with LangChain, LangGraph, and Next.js.

Deployed version: chat.langchain.com

Looking for the JS version? Click here.

The app leverages LangChain and LangGraph's streaming support and async API to update the page in real time for multiple users.
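The real-time updates come from consuming the model output as an async stream rather than waiting for a complete response. This is a minimal standalone sketch of that idea using plain `asyncio`; the `stream_tokens` helper is a hypothetical stand-in for the model's streaming API, not code from this repo:

```python
import asyncio
from typing import AsyncIterator

async def stream_tokens(answer: str) -> AsyncIterator[str]:
    """Yield an answer one token at a time, as an LLM streaming API would."""
    for token in answer.split():
        # In the real app, tokens arrive incrementally from the model;
        # yielding inside a coroutine lets other clients run in between.
        await asyncio.sleep(0)
        yield token + " "

async def collect(answer: str) -> str:
    # Each connected client consumes the async iterator independently,
    # appending chunks to the page as they arrive.
    chunks = []
    async for chunk in stream_tokens(answer):
        chunks.append(chunk)
    return "".join(chunks)

print(asyncio.run(collect("LangChain supports streaming")))
```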

Running locally

To run the app end-to-end locally, please use the code and documentation from this branch.

[!NOTE] The fully local implementation does not currently support the "Previous chats" functionality.

📚 Technical description

There are two components: ingestion and question-answering.

Ingestion has the following steps:

  1. Pull HTML from the documentation site as well as the GitHub codebase
  2. Load HTML with LangChain's RecursiveURLLoader and SitemapLoader
  3. Split documents with LangChain's RecursiveCharacterTextSplitter
  4. Create a vectorstore of embeddings, using LangChain's Weaviate vectorstore wrapper (with OpenAI's embeddings).

Question-Answering has the following steps:

  1. Given the chat history and new user input, determine what a standalone question would be using an LLM.
  2. Given that standalone question, look up relevant documents from the vectorstore.
  3. Pass the standalone question and relevant documents to the model to generate and stream the final answer.
  4. Generate a trace URL for the current chat session, as well as the endpoint to collect feedback.
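Steps 1 and 2 above can be sketched end to end. In this standalone toy, `condense_question` stands in for the LLM call that rewrites a follow-up into a standalone question, and a bag-of-words cosine similarity stands in for the real OpenAI embeddings and Weaviate lookup; all names here are hypothetical, not this repo's API:

```python
import math

def embed(text):
    """Toy bag-of-words 'embedding' standing in for OpenAI embeddings."""
    vec = {}
    for word in text.lower().split():
        vec[word] = vec.get(word, 0) + 1
    return vec

def cosine(a, b):
    dot = sum(a.get(k, 0) * b.get(k, 0) for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def condense_question(history, question):
    """Stand-in for the LLM call that rewrites a follow-up into a
    standalone question; here we simply prepend the prior topic."""
    topic = history[-1] if history else ""
    return f"{topic} {question}".strip()

def answer(history, question, docs):
    standalone = condense_question(history, question)        # step 1
    query = embed(standalone)
    best = max(docs, key=lambda d: cosine(query, embed(d)))  # step 2
    # Step 3 would pass `standalone` and `best` to the model
    # and stream the generated answer back to the client.
    return standalone, best

docs = ["LangGraph builds stateful agent graphs",
        "RecursiveCharacterTextSplitter splits documents"]
print(answer(["LangGraph"], "how does it build graphs?", docs))
```

The condense step matters because a follow-up like "how does it build graphs?" is meaningless on its own; only the rewritten standalone question retrieves the right documents.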

Documentation

Looking to use or modify this Use Case Accelerant for your own needs? We've added a few docs to aid with this: