langchain-ai / langchain

🦜🔗 Build context-aware reasoning applications
https://python.langchain.com
MIT License
89.78k stars · 14.19k forks

Issue: I'd like to use Hugging Face's Chat UI frontend with LangChain. #14207

Closed: haje01 closed this issue 4 months ago

haje01 commented 7 months ago

Issue you'd like to raise.

I'd like to use Hugging Face's Chat UI frontend with LangChain.

https://github.com/huggingface/chat-ui

But it looks like Chat UI only works with Hugging Face's Text Generation Inference (TGI) endpoints.

https://github.com/huggingface/chat-ui/issues/466

How can I serve the chain I have configured with LangChain in TGI format so I can use Chat UI?

Thank you in advance.

Suggestion:

No response

dosubot[bot] commented 7 months ago

🤖

Hello @haje01! 👋

I'm Dosu, an assistant here to help you with your LangChain related questions and issues while we wait for a human maintainer. I'm here to make your development process easier and smoother. Let's get started!

To serve the chain you have configured with LangChain so that it can be used with Hugging Face's Chat UI frontend, you can expose it with LangServe: replace the NotImplemented placeholder in the add_routes call with your chain.

Here is an example of how you can do it:

from fastapi import FastAPI
from fastapi.responses import RedirectResponse
from langserve import add_routes
from langchain.llms import HuggingFaceTextGenInference

app = FastAPI()

@app.get("/")
async def redirect_root_to_docs():
    return RedirectResponse("/docs")

# Create your chain; here, an LLM wrapper pointed at a running TGI server
# (the inference_server_url below is a placeholder)
chain = HuggingFaceTextGenInference(inference_server_url="http://localhost:8080/")

# Add your chain to the routes
add_routes(app, chain)

if __name__ == "__main__":
    import uvicorn

    uvicorn.run(app, host="0.0.0.0", port=8000)

In this example, HuggingFaceTextGenInference is an LLM wrapper that talks to a running Text Generation Inference server. You need to replace it with your actual configured chain. add_routes adds the necessary routes to your FastAPI application to serve the chain, and you can then point Hugging Face's Chat UI frontend at this server.

You can find more details about this in the LangChain repository.

As for the HuggingFaceTextGenInference class, it is a subclass of the LLM base class in the LangChain codebase, designed to interact with a Hugging Face Text Generation Inference server. It has several attributes that control the behavior of the text generation. You can find more details about this class in the LangChain repository.
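Once the server is running, a client talks to it over LangServe's REST surface: each route exposes /invoke, /batch, and /stream, with the chain's input wrapped under an "input" key and the result returned under an "output" key. Here is a minimal sketch of that request/response shape; the question text and sample response are illustrative placeholders, not output from a real server:

```python
import json

# LangServe wraps a chain's input as {"input": ...} for POST /invoke,
# and returns the chain's result under an "output" key.

def build_invoke_payload(question: str) -> str:
    """Serialize a chain input for POST <server>/invoke."""
    return json.dumps({"input": question})

def parse_invoke_output(body: str) -> str:
    """Pull the chain's result out of a /invoke response body."""
    return json.loads(body)["output"]

payload = build_invoke_payload("What is LangChain?")
print(payload)  # {"input": "What is LangChain?"}

# A sample response body shaped like a LangServe /invoke reply:
sample = '{"output": "A framework for LLM applications."}'
print(parse_invoke_output(sample))  # A framework for LLM applications.
```

This shape is what any frontend integration (including Chat UI) would need to speak in order to consume a LangServe endpoint directly.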

I hope this helps! If you have any other questions, feel free to ask.

-Dosu

This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.

If you want to continue the conversation, start your reply with @dosu-bot.

rahermur commented 3 months ago

I also find this functionality very useful. We are pushing for an integration between Hugging Face Chat UI and endpoints built with LangServe, but I think this is something that needs to be contributed to the Hugging Face project rather than here. We have already opened an MR for this: https://github.com/huggingface/chat-ui/pull/1009. Take a look if you are interested.