Closed Ajaypawar02 closed 1 year ago
Answer generated by a 🤖
I understand that you're trying to integrate a websocket with the Human Tool in LangChain, specifically replacing the standard Python `input()` function with a websocket input in your user interface.
Here's a potential solution: you can customize the `input_func` in the `HumanInputChatModel` class to use the websocket for receiving input. Here's an example of how you might do this using the `websockets` library:
import asyncio
import websockets

async def websocket_input(uri):
    async with websockets.connect(uri) as websocket:
        return await websocket.recv()

# You can modify the input_func in the HumanInputChatModel
class CustomHumanInputChatModel(HumanInputChatModel):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.input_func = websocket_input
In this example, `websocket_input` is an asynchronous function that connects to a websocket server at the specified URI and waits for a message to be received. This function is then set as the `input_func` in the `CustomHumanInputChatModel` class.
Please note that this is a simplified example and might need to be adjusted to fit your specific use case. For instance, you might need to handle exceptions, manage the websocket connection more carefully, or adapt the function to work with your specific websocket server.
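One caveat worth adding to the generated answer (this is my own note, not part of it): `websocket_input` is a coroutine, while `input_func` is invoked synchronously, so in practice you would likely need to bridge the async function into a sync callable first. A minimal sketch of one way to do that, using a dummy async source in place of a real websocket connection:

```python
import asyncio
from typing import Awaitable, Callable

def to_sync_input(async_input: Callable[[], Awaitable[str]]) -> Callable[[], str]:
    # Wraps an async input source into the plain sync callable shape that a
    # synchronous input_func expects. Only safe when no event loop is already
    # running in the calling thread (asyncio.run starts a fresh loop).
    def sync_input() -> str:
        return asyncio.run(async_input())
    return sync_input

# Dummy async source standing in for websocket.recv()
async def fake_recv() -> str:
    await asyncio.sleep(0)
    return "hello from the UI"

input_func = to_sync_input(fake_recv)
print(input_func())  # hello from the UI
```

If the caller is itself inside a running event loop (as in an ASGI app), this naive wrapper fails; see the worker-thread approach discussed later in this thread.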
This is one potential solution to your issue. If you find it helpful and believe it could be useful to others, I encourage you to make a pull request to update the LangChain documentation. This way, other users facing the same issue can benefit from your experience.
This response is meant to be useful, save you time, and share context. It is not meant to be a precise solution, but rather a starting point for your own research.
Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.
Sources
* [docs/extras/modules/agents/tools/integrations/human_tools.ipynb](https://github.com/hwchase17/langchain/blob/-/docs/extras/modules/agents/tools/integrations/human_tools.ipynb)
* [langchain/chat_models/human.py](https://github.com/hwchase17/langchain/blob/-/langchain/chat_models/human.py)
This was not helpful
Same problem, is there any way to work around this?
The way I was able to use the human tool without having to switch between async and sync code was by using this library: https://github.com/tiangolo/asyncer (see https://asyncer.tiangolo.com/tutorial/asyncify/).
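For a sense of what `asyncify`/`syncify` do under the hood, the same bridge can be sketched with the standard library alone (my own illustration, not taken from the asyncer docs): run the blocking code in a worker thread with `asyncio.to_thread`, and from that thread schedule coroutines back onto the main loop with `asyncio.run_coroutine_threadsafe`:

```python
import asyncio

async def ask_user(question: str) -> str:
    # Stand-in for an async prompt/receive round trip over a websocket
    await asyncio.sleep(0)
    return f"answer to: {question}"

def run_tool(question: str, loop: asyncio.AbstractEventLoop) -> str:
    # Synchronous tool body running in a worker thread: it schedules the
    # coroutine onto the main event loop and blocks only this worker thread
    future = asyncio.run_coroutine_threadsafe(ask_user(question), loop)
    return future.result()

async def main() -> str:
    loop = asyncio.get_running_loop()
    # Offload the blocking tool call to a thread so the event loop stays free
    return await asyncio.to_thread(run_tool, "continue?", loop)

result = asyncio.run(main())
print(result)  # answer to: continue?
```

asyncer wraps essentially this pattern (via anyio) behind a nicer API, which is what the snippets below rely on.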
In our code base we're using FastAPI's websockets with an agent, and here are some quickly hacked-up code snippets that get this working.
1.) Create a custom human tool
from typing import Callable, Optional, Any, Awaitable

from asyncer import syncify
from langchain.callbacks.manager import CallbackManagerForToolRun
from langchain.tools.base import BaseTool
from starlette.websockets import WebSocket


class CustomHumanTool(BaseTool):
    name = "human"
    description = (
        "You can ask a human for guidance when you think you "
        "got stuck or you are not sure what to do next. "
        "The input should be a question for the human."
    )
    prompt_func: Callable[[str, WebSocket], Awaitable[None]]
    input_func: Callable[[WebSocket], Awaitable[str]]
    websocket: WebSocket

    def _run(
        self,
        query: str,
        run_manager: Optional[CallbackManagerForToolRun] = None,
    ) -> str:
        synced_prompt_func = syncify(self.prompt_func)  # converts async method to sync
        synced_input_func = syncify(self.input_func)  # converts async method to sync
        synced_prompt_func(query, self.websocket)
        return synced_input_func(self.websocket)

    def _arun(
        self,
        *args: Any,
        **kwargs: Any,
    ) -> Any:
        raise NotImplementedError()
2.) Convert the synchronous method that calls the agent into an async method
import anyio
from asyncer import asyncify
from langchain.agents import AgentType, initialize_agent
from langchain.chat_models import ChatOpenAI
from starlette.websockets import WebSocket, WebSocketState


async def prompt_user(prompt: str, websocket: WebSocket):
    await websocket.send_json({'prompt': prompt})


async def input_answer(websocket: WebSocket):
    response = None
    while response is None and websocket.application_state == WebSocketState.CONNECTED:
        await anyio.sleep(2)  # I believe this should prevent blocking the loop while we wait for a response
        response = await websocket.receive_json()
    return response


def get_answer(prompt: str, websocket: WebSocket):
    llm = ChatOpenAI(
        temperature=0,
        openai_api_key=settings.openai_api_key.get_secret_value(),  # settings is our app-specific config object
        model_name="gpt-3.5-turbo-16k",
    )
    tools = [CustomHumanTool(prompt_func=prompt_user, input_func=input_answer, websocket=websocket)]
    agent = initialize_agent(
        tools=tools,
        llm=llm,
        agent=AgentType.OPENAI_FUNCTIONS,
        return_intermediate_steps=True,
    )
    return agent({'input': prompt})


# This is the entry point.
# You need to first asyncify the method that runs the agent for it to run under an anyio worker thread,
# otherwise syncify in the CustomHumanTool will fail.
async def run_prompt(prompt: str, websocket: WebSocket):
    async_get_answer = asyncify(get_answer)
    return await async_get_answer(prompt, websocket)
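The comment in the entry point above reflects a general constraint, not something specific to asyncer: synchronous code already running on the event loop's own thread cannot start or block on that loop. A small self-contained demonstration (my own, standard library only) of what goes wrong with the naive approach:

```python
import asyncio

async def inner() -> str:
    return "ok"

def naive_sync_caller() -> str:
    # Naive attempt: start a fresh event loop from sync code
    try:
        return asyncio.run(inner())
    except RuntimeError as exc:
        return f"failed: {exc}"

async def handler() -> str:
    # naive_sync_caller executes on the event loop's own thread here,
    # so asyncio.run() refuses to start a nested loop
    return naive_sync_caller()

message = asyncio.run(handler())
print(message)
```

This is why `get_answer` is pushed onto a worker thread first: only there can the tool safely block while the prompt/input coroutines run on the main loop.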
This still hasn't been used in a production environment, so please use it with care. Maybe someone with more experience with Python async/sync can chime in and provide insight into whether this can lead to any issues with the event loop. I'm still fairly new to Python and its async/threading libraries.
Hi, @Ajaypawar02! I'm Dosu, and I'm helping the LangChain team manage our backlog. I wanted to let you know that we are marking this issue as stale.
From what I understand, the issue is about using the Human Tool with a websocket and finding a solution for using async with the tool. A potential solution has been provided by me: customize the `input_func` in the `HumanInputChatModel` class to use the websocket for receiving input. Another user, wolvever, asks if there is any workaround for the same problem. rozimek90 suggests using the `asyncer` library to convert synchronous code to asynchronous code and provides code snippets for using the Human Tool with FastAPI's websockets.
Now, we would like to know if this issue is still relevant to the latest version of the LangChain repository. If it is, please let us know by commenting on the issue. Otherwise, feel free to close the issue yourself or it will be automatically closed in 7 days.
Thank you for your understanding and contribution to the LangChain project!
Issue you'd like to raise.
How can I use the Human Tool along with a websocket? Since we cannot use async with the tool, can anyone please provide a solution for this?
def please_take_test():
    score = 0
    for i, data in enumerate(questions):
        print(ask_for_info(data["question"], data["options"]))
        provided_answer = str(input())
        score += check_answer(provided_answer, data["answer"])

    first_prompt = ChatPromptTemplate.from_template(
        """You are the assistant to greet the user on the basis of score"""
    )
    info_gathering_chain = LLMChain(llm=llm, prompt=first_prompt)
    ai_chat = info_gathering_chain.run(score=score)
    return ai_chat
Here I need this `input()` to be taken directly from the user interface.
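One way to decouple the quiz logic from where the answers come from (a sketch of my own, not from the thread): make the input source an injectable callable, so a websocket-backed UI layer can supply it instead of the builtin `input()`:

```python
from typing import Callable

def please_take_test(questions, check_answer, input_func: Callable[[], str] = input) -> int:
    # input_func defaults to the builtin input(), but the UI layer can pass
    # any callable returning the user's answer (e.g. one backed by a websocket)
    score = 0
    for data in questions:
        provided_answer = str(input_func())
        score += check_answer(provided_answer, data["answer"])
    return score

# Stub standing in for answers arriving from a websocket UI
answers = iter(["a", "b"])
questions = [{"answer": "a"}, {"answer": "c"}]
check = lambda given, correct: int(given == correct)
score = please_take_test(questions, check, input_func=lambda: next(answers))
print(score)  # 1
```

The same injection point is what the `CustomHumanTool` approach above exploits: the tool's `input_func` is just such a callable, backed by the websocket.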