langchain-ai / langchain

🦜🔗 Build context-aware reasoning applications
https://python.langchain.com
MIT License

Configurable timeout for Ollama #13622

Closed shreyaskarnik closed 10 months ago

shreyaskarnik commented 10 months ago

Feature request

I have been using Ollama with LangChain for various tasks, but sometimes Ollama takes too long to respond depending on my local hardware. Would it be possible to add a configurable timeout to the Ollama base class so that I can adjust this setting and avoid timeouts when using agents? Currently, I am getting an httpx timeout error when using Ollama.
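The request above could be satisfied by adding a `timeout` field to the wrapper class and forwarding it to the underlying HTTP client on every call. The sketch below is purely illustrative: the class and field names (`OllamaSketch`, `timeout`) are assumptions for this example, not LangChain's actual API.

```python
# Hypothetical sketch of the requested feature: a configurable `timeout`
# on the Ollama wrapper that is forwarded to the HTTP client.
# `OllamaSketch` and its fields are illustrative names, not LangChain's API.
from dataclasses import dataclass
from typing import Optional


@dataclass
class OllamaSketch:
    base_url: str = "http://localhost:11434"
    model: str = "llama2"
    timeout: Optional[float] = None  # seconds; None means wait indefinitely

    def _request_kwargs(self) -> dict:
        # Every generate/chat call would pass this through, e.g.
        # httpx.post(url, json=payload, timeout=self.timeout)
        return {"timeout": self.timeout}


# A user on slow hardware could then raise the limit explicitly:
llm = OllamaSketch(model="llama2", timeout=300.0)
print(llm._request_kwargs())
```

With a default of `None` (or a generous default such as 120 seconds), existing callers keep their current behavior while users on slower hardware can opt in to a longer wait.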

Motivation

This feature will help users leverage local LLMs on a variety of hardware, and lets us experiment and build with local LLMs before turning to any third-party APIs.

Your contribution

If this is something that would be considered as a feature, I am happy to open a PR for it.

dosubot[bot] commented 10 months ago

🤖

Hello @shreyaskarnik,

Nice to meet you! I'm Dosu, an AI bot here to assist you with any questions, bugs, or guidance if you're looking to contribute. While we wait for a human maintainer to join us, I'll be doing my best to help you with your issue.

I'm reviewing your issue regarding the configurable timeout for Ollama in LangChain and will provide a full response shortly. Hang tight!

dosubot[bot] commented 10 months ago

🤖

Thank you for your interest in contributing to LangChain! Your proposed feature of adding a configurable timeout to the Ollama base class sounds like a valuable addition. It would definitely help users leverage local LLMs on different hardware and avoid timeouts when using agents.

I encourage you to go ahead and create a pull request with your proposed feature. The maintainers will review it and decide if it should be merged into the LangChain framework. Your contribution will be greatly appreciated!


This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.

If you want to continue the conversation, start your reply with @dosu-bot.

shreyaskarnik commented 10 months ago

Actually, this was not an error with the Ollama timeout (the request waits for the response as expected); the error was in the DuckDuckGo wrapper while I was trying to tweak https://gist.github.com/hwchase17/69a8cdef9b01760c244324339ab64f0c to use Ollama.