langflow-ai / langflow

⛓️ Langflow is a dynamic graph where each node is an executable unit. Its modular and interactive design fosters rapid experimentation and prototyping, pushing hard on the limits of creativity.
http://www.langflow.org
MIT License

Ollama error 404 #1819

Open · AlessioGesuelliUnicam opened 3 weeks ago

AlessioGesuelliUnicam commented 3 weeks ago

Discussed in https://github.com/langflow-ai/langflow/discussions/1710

Originally posted by **AlessioGesuelliUnicam** April 16, 2024

I don't know how to resolve this problem. Ollama is running, but Langflow returns Error 404:

> ValueError: Error: Ollama call failed with status code 404. Maybe your model is not found and you should pull the model with `ollama pull llama2:latest`.
> \- ERROR - chat - chat.py:57 Error in chat websocket: control frame too long

![Screenshot 2024-04-16 at 16:32:37 (1)](https://github.com/langflow-ai/langflow/assets/105433120/8863560f-3bfd-408e-820c-d661bbe216ee)
![Screenshot 2024-04-16 at 16:32:37 (2)](https://github.com/langflow-ai/langflow/assets/105433120/a824ed5a-306d-4cf9-9b0e-6173a76a11f5)
![Screenshot 2024-04-16 at 16:32:57 (1)](https://github.com/langflow-ai/langflow/assets/105433120/9cee97b6-dd27-496c-9a31-6680e5670198)
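If the model really is missing, the pull the error message suggests can also be scripted. A minimal sketch, assuming the `ollama` CLI is installed and on your PATH and the Ollama server is running:

```python
import subprocess

# Pull the model the error message names.
# Assumes the `ollama` CLI is on PATH and the daemon is running.
subprocess.run(["ollama", "pull", "llama2:latest"], check=True)
```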
dosubot[bot] commented 3 weeks ago

To resolve the Error 404 with Ollama in Langflow, ensure the base_url for the Ollama API in Langflow's configuration matches the actual endpoint where your Ollama API is hosted. If your Ollama API is not at the default "http://localhost:11434", update the base_url in the ChatOllamaComponent configuration to the correct URL. This step is crucial for Langflow to communicate with Ollama.
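As a quick way to rule out a model-name mismatch, you can ask the Ollama server directly which models it has. A minimal sketch using `requests` against Ollama's standard `/api/tags` model-listing endpoint (the base URL here is an assumption; adjust it to match your setup):

```python
import requests

# Assumes the default Ollama endpoint; change if your server runs elsewhere.
BASE_URL = "http://localhost:11434"

resp = requests.get(f"{BASE_URL}/api/tags")
resp.raise_for_status()

# A 404 from Langflow usually means the model name it sends does not match
# any name in this list exactly (e.g. "llama2" vs "llama2:latest").
available = [m["name"] for m in resp.json().get("models", [])]
print("Models available to Ollama:", available)
```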

For the "control frame too long" error in your websocket connection, this issue arises when the websocket frame size exceeds the protocol or server's limit. To fix this, adjust the max_size parameter in your websocket server or client configuration to accommodate larger frames. Here’s an example using the websockets library in Python:

```python
import asyncio
import websockets

async def your_handler(websocket):
    ...  # your server-side message handling here
    # (on older websockets versions the handler also receives a `path` argument)

# For a websocket server
async def run_server():
    async with websockets.serve(your_handler, "localhost", 6789, max_size=YourDesiredMaxSize):
        await asyncio.Future()  # serve forever

# For a websocket client
async def run_client():
    async with websockets.connect("ws://yourserver:port", max_size=YourDesiredMaxSize) as websocket:
        ...  # your client code here
```

Replace YourDesiredMaxSize with a value that suits your application's needs, keeping in mind the balance between frame size and memory usage.
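For reference, the websockets library's default `max_size` is 1 MiB (`2**20` bytes), and each connection may buffer up to that much per incoming message, so very large values multiply memory use across concurrent connections.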

#### Details

**Sources**
- [src/backend/langflow/components/llms/ChatOllamaEndpoint.py](https://github.com/logspace-ai/langflow/blob/dev/src/backend/langflow/components/llms/ChatOllamaEndpoint.py)
- [src/backend/base/langflow/components/model_specs/ChatOllamaEndpointSpecs.py](https://github.com/logspace-ai/langflow/blob/dev/src/backend/base/langflow/components/model_specs/ChatOllamaEndpointSpecs.py)
- [src/backend/langflow/api/v1/chat.py](https://github.com/logspace-ai/langflow/blob/dev/src/backend/langflow/api/v1/chat.py)
- [src/backend/langflow/services/chat/service.py](https://github.com/logspace-ai/langflow/blob/dev/src/backend/langflow/services/chat/service.py)
- [src/backend/base/langflow/base/io/chat.py](https://github.com/logspace-ai/langflow/blob/dev/src/backend/base/langflow/base/io/chat.py)
- [src/backend/langflow/components/llms/OllamaLLM.py](https://github.com/logspace-ai/langflow/blob/dev/src/backend/langflow/components/llms/OllamaLLM.py)
- [src/backend/base/langflow/components/model_specs/OllamaLLMSpecs.py](https://github.com/logspace-ai/langflow/blob/dev/src/backend/base/langflow/components/model_specs/OllamaLLMSpecs.py)
- [src/backend/base/langflow/services/socket/utils.py](https://github.com/logspace-ai/langflow/blob/dev/src/backend/base/langflow/services/socket/utils.py)
- [src/backend/base/langflow/services/socket/service.py](https://github.com/logspace-ai/langflow/blob/dev/src/backend/base/langflow/services/socket/service.py)
**About Dosu**

This response is meant to be useful and save you time. It is **not meant to be a precise solution**, but rather a starting point for your own research.

Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.

To continue the conversation, mention @dosu.