
[Bug]: llm.complete getting failed with Value error: Chat response object has no field usage while using ollama #17035

Closed saimaheshkrishna closed 5 hours ago

saimaheshkrishna commented 6 hours ago

Bug Description

    from llama_index.llms.ollama import Ollama

    llm = Ollama(model="llama2", request_timeout=60.0)

    response = llm.complete("What is the capital of France?")
    print(response)

The above code fails with: Value error: Chat response object has no field usage.

Version

0.12.1

Steps to Reproduce

Install the latest llama-index and llama-index-llms-ollama packages, then run the code below:

    from llama_index.llms.ollama import Ollama

    llm = Ollama(model="llama2", request_timeout=60.0)

    response = llm.complete("What is the capital of France?")
    print(response)

Relevant Logs/Tracebacks

No response

dosubot[bot] commented 6 hours ago

Hello @saimaheshkrishna! I'm here to help you with any bugs, questions, or contributions. Let's work together to solve the issue you're facing.

I couldn't find a specific solution to the issue you're facing, but there is a similar unsolved discussion that might be related: OpenAILike & Ollama issues [1].

For your specific problem, ensure that you have the correct setup and configuration for using the Ollama model with LlamaIndex. Here are some key points to check:

  1. Installation: Make sure you have installed the required packages:

    pip install llama-index-llms-ollama
    pip install llama-index
  2. Ollama Setup: Ensure the Ollama app is running on your local machine, serving models on localhost:11434, and that you have ollama>=0.1.26 installed.

  3. Model Configuration: When creating the Ollama instance, specify the model correctly. If you're using a specific version, ensure it's available.

  4. JSON Mode and Structured Outputs: If structured JSON output is needed, enable JSON mode by setting json_mode=True and attach a Pydantic class if required (see the sketch after this list).

  5. Error Handling: Check if there are any specific error handling or exception management practices you might need to implement for the llm.complete method.
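
As a rough sketch of points 3 and 4 (assuming the Ollama app is serving the llama2 model locally; the Capital schema and the prompt are invented for illustration, not taken from the issue):

    # Configure the model explicitly, enable JSON mode, and attach a schema
    # for structured output (points 3 and 4 above).
    from pydantic import BaseModel
    from llama_index.llms.ollama import Ollama

    # Hypothetical schema used only to illustrate structured output
    class Capital(BaseModel):
        country: str
        city: str

    # json_mode=True asks Ollama for JSON-formatted completions
    llm = Ollama(model="llama2", request_timeout=60.0, json_mode=True)

    # as_structured_llm wraps the LLM so responses are parsed into Capital
    structured_llm = llm.as_structured_llm(Capital)
    response = structured_llm.complete("What is the capital of France?")
    print(response)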

If these steps don't resolve the issue, it might be worth checking for any updates or bug fixes related to the Ollama model in the LlamaIndex repository, as there might be recent changes that could affect its functionality.

To continue talking to Dosu, mention @dosu.

saimaheshkrishna commented 6 hours ago

Downgrading the ollama package to 0.3.3 works; ollama 0.4.0 breaks the code.
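
For anyone hitting the same error, the workaround amounts to pinning the ollama client to the version reported working above (i.e. below 0.4.0) until the incompatibility with llama-index-llms-ollama is resolved:

    pip install ollama==0.3.3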