Closed: ZG-jian closed this issue 1 month ago
@ZG-jian Hello there! I'm here to help you with any bugs, questions, or contributions you have. Let's tackle this together!
I found a similar open issue and discussion that might help you resolve the problem:
Open Issue: [Bug]: Not able to make call to Ollama multimodal model in the cookbook. In that issue, the base_url property of the Ollama object was set explicitly to point at the running server. Here is the code snippet used:
import os
from llama_index.llms.ollama import Ollama

model = Ollama(model="gemma")
model.base_url = f'http://{os.getenv("OLLAMA_HOST")}'  # e.g. OLLAMA_HOST=localhost:11434
print(model.base_url)
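Instead of assigning the attribute after construction, the base URL can also be passed directly when the client is created; a minimal sketch, assuming OLLAMA_HOST holds something like localhost:11434 and that the gemma model has been pulled:

import os

from llama_index.llms.ollama import Ollama

# base_url and request_timeout are constructor parameters of the Ollama client
llm = Ollama(
    model="gemma",
    base_url=f'http://{os.getenv("OLLAMA_HOST", "localhost:11434")}',
    request_timeout=120.0,
)
print(llm.base_url)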
Similar Discussion: OpenAILike & Ollama issues
To resolve your issue, ensure that you have followed these steps:
Install the required package:
pip install llama-index-llms-ollama
Download and install Ollama: Visit Ollama's website to download and install the software.
Start the Ollama server:
ollama serve
Download the Llama3 model:
ollama pull llama3
Use the Ollama class in your code:
Ensure that you are correctly initializing the Ollama class with the appropriate parameters. Here is an example:
from llama_index.llms.ollama import Ollama
from llama_index.core.llms import ChatMessage
llm = Ollama(model="wangshenzhi/llama3.1_8b_chinese_chat", request_timeout=300.0)
messages = [
ChatMessage(role="system", content="You are a pirate with a colorful personality"),
ChatMessage(role="user", content="What is your name"),
]
resp = llm.chat(messages)
print(resp)
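If you want to see output as it is generated rather than waiting for the full reply, the same object also supports the standard LlamaIndex streaming interface; a small sketch reusing the llm and messages defined above:

# Stream the chat response chunk by chunk; each chunk carries the newly generated text in .delta
for chunk in llm.stream_chat(messages):
    print(chunk.delta, end="", flush=True)
print()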
Make sure that the base_url is set to the correct address where the Ollama server is running (the default is http://localhost:11434). If you have followed these steps and still face issues, verify that the server is running and accessible at the specified base_url.
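One quick way to confirm the server is reachable before involving LlamaIndex is to request the base URL directly; a minimal sketch, assuming the default address and that the requests package is installed (a running Ollama server answers the root path with a short status message):

import requests

base_url = "http://localhost:11434"  # adjust if your server listens elsewhere

# If this request fails or times out, the LlamaIndex client will not be able to connect either
resp = requests.get(base_url, timeout=5)
print(resp.status_code, resp.text)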
Additionally, you can refer to the test cases provided in the llama-index-integrations/llms/llama-index-llms-ollama/tests/test_llms_ollama.py file to ensure your setup is correct. Here is an example of a test case for synchronous completion:
def test_ollama_complete() -> None:
llm = Ollama(model="llama3")
response = llm.complete("Hello!")
assert response is not None
assert str(response).strip() != ""
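If you would rather not run the test suite, the same check can be done as a small standalone script; a sketch assuming a local server with the llama3 model already pulled (swap in whatever model you actually have):

from llama_index.llms.ollama import Ollama

llm = Ollama(model="llama3", request_timeout=120.0)
response = llm.complete("Hello!")
assert str(response).strip() != "", "Got an empty completion from the Ollama server"
print(response)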
For more detailed instructions and examples, you can refer to the Using LlamaIndex with Local Models guide and the Starter Tutorial (Local Models) [1][2].
To continue talking to Dosu, mention @dosu.
Thank you, I have resolved the issue, and your response was very timely and excellent. Have a good day!
Bug Description
Unable to connect to the local model when following the documentation example:

from llama_index.llms.ollama import Ollama
from llama_index.core.llms import ChatMessage

llm = Ollama(model="wangshenzhi/llama3.1_8b_chinese_chat", request_timeout=300.0)

resp = llm.complete("Who is Paul Graham?")
print(resp)

messages = [
    ChatMessage(role="system", content="You are a pirate with a colorful personality"),
    ChatMessage(role="user", content="What is your name"),
]
resp = llm.chat(messages)
print(resp)
curl is OK
Version
Latest version
Steps to Reproduce
refer to https://docs.llamaindex.ai/en/stable/examples/llm/ollama/
Relevant Logs/Tracebacks
No response