HAL9KKK closed this issue 1 hour ago
I'm not an expert, but try using `ReActAgentWorker`. First install the package:

```shell
pip install llama-index-llms-ollama
```

Then:

```python
from llama_index.llms.ollama import Ollama
from llama_index.core.agent import ReActAgentWorker

llm_ollama = Ollama(model="llama3", request_timeout=120.0, json_mode=True, verbose=True)
worker2 = ReActAgentWorker.from_tools([your_tool_here], llm=llm_ollama, verbose=True)
```
This seems to be going in the right direction; I still need to figure out the finalizer.
Ah, this is hitting embeddings, probably from this line.
If there are more services than a threshold, then embeddings are used to narrow down choices for the agent orchestrator.
This should probably be lazily initialized, and should also allow the user to pass in some settings here.

For now, the fix would be setting the embed_model on Settings:
```python
from llama_index.core import Settings

Settings.embed_model = ...
```
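As a sketch of what that assignment might look like with a local embedding model (the HuggingFace integration package and the `BAAI/bge-small-en-v1.5` model name are illustrative choices, not something stated above; any embed model works):

```python
from llama_index.core import Settings
# Assumes: pip install llama-index-embeddings-huggingface
from llama_index.embeddings.huggingface import HuggingFaceEmbedding

# Pointing the global Settings at a local embedding model means the
# agent orchestrator's service-narrowing step never touches OpenAI embeddings
Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")
```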
How do I use local models with LMStudio or Ollama?

I have used LMStudio with OpenAILike, but it always looks for the OpenAI API key. It returns the following error