run-llama / llama-agents

MIT License

Using Local Models #87

Closed HAL9KKK closed 1 hour ago

HAL9KKK commented 1 week ago

How do I use local models with LM Studio or Ollama?

I have tried LM Studio with OpenAILike, but it always looks for the OpenAI API key:

from llama_agents import (
    AgentService,
    AgentOrchestrator,
    ControlPlaneServer,
    SimpleMessageQueue,
)

from llama_index.core.agent import ReActAgent
from llama_index.core.tools import FunctionTool
# from llama_index.llms.openai import OpenAIlike
from llama_index.llms.openai_like import OpenAILike

# create an agent
def get_the_secret_fact() -> str:
    """Returns the secret fact."""
    return "The secret fact is: A baby llama is called a 'Cria'."

tool = FunctionTool.from_defaults(fn=get_the_secret_fact)

client = OpenAILike(
    api_key='pippo',
    base_url='http://localhost:1234/v1'
)

agent1 = ReActAgent.from_tools([tool], llm=client)
agent2 = ReActAgent.from_tools([], llm=client)

# create our multi-agent framework components
message_queue = SimpleMessageQueue(port=8000)
control_plane = ControlPlaneServer(
    message_queue=message_queue,
    orchestrator=AgentOrchestrator(llm=client),
    port=8001,
)
agent_server_1 = AgentService(
    agent=agent1,
    message_queue=message_queue,
    description="Useful for getting the secret fact.",
    service_name="secret_fact_agent",
    port=8002,
)
agent_server_2 = AgentService(
    agent=agent2,
    message_queue=message_queue,
    description="Useful for getting random dumb facts.",
    service_name="dumb_fact_agent",
    port=8003,
)

It returns the following error:

Traceback (most recent call last):
  File "C:\Users\teiiamu\AppData\Local\Programs\Python\Python311\Lib\site-packages\llama_index\core\embeddings\utils.py", line 59, in resolve_embed_model       
    validate_openai_api_key(embed_model.api_key)
  File "C:\Users\teiiamu\AppData\Local\Programs\Python\Python311\Lib\site-packages\llama_index\embeddings\openai\utils.py", line 103, in validate_openai_api_key
    raise ValueError(MISSING_API_KEY_ERROR_MESSAGE)
ValueError: No API key found for OpenAI.
Please set either the OPENAI_API_KEY environment variable or openai.api_key prior to initialization.
API keys can be found or created at https://platform.openai.com/account/api-keys
styk-tv commented 1 week ago

I'm not an expert, but try using ReActAgentWorker, like this.

First install the package: pip install llama-index-llms-ollama

from llama_index.llms.ollama import Ollama
from llama_index.core.agent import ReActAgentWorker
llm_ollama = Ollama(model="llama3", request_timeout=120.0, json_mode=True, verbose=True)

worker2 = ReActAgentWorker.from_tools([your_tool_here], llm=llm_ollama, verbose=True)

This seems to be going in the right direction; I still need to figure out the finalizer.

logan-markewich commented 1 week ago

Ah, this is hitting embeddings, probably from this line

If there are more services than a certain threshold, embeddings are used to narrow down the choices for the agent orchestrator.

This should probably be both lazily initialized and also allow the user to pass in some settings here.
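The lazy-initialization idea can be sketched generically in plain Python (an illustration of the pattern, not the actual llama-agents code; the class and names here are hypothetical):

```python
# Generic sketch of lazy initialization: defer creating the embed model (and
# any API-key validation that happens in its constructor) until embeddings
# are actually needed, instead of at orchestrator construction time.
class Orchestrator:
    def __init__(self, embed_model_factory=None):
        # Store a factory instead of the model itself; nothing is validated yet.
        self._embed_model_factory = embed_model_factory or (lambda: "default-embed-model")
        self._embed_model = None

    @property
    def embed_model(self):
        # The model is only created (and validated) on first access.
        if self._embed_model is None:
            self._embed_model = self._embed_model_factory()
        return self._embed_model


orch = Orchestrator()
# No embed model exists until something actually asks for embeddings:
assert orch._embed_model is None
assert orch.embed_model == "default-embed-model"
```

With this shape, users who never exceed the service threshold would never trigger the OpenAI key check at all.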

For now, the fix is to set the embed_model on Settings:

from llama_index.core import Settings

Settings.embed_model = ...
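For a fully local setup, one option (my assumption, not something prescribed in this thread) is to point Settings at a local HuggingFace embedding model:

```python
# Hedged sketch: swap the default OpenAI embeddings for a local model so the
# orchestrator never needs an OpenAI key. Assumes the optional package
# llama-index-embeddings-huggingface is installed; the model name below is
# one common choice, not the only possibility.
from llama_index.core import Settings
from llama_index.embeddings.huggingface import HuggingFaceEmbedding

Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")
```

Since Settings is a global, setting this once before constructing the control plane should keep the OpenAI key check from firing.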