run-llama / llama-agents

MIT License

'coroutine' object has no attribute 'output' #96

Closed: HAL9KKK closed this issue 2 weeks ago

HAL9KKK commented 2 weeks ago

I am using LM Studio and I am experiencing a lot of frustration. I would advise publishing an example of how to run this locally with LM Studio or Ollama.

I encountered this error with this code:

from llama_agents import (
    AgentService,
    AgentOrchestrator,
    ControlPlaneServer,
    LocalLauncher,
    SimpleMessageQueue,
)

from llama_index.core.agent import ReActAgent
from llama_index.core.tools import FunctionTool
from llama_index.llms.openai_like import OpenAILike
from llama_index.core import Settings
from llama_index.embeddings.huggingface import HuggingFaceEmbedding

# create an agent
def get_the_secret_fact() -> str:
    """Returns the secret fact."""
    return "The secret fact is: A baby llama is called a 'Cria'."

tool = FunctionTool.from_defaults(fn=get_the_secret_fact)

llm = OpenAILike(
    api_key='pippo',
    base_url='http://localhost:1234/v1'
)

Settings.llm = llm

agent1 = ReActAgent.from_tools([tool], llm=llm)
agent2 = ReActAgent.from_tools([], llm=llm)

Settings.embed_model = HuggingFaceEmbedding(
    model_name="BAAI/bge-small-en-v1.5"
)

# create our multi-agent framework components
message_queue = SimpleMessageQueue(port=8000)
control_plane = ControlPlaneServer(
    message_queue=message_queue,
    orchestrator=AgentOrchestrator(llm=llm),
    port=8001,
)
agent_server_1 = AgentService(
    agent=agent1,
    message_queue=message_queue,
    description="Useful for getting the secret fact.",
    service_name="secret_fact_agent",
    port=8002,
)
agent_server_2 = AgentService(
    agent=agent2,
    message_queue=message_queue,
    description="Useful for getting random dumb facts.",
    service_name="dumb_fact_agent",
    port=8003,
)

# launch it
launcher = LocalLauncher([agent_server_1, agent_server_2], control_plane, message_queue)
result = launcher.launch_single("What is the secret fact?")

print(f"Result: {result}")

This is the result:

C:\Users\teiiamu\AppData\Local\Programs\Python\Python311\Lib\site-packages\huggingface_hub\file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(

===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please run

python -m bitsandbytes

 and submit this information together with your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
================================================================================
bin C:\Users\teiiamu\AppData\Local\Programs\Python\Python311\Lib\site-packages\bitsandbytes\libbitsandbytes_cpu.so
C:\Users\teiiamu\AppData\Local\Programs\Python\Python311\Lib\site-packages\bitsandbytes\cextension.py:34: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.
  warn("The installed version of bitsandbytes was compiled without GPU support. "
'NoneType' object has no attribute 'cadam32bit_grad_fp32'
CUDA SETUP: Loading binary C:\Users\teiiamu\AppData\Local\Programs\Python\Python311\Lib\site-packages\bitsandbytes\libbitsandbytes_cpu.so...
argument of type 'WindowsPath' is not iterable
INFO:llama_agents.message_queues.simple - Consumer AgentService-dd21919a-2a62-4d23-80ba-15a974ccd8a0: secret_fact_agent has been registered.
INFO:llama_agents.message_queues.simple - Consumer AgentService-253bc191-7c6e-4412-ab27-ad81d3460150: dumb_fact_agent has been registered.
INFO:llama_agents.message_queues.simple - Consumer 02b86867-d3a1-43c6-9375-b7119038d93c: human has been registered.
INFO:llama_agents.message_queues.simple - Consumer ControlPlaneServer-96aae888-5936-4f5f-898f-d504a0736502: control_plane has been registered.
INFO:llama_agents.services.agent - secret_fact_agent launch_local
INFO:llama_agents.services.agent - dumb_fact_agent launch_local
INFO:llama_agents.message_queues.base - Publishing message to 'control_plane' with action 'ActionTypes.NEW_TASK'
INFO:llama_agents.message_queues.simple - Launching message queue locally
C:\Users\teiiamu\AppData\Local\Programs\Python\Python311\Lib\site-packages\llama_index\core\llms\llm.py:676: RuntimeWarning: coroutine 'Dispatcher.span.<locals>.async_wrapper' was never awaited
  output = AgentChatResponse(
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
INFO:llama_agents.message_queues.base - Publishing message to 'human' with action 'ActionTypes.COMPLETED_TASK'
INFO:llama_agents.message_queues.simple - Successfully published message 'control_plane' to consumer.
INFO:llama_agents.message_queues.simple - Successfully published message 'human' to consumer.
Result: An error occurred while running the tool: 'coroutine' object has no attribute 'output'
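For context, this error message is the generic Python failure mode that appears when an async function is called without being awaited: the call returns a coroutine object, not the result, and the coroutine object has no 'output' attribute. A minimal sketch of the mechanism, unrelated to llama-index internals:

```python
import asyncio


class Result:
    """Stand-in for a result object with an 'output' attribute."""

    def __init__(self, output):
        self.output = output


async def run_tool():
    # Simulates an async tool call that produces a result object.
    return Result("secret fact")


# Calling an async function without awaiting it returns a coroutine
# object, which has no 'output' attribute.
coro = run_tool()
try:
    coro.output
except AttributeError as e:
    print(e)  # 'coroutine' object has no attribute 'output'
coro.close()  # silence the 'was never awaited' RuntimeWarning

# Awaiting the coroutine yields the actual result.
result = asyncio.run(run_tool())
print(result.output)  # secret fact
```

This matches the RuntimeWarning in the log ("coroutine ... was never awaited"): somewhere a coroutine was passed along as if it were the finished AgentChatResponse.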
teds-lin commented 2 weeks ago

When I upgraded the llama-index-core version from 0.10.51 to 0.10.52.post1, it resolved issue #93, but the same error as in this issue appeared.

This is my log message:


INFO:llama_agents.message_queues.simple - Consumer AgentService-937a8c9d-7164-47e8-acc1-ff5440d1b4c9: dumb_fact_agent has been registered.
INFO:llama_agents.message_queues.simple - Consumer c5892a8a-fa83-447a-950c-fbd3f924d676: human has been registered.
INFO:llama_agents.message_queues.simple - Consumer ControlPlaneServer-1c6e7b50-23c6-48e0-afa8-858e77ef2b9d: control_plane has been registered.
INFO:llama_agents.services.agent - dumb_fact_agent launch_local
INFO:llama_agents.message_queues.base - Publishing message to 'control_plane' with action 'ActionTypes.NEW_TASK'
INFO:llama_agents.message_queues.simple - Launching message queue locally
D:\ProgramData\Miniconda3\envs\test\Lib\site-packages\llama_index\core\llms\llm.py:684: RuntimeWarning: coroutine 'Dispatcher.span.<locals>.async_wrapper' was never awaited
  output = AgentChatResponse(
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
INFO:llama_agents.message_queues.base - Publishing message to 'human' with action 'ActionTypes.COMPLETED_TASK'
INFO:llama_agents.message_queues.simple - Successfully published message 'control_plane' to consumer.
INFO:llama_agents.message_queues.simple - Successfully published message 'human' to consumer.
Result: An error occurred while running the tool: 'coroutine' object has no attribute 'output'
The following are the packages I currently have installed and their corresponding versions:
llama-agents 0.0.4
llama-cloud 0.0.6
llama-index 0.10.52
llama-index-agent-openai 0.2.7
llama-index-cli 0.1.12
llama-index-core 0.10.52.post1
llama-index-embeddings-gemini 0.1.8
llama-index-embeddings-openai 0.1.10
llama-index-indices-managed-llama-cloud 0.2.2
llama-index-legacy 0.9.48
llama-index-llms-gemini 0.1.11
llama-index-llms-openai 0.1.23
llama-index-multi-modal-llms-openai 0.1.6
llama-index-program-openai 0.1.6
llama-index-question-gen-openai 0.1.3
llama-index-readers-file 0.1.25
llama-index-readers-llama-parse 0.1.4
llama-parse 0.4.4
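A package/version table like the one above can be regenerated from the current environment with only the standard library; a small sketch:

```python
from importlib.metadata import distributions


def is_llama_package(name):
    """Match distribution names from the llama ecosystem."""
    return name.lower().startswith("llama")


# Print name and version for every installed llama-* distribution.
for dist in sorted(distributions(), key=lambda d: (d.metadata["Name"] or "").lower()):
    name = dist.metadata["Name"] or ""
    if is_llama_package(name):
        print(name, dist.version)
```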
logan-markewich commented 2 weeks ago

Yeah, that PR wasn't meant to fix this one. I see the issue for this as well, but ngl open-source models suck at being agents, so I'm struggling to confirm whether I've fully fixed it or llama3 just sucks 😁

logan-markewich commented 2 weeks ago

This is fixed in llama-index-core v0.10.52.post2

Run pip install -U llama-index-core to get it.
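As a quick sanity check after upgrading, the installed version can be compared against the fixed release. A sketch; the only non-obvious part is treating a missing .postN suffix as post 0:

```python
from importlib.metadata import PackageNotFoundError, version


def parse_core_version(v):
    """Parse '0.10.52.post1' -> (0, 10, 52, 1); plain '0.10.52' -> (0, 10, 52, 0)."""
    parts = v.replace(".post", ".").split(".")
    return tuple(int(p) for p in parts) + (0,) * (4 - len(parts))


def core_is_fixed(required="0.10.52.post2"):
    """True if the installed llama-index-core is at least the fixed release."""
    try:
        installed = version("llama-index-core")
    except PackageNotFoundError:
        return False
    return parse_core_version(installed) >= parse_core_version(required)


print(core_is_fixed())
```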