run-llama / llama_index

LlamaIndex is a data framework for your LLM applications
https://docs.llamaindex.ai
MIT License

[Bug]: SimpleLLMHandler can not import #16285

Open loneystar1983 opened 2 hours ago

loneystar1983 commented 2 hours ago

Bug Description

SimpleLLMHandler._print_llm_event contains the import from llama_index.core.llms import ChatMessage, but ChatMessage has been moved to llama_index.core.base.llms.types, so the handler fails with:

ImportError: cannot import name 'ChatMessage' from 'llama_index.core.llms' (unknown location)
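
As a quick sanity check on the same install, the new location imports fine (a minimal sketch; the example message is only illustrative):

from llama_index.core.base.llms.types import ChatMessage  # current canonical location

msg = ChatMessage(role="user", content="how is the weather?")
print(msg)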

Version

0.11.14

Steps to Reproduce

from llama_index.core.callbacks.global_handlers import set_global_handler

# Set the global handler
set_global_handler(eval_mode="simple")

from llama_index.llms.openai import OpenAI

llm = OpenAI(model='gpt-35-turbo-1106', api_key='xxx', api_base='xxxx')
llm.complete("how is the weather?")
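
The same failing code path can also be reached without an API key by calling the handler directly with a completion-style payload (a minimal sketch against llama-index-core 0.11.x; the prompt/completion strings are placeholders):

from llama_index.core.callbacks.schema import CBEventType, EventPayload
from llama_index.core.callbacks.simple_llm_handler import SimpleLLMHandler

handler = SimpleLLMHandler()
# on_event_end() forwards LLM payloads to _print_llm_event(), which performs the failing import
handler.on_event_end(
    CBEventType.LLM,
    payload={
        EventPayload.PROMPT: "how is the weather?",
        EventPayload.COMPLETION: "placeholder completion",
    },
)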

Relevant Logs/Tracebacks

File d:\Program Files\python-3.9.11\lib\site-packages\llama_index\core\instrumentation\dispatcher.py:265, in Dispatcher.span.<locals>.wrapper(func, instance, args, kwargs)
    257 self.span_enter(
    258     id_=id_,
    259     bound_args=bound_args,
   (...)
    262     tags=tags,
    263 )
    264 try:
--> 265     result = func(*args, **kwargs)
    266 except BaseException as e:
    267     self.event(SpanDropEvent(span_id=id_, err_str=str(e)))

File d:\Program Files\python-3.9.11\lib\site-packages\llama_index\core\llms\callbacks.py:485, in llm_completion_callback.<locals>.wrap.<locals>.wrapped_llm_predict(_self, *args, **kwargs)
    483         return wrapped_gen()
    484     else:
--> 485         callback_manager.on_event_end(
    486             CBEventType.LLM,
    487             payload={
    488                 EventPayload.PROMPT: prompt,
    489                 EventPayload.COMPLETION: f_return_val,
    490             },
    491             event_id=event_id,
    492         )
    493         dispatcher.event(
    494             LLMCompletionEndEvent(
    495                 prompt=prompt,
   (...)
    498             )
    499         )
    501 return f_return_val

File d:\Program Files\python-3.9.11\lib\site-packages\llama_index\core\callbacks\base.py:136, in CallbackManager.on_event_end(self, event_type, payload, event_id, **kwargs)
    134 for handler in self.handlers:
    135     if event_type not in handler.event_ends_to_ignore:
--> 136         handler.on_event_end(event_type, payload, event_id=event_id, **kwargs)
    138 if event_type not in LEAF_EVENTS:
    139     # copy the stack trace to prevent conflicts with threads/coroutines
    140     current_trace_stack = global_stack_trace.get().copy()

File d:\Program Files\python-3.9.11\lib\site-packages\llama_index\core\callbacks\simple_llm_handler.py:70, in SimpleLLMHandler.on_event_end(self, event_type, payload, event_id, **kwargs)
     68 """Count the LLM or Embedding tokens as needed."""
     69 if event_type == CBEventType.LLM and payload is not None:
---> 70     self._print_llm_event(payload)

File d:\Program Files\python-3.9.11\lib\site-packages\llama_index\core\callbacks\simple_llm_handler.py:29, in SimpleLLMHandler._print_llm_event(self, payload)
     28 def _print_llm_event(self, payload: dict) -> None:
---> 29     from llama_index.core.llms import ChatMessage
     31     if EventPayload.PROMPT in payload:
     32         prompt = str(payload.get(EventPayload.PROMPT))

ImportError: cannot import name 'ChatMessage' from 'llama_index.core.llms' (unknown location)
dosubot[bot] commented 2 hours ago

Hey @loneystar1983! I'm here to help you with any bugs, questions, or contributions. Let's squash those bugs together!

To resolve the ImportError for ChatMessage in SimpleLLMHandler, you need to update the import statement to reflect the new location of ChatMessage. The correct import statement should be:

from llama_index.core.base.llms.types import ChatMessage

Here's the updated SimpleLLMHandler class with the correct import:

from typing import Any, Dict, List, Optional, cast

from llama_index.core.callbacks.base_handler import BaseCallbackHandler
from llama_index.core.callbacks.schema import CBEventType, EventPayload

class SimpleLLMHandler(BaseCallbackHandler):
    """Callback handler for printing llms inputs/outputs."""

    def __init__(self) -> None:
        super().__init__(event_starts_to_ignore=[], event_ends_to_ignore=[])

    def start_trace(self, trace_id: Optional[str] = None) -> None:
        return

    def end_trace(
        self,
        trace_id: Optional[str] = None,
        trace_map: Optional[Dict[str, List[str]]] = None,
    ) -> None:
        return

    def _print_llm_event(self, payload: dict) -> None:
        from llama_index.core.base.llms.types import ChatMessage

        if EventPayload.PROMPT in payload:
            prompt = str(payload.get(EventPayload.PROMPT))
            completion = str(payload.get(EventPayload.COMPLETION))

            print(f"** Prompt: **\n{prompt}")
            print("*" * 50)
            print(f"** Completion: **\n{completion}")
            print("*" * 50)
            print("\n")
        elif EventPayload.MESSAGES in payload:
            messages = cast(List[ChatMessage], payload.get(EventPayload.MESSAGES, []))
            messages_str = "\n".join([str(x) for x in messages])
            response = str(payload.get(EventPayload.RESPONSE))

            print(f"** Messages: **\n{messages_str}")
            print("*" * 50)
            print(f"** Response: **\n{response}")
            print("*" * 50)
            print("\n")

    def on_event_start(
        self,
        event_type: CBEventType,
        payload: Optional[Dict[str, Any]] = None,
        event_id: str = "",
        parent_id: str = "",
        **kwargs: Any,
    ) -> str:
        return event_id

    def on_event_end(
        self,
        event_type: CBEventType,
        payload: Optional[Dict[str, Any]] = None,
        event_id: str = "",
        **kwargs: Any,
    ) -> None:
        """Count the LLM or Embedding tokens as needed."""
        if event_type == CBEventType.LLM and payload is not None:
            self._print_llm_event(payload)
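
To exercise the patched handler without the global-handler helper, it can also be attached directly through a CallbackManager (an illustrative sketch, not part of the original report):

from llama_index.core import Settings
from llama_index.core.callbacks import CallbackManager

# Every LLM event will now be printed by the patched handler
Settings.callback_manager = CallbackManager([SimpleLLMHandler()])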

This should resolve the import error you are encountering [1][2].
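
If editing the installed simple_llm_handler.py is not practical, a runtime shim is another stopgap: re-exporting ChatMessage on llama_index.core.llms lets the handler's deferred local import resolve (a hedged workaround sketch, not an official fix):

import llama_index.core.llms as core_llms
from llama_index.core.base.llms.types import ChatMessage

# Only needed on installs where the re-export is missing or broken
if not hasattr(core_llms, "ChatMessage"):
    core_llms.ChatMessage = ChatMessage

The shim has to run before the first LLM call so the handler's import finds the attribute.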

To continue talking to Dosu, mention @dosu.