Chainlit / literalai-python

https://docs.getliteral.ai/
Apache License 2.0

Instrumentation LlamaIndex #122

Closed — amait41 closed this issue 1 month ago

amait41 commented 1 month ago

Hello,

I'm running into some difficulties with the LlamaIndex instrumentation.

We observe that the instrumentation works with query engines but not with certain other calls, e.g. agent.chat, llm.predict_and_call, etc.

In addition, some threads produce errors. (screenshot attached)

With the following message in the console: console_error.txt

Environment: python=3.12, literalai==0.0.622, llama-index==0.11.10

Here is a code example whose steps I would like to log:

from llama_index.core import Settings  # needed for Settings.llm below
from llama_index.core.llms import ChatMessage
from llama_index.core.tools import FunctionTool
from literalai import LiteralClient

lc = LiteralClient(
    api_key=LITERAL_API_KEY
)

lc.instrument_llamaindex()

def multiply(a: float, b: float) -> float:
    """Multiply two numbers and return their product."""
    return a * b

def add(a: float, b: float) -> float:
    """Add two numbers and return their sum."""
    return a + b

tools = [
    FunctionTool.from_defaults(fn=multiply),
    FunctionTool.from_defaults(fn=add)
]

query = "What is 20+(2*4)? Use a tool to calculate every step."
chat_history = [
    ChatMessage(
        role="user",
        content=query)
]

with lc.thread(name="test instru") as thread:
    lc.message(chat_history[-1].content)
    with lc.run(thread_id=thread.id) as run:
        response = Settings.llm.predict_and_call(
            tools,
            chat_history=chat_history,
            error_on_no_tool_call=False,
        )

    lc.message(str(response))

Best regards, Adrien

desaxce commented 1 month ago

Thanks @amait41 for the report, I'm looking into it.

willydouhard commented 1 month ago

Can you provide the thread JSON that is causing this issue? You can find it in the Network tab of the Chrome dev tools: look for a GraphQL query named threadDetail. That would help us reproduce.

amait41 commented 1 month ago

Thanks for your help, here is the file: graphql_query.txt. Let me know if I didn't provide the correct file :)

willydouhard commented 1 month ago

This is great, thank you!

After inspecting the thread, we can see that there is a circular dependency between step c693d870-5cdc-4bfc-b14a-2766b5c8a7a8 of type retrieval and step d09ce25c-7e52-580f-8820-a9a97e2cb17c of type run.

This should never happen. Is this happening only when using the LlamaIndex instrumentation?

In the next release of the platform the UI will gracefully handle this edge case but we need to also find the root cause.

desaxce commented 1 month ago

> In the next release of the platform the UI will gracefully handle this edge case but we need to also find the root cause.

I managed to reproduce circular dependencies with a call to engine.query() when two calls to literalai_client.instrument_llamaindex() were made beforehand. Enforced single instrumentation in this PR.
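One common way to enforce single instrumentation is an idempotence guard on the client, so the second call becomes a no-op instead of registering handlers twice. A minimal sketch of that pattern (hypothetical names, not the actual SDK code):

```python
class Client:
    """Sketch of an idempotent instrument_llamaindex(); illustrative only."""

    def __init__(self) -> None:
        self._instrumented = False
        self.handlers: list[str] = []

    def instrument_llamaindex(self) -> None:
        if self._instrumented:
            return  # already instrumented: avoid double-registered handlers
        self._instrumented = True
        self.handlers.append("llamaindex-span-handler")

c = Client()
c.instrument_llamaindex()
c.instrument_llamaindex()  # second call is a no-op
print(len(c.handlers))  # prints 1
```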

desaxce commented 1 month ago

@amait41 We have just released:

Regarding this last point, the error came from a double instrumentation of LlamaIndex, which is now impossible with the latest version of the SDK.