Open bukosabino opened 2 weeks ago
Hey @bukosabino, thanks for reaching out. Can you confirm that it works if you pass a dummy ID to "userId"? We will make a fix for this.
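For illustration, a minimal sketch of that workaround: build the feedback payload with a placeholder string instead of `None` for "userId" when the app has no login. The helper name and the `"anonymous"` placeholder are assumptions for the sketch, not part of the SDK.

```python
# Hypothetical helper sketching the suggested workaround: substitute a
# dummy string for "userId" instead of None, since the app has no login.
def build_feedback_payload(span_id, trace_id, user_score, user_id=None):
    return {
        "spanId": span_id,
        "traceId": trace_id,
        "userScore": user_score,
        # Dummy ID (per the suggestion above) instead of None.
        "userId": user_id or "anonymous",
    }

payload = build_feedback_payload("span-123", "trace-456", 1)
print(payload["userId"])  # -> anonymous
```

The resulting dict has the same shape as the `data` dicts in the snippets below, just with a non-null user ID.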
Hey @bukosabino!
Both `span_id` & `trace_id` are only exposed if the function is used within the `@with_langtrace_root_span()` decorator; check the User Feedback Docs for more context.
Can you try out the following?

```python
@APP.get("/qa_feedback")
@with_langtrace_root_span()
async def qa_feedback(span_id: str, trace_id: str, user_score: int):
    data = {
        "spanId": span_id, "traceId": trace_id, "userScore": user_score, "userId": None
    }
    SendUserFeedback().evaluate(data=data)
    return {"feedback": "OK"}
```

Let me know if it fixes the issue!
Hi @alizenhom and @karthikscale3,
Thanks for your answers!
I have included the `with_langtrace_root_span()` decorator and I can see the trace on the dashboard.
However, the trace is empty. Also, I was expecting to see the feedback trace linked to the LLM trace. Is this the expected behavior?
My code:
```python
@APP.get("/qa_feedback")
@with_langtrace_root_span("Feedback")
async def qa_feedback(span_id: str, trace_id: str, user_score: int):
    data = {
        "spanId": span_id, "traceId": trace_id, "userScore": user_score, "userId": None
    }
    SendUserFeedback().evaluate(data=data)
    return {"feedback": "OK"}
```
Hey @bukosabino!
Yes, it's expected that the feedback does not get traced, as it's an external call.
But you have a good point: it would be nice to link/group both the LLM trace and the user feedback on the UI (traces tab). Let me check it out and I will get back to you.
Sorry, I forgot to attach the pictures.
Hey @bukosabino!
After some investigation, here are some key points. First, I am assuming you have a function that uses Qdrant and OpenAI; I will call this function `do_llm_stuff()`.

- Keep the `with_langtrace_root_span()` decorator for the `qa_feedback` function.
- Use the `inject_additional_attributes()` function to see the feedback inside the trace; see the docs for more context.

Here is a working example:
```python
from langtrace_python_sdk import (
    with_langtrace_root_span,
    inject_additional_attributes,
    SendUserFeedback,
)

@with_langtrace_root_span("LLM")
def main(span_id=None, trace_id=None, user_score=None):
    data = {
        "spanId": span_id, "traceId": trace_id, "userScore": user_score, "userId": "dummy_id"
    }
    # do_llm_stuff is your function that calls Qdrant and OpenAI
    inject_additional_attributes(do_llm_stuff, data)
    SendUserFeedback().evaluate(data=data)
```
Can you try out this snippet and see if it fixes the issue?
Note that without `with_langtrace_root_span`, the trace is not shown on the dashboard. Thanks for the support!
Hey @bukosabino!
Following up here again: the `qa_feedback` function does not have any LLM calls to trace (but in the background the evaluation is getting saved correctly). Again, I will be working on fixing this and will keep you posted.
Thank you for your feedback, it has been really helpful. If you need any assistance, I will be more than happy to help!
Hello everyone,
I am trying to implement the trace user feedback, and it seems to be working well (the endpoint returns a 200 response code). However, I don't see the spans/traces on the online dashboard.
I think the problem is related to the `userId`, which is `None` in my case, because I do not use login in my application.