langchain-ai / langsmith-docs

Documentation for langsmith
https://docs.smith.langchain.com/

Issue: how to trace the current run and attach feedback #405

Closed: CronusCipher closed this issue 2 days ago

CronusCipher commented 3 weeks ago

Hi, I am working on an LLM application built on a RAG architecture using LangChain features, with Streamlit as the front end. Now I want to use LangSmith to trace each answer and collect feedback and a score from users. I have already connected my project to the LangSmith dashboard with the API key, and I can see each prompt the user enters. Unfortunately, I could not attach the feedback to the answer. I have a thumbs-up/thumbs-down feature built with the streamlit-feedback package, and I found the run_id by collecting traces in LangChain. But whenever I pass this run_id to the create_feedback() function, I get this error:

TypeError: Client.create_feedback() missing 1 required positional argument: 'run_id'

I checked the LangSmith and LangChain documentation but could not find anything related to this issue. Thanks for your help in advance.

st.write("Initialize Retriever")
vector_storage_dir = "./vectorstorage/English/faiss"

retriever = get_standard_retriever(dir=vector_storage_dir)

if "qachain" not in st.session_state:
    st.session_state["qachain"] = initialize_qa_chain(retriever)

status.update(label="Initialization complete!", state="complete", expanded=False)

with st.chat_message("assistant"):
    response = st.session_state["qachain"](prompt, callbacks=[run_collector])  # ONLY FEATURE USED FROM RAG

    with tracers.context.collect_runs() as cb:  # collect data for each run
        run_id = run_collector.traced_runs[0].id
        client = Client

    message_placeholder = st.empty()
    text = ""
    for chunk in response["result"]:
        text += chunk 
        message_placeholder.markdown(text + "β–Œ")
        time.sleep(0.015)  # Adjust the delay time (in seconds) as needed

    if run_id:
        client.create_feedback(
            run_id,
            key="feedback-key",
            score=1.0,
            comment="comment",
        )

        # Define score mappings for both "thumbs" and "faces" feedback systems
        score_mappings = {
            "thumbs": {"πŸ‘": 1, "πŸ‘Ž": 0},
            "faces": {"πŸ˜€": 1, "πŸ™‚": 0.75, "😐": 0.5, "πŸ™": 0.25, "😞": 0},
        }

        # Get the score mapping based on the selected feedback option
        scores = score_mappings["thumbs"]

st.session_state.messages.append({"role": "assistant", "content": response["result"]})

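For reference, the wiring I'm aiming for with the streamlit-feedback thumbs widget is roughly the following (just a sketch: the widget key "feedback" is a placeholder, and client, run_id, and score_mappings come from the code above):

```python
from streamlit_feedback import streamlit_feedback

# Render the thumbs widget; once clicked it returns something like
# {"type": "thumbs", "score": "πŸ‘", "text": None}
feedback = streamlit_feedback(feedback_type="thumbs", key="feedback")

if feedback and run_id:
    score = score_mappings["thumbs"][feedback["score"]]  # map πŸ‘/πŸ‘Ž to 1/0
    client.create_feedback(
        run_id,
        key="feedback-key",
        score=score,
        comment="thumbs feedback from the Streamlit app",
    )
```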

shershen08 commented 1 week ago

Maybe someone can enlighten me as to what a feedback-key means? The docs are silent (https://docs.smith.langchain.com/old/tracing/faq/logging_feedback); it just appears in the code example, with no details on where to get it. I've tried the API token (LANGCHAIN_API_KEY) but with no luck:

error.js:88 Uncaught (in promise) Error: Failed to create feedback. Received status [401]: . Server response: {"detail":"Invalid token"}
    at raiseForStatus (error.js:88:11)
    at async Client.createFeedback (client.js:1923:9)
hinthornw commented 2 days ago

> maybe someone can enlighten me what a feedback-key means?

Hard to describe without being circular. key is the name associated with this feedback type/metric.

So it would be "correctness" or "user_score" or "accuracy" or "my-cool-name". Then all feedback with that key is aggregated as metrics.

It looks like you're trying to send a request without authentication there.
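For example, a minimal sketch in Python (the key name "user_score" is arbitrary, and the run id below is a placeholder for a run you actually want to score):

```python
import os
from langsmith import Client

# Authentication happens via your LangSmith API key (LANGCHAIN_API_KEY),
# which is unrelated to the feedback key used below.
client = Client(api_key=os.environ["LANGCHAIN_API_KEY"])

run_id = "00000000-0000-0000-0000-000000000000"  # placeholder: the id of a real traced run

# "user_score" is just a metric name; all feedback logged under the same key
# is aggregated together in the LangSmith UI.
client.create_feedback(
    run_id,
    key="user_score",
    score=1.0,
    comment="thumbs up from the app",
)
```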

hinthornw commented 2 days ago

To OP - you're never actually creating a client:

client = Client

Should be

client = Client()
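With that fixed, the feedback call from the snippet above would look roughly like this (a sketch only; run_collector and the earlier variables come from your existing code):

```python
from langsmith import Client

client = Client()  # instantiate the client; it picks up LANGCHAIN_API_KEY from the environment

run_id = run_collector.traced_runs[0].id  # id of the traced run, as in your snippet

if run_id:
    client.create_feedback(
        run_id,
        key="feedback-key",  # or any descriptive metric name, e.g. "user_score"
        score=1.0,
        comment="comment",
    )
```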