When I ask a question in Lex using the Test Chatbot, it sends the query to lex-codehook-fn and the Lambda function starts working. With DEBUG logging I can see that it gets my question, builds a prompt for the LLM, and even returns the top 5 matches internally, but for some reason (even when I drop the threshold to 0.1) I never get an answer back.
In fact, I put the OUT_OF_DOMAIN_RESPONSE behind a try/except, and what is happening is that the "try" is falling through to the "except". That seems to mean the model has no confidence in the answers, even though they are coming back with scores of 0.6 and higher?
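The wrapper I added is roughly this (simplified; OUT_OF_DOMAIN_RESPONSE is the fallback string):

try:
    # query_engine and query_input are set up as shown at the end of this post
    answer = str(query_engine.query(query_input))
except Exception:
    # any failure inside query() lands here, not just a low-confidence result
    answer = OUT_OF_DOMAIN_RESPONSE

Because that except is a blanket catch-all, the fallback firing doesn't by itself tell me whether it is really a confidence problem or some other exception inside query().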
Either way, the only response I ever get in the chatbot test window is: "I'm sorry, but I am only able to give responses regarding the source topic."
This despite the fact that the function actually did work and has an answer?
Here is the relevant code from the Lambda function:

query_engine = RetrieverQueryEngine(retriever=retriever, response_synthesizer=synth)
query_input = event["inputTranscript"]
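and, roughly, the rest of the handler (simplified, variable names approximate; this is the Lex V2 response shape and the llama_index>=0.10 import paths):

from llama_index.core.postprocessor import SimilarityPostprocessor
from llama_index.core.query_engine import RetrieverQueryEngine

def lambda_handler(event, context):
    # retriever, synth and OUT_OF_DOMAIN_RESPONSE are created at module load (not shown)
    query_input = event["inputTranscript"]

    # drop retrieved nodes that score below the cutoff (set to 0.1 while testing)
    query_engine = RetrieverQueryEngine(
        retriever=retriever,
        response_synthesizer=synth,
        node_postprocessors=[SimilarityPostprocessor(similarity_cutoff=0.1)],
    )

    try:
        answer = str(query_engine.query(query_input))
    except Exception:
        answer = OUT_OF_DOMAIN_RESPONSE

    # hand the answer back to Lex as a "Close" dialog action
    return {
        "sessionState": {
            "dialogAction": {"type": "Close"},
            "intent": {
                "name": event["sessionState"]["intent"]["name"],
                "state": "Fulfilled",
            },
        },
        "messages": [{"contentType": "PlainText", "content": answer}],
    }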