Open CLRafaelR opened 1 year ago
@CLRafaelR Thank you for trying out my article and providing feedback.
The logs you see on the server side appear to be from the following code: https://github.com/mahm/custom_chatbot_server/blob/main/app/server/app.py#L56
Since you mentioned a JSON parsing failure on the Chatbot UI side, it would be good to first check whether json.dumps(json_data) on the line just below that code is returning a complete JSON response.
Additionally, if you can provide more information about which part of the Chatbot UI code (file and line number) is causing the parsing error and the specific response that triggers the issue, I might be able to assist you further. It seems that the Chatbot UI code has changed significantly since I last tested it.
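For example, one quick way to check is to round-trip each serialized frame before it is yielded at that line. The sketch below is only a rough illustration (the helper name safe_frame is mine, not part of the repository), so please adapt it to the actual code in app.py.

```python
import json
import logging


def safe_frame(json_data: dict) -> str:
    """Serialize one SSE frame, logging an error if the payload is not valid JSON.

    Rough sketch only; adapt to the code around the yield in app.py.
    """
    payload = json.dumps(json_data)
    try:
        json.loads(payload)  # round-trip check: raises if the frame is malformed
    except json.JSONDecodeError as exc:
        logging.error(f"Malformed frame: {exc} -- {payload!r}")
    return f"data: {payload}\n\n"
```

With that in place, the generator can yield safe_frame(json_data) instead of building the string inline, and any broken frame will show up in the server log.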
Thank you for your prompt response. I have added a debug print just before the line yield f"data: {json.dumps(json_data)}\n\n", together with a logging.info(f"reply: {text}") statement. The results are as follows:
INFO: 127.0.0.1:57725 - "POST /v1/chat/completions HTTP/1.1" 200 OK
{'id': '520f66bc-373d-493d-a2a2-6510ea07f5b6', 'object': 'text_completion', 'created': 1687757834, 'model': 'simple-conversation-chat', 'choices': [{'text': '', 'index': 0, 'logprobs': None, 'finish_reason': 'length', 'delta': {'content': ''}}]}
{'id': '520f66bc-373d-493d-a2a2-6510ea07f5b6', 'object': 'text_completion', 'created': 1687757834, 'model': 'simple-conversation-chat', 'choices': [{'text': '', 'index': 0, 'logprobs': None, 'finish_reason': 'length', 'delta': {'content': 'Of'}}]}
{'id': '520f66bc-373d-493d-a2a2-6510ea07f5b6', 'object': 'text_completion', 'created': 1687757834, 'model': 'simple-conversation-chat', 'choices': [{'text': '', 'index': 0, 'logprobs': None, 'finish_reason': 'length', 'delta': {'content': ' course'}}]}
{'id': '520f66bc-373d-493d-a2a2-6510ea07f5b6', 'object': 'text_completion', 'created': 1687757834, 'model': 'simple-conversation-chat', 'choices': [{'text': '', 'index': 0, 'logprobs': None, 'finish_reason': 'length', 'delta': {'content': ','}}]}
{'id': '520f66bc-373d-493d-a2a2-6510ea07f5b6', 'object': 'text_completion', 'created': 1687757834, 'model': 'simple-conversation-chat', 'choices': [{'text': '', 'index': 0, 'logprobs': None, 'finish_reason': 'length', 'delta': {'content': ' I'}}]}
{'id': '520f66bc-373d-493d-a2a2-6510ea07f5b6', 'object': 'text_completion', 'created': 1687757834, 'model': 'simple-conversation-chat', 'choices': [{'text': '', 'index': 0, 'logprobs': None, 'finish_reason': 'length', 'delta': {'content': ' will'}}]}
{'id': '520f66bc-373d-493d-a2a2-6510ea07f5b6', 'object': 'text_completion', 'created': 1687757834, 'model': 'simple-conversation-chat', 'choices': [{'text': '', 'index': 0, 'logprobs': None, 'finish_reason': 'length', 'delta': {'content': ' do'}}]}
{'id': '520f66bc-373d-493d-a2a2-6510ea07f5b6', 'object': 'text_completion', 'created': 1687757834, 'model': 'simple-conversation-chat', 'choices': [{'text': '', 'index': 0, 'logprobs': None, 'finish_reason': 'length', 'delta': {'content': ' my'}}]}
{'id': '520f66bc-373d-493d-a2a2-6510ea07f5b6', 'object': 'text_completion', 'created': 1687757834, 'model': 'simple-conversation-chat', 'choices': [{'text': '', 'index': 0, 'logprobs': None, 'finish_reason': 'length', 'delta': {'content': ' best'}}]}
{'id': '520f66bc-373d-493d-a2a2-6510ea07f5b6', 'object': 'text_completion', 'created': 1687757834, 'model': 'simple-conversation-chat', 'choices': [{'text': '', 'index': 0, 'logprobs': None, 'finish_reason': 'length', 'delta': {'content': ' to'}}]}
{'id': '520f66bc-373d-493d-a2a2-6510ea07f5b6', 'object': 'text_completion', 'created': 1687757834, 'model': 'simple-conversation-chat', 'choices': [{'text': '', 'index': 0, 'logprobs': None, 'finish_reason': 'length', 'delta': {'content': ' assist'}}]}
{'id': '520f66bc-373d-493d-a2a2-6510ea07f5b6', 'object': 'text_completion', 'created': 1687757834, 'model': 'simple-conversation-chat', 'choices': [{'text': '', 'index': 0, 'logprobs': None, 'finish_reason': 'length', 'delta': {'content': ' you'}}]}
{'id': '520f66bc-373d-493d-a2a2-6510ea07f5b6', 'object': 'text_completion', 'created': 1687757834, 'model': 'simple-conversation-chat', 'choices': [{'text': '', 'index': 0, 'logprobs': None, 'finish_reason': 'length', 'delta': {'content': '.'}}]}
{'id': '520f66bc-373d-493d-a2a2-6510ea07f5b6', 'object': 'text_completion', 'created': 1687757834, 'model': 'simple-conversation-chat', 'choices': [{'text': '', 'index': 0, 'logprobs': None, 'finish_reason': 'length', 'delta': {'content': ' What'}}]}
{'id': '520f66bc-373d-493d-a2a2-6510ea07f5b6', 'object': 'text_completion', 'created': 1687757834, 'model': 'simple-conversation-chat', 'choices': [{'text': '', 'index': 0, 'logprobs': None, 'finish_reason': 'length', 'delta': {'content': ' can'}}]}
{'id': '520f66bc-373d-493d-a2a2-6510ea07f5b6', 'object': 'text_completion', 'created': 1687757834, 'model': 'simple-conversation-chat', 'choices': [{'text': '', 'index': 0, 'logprobs': None, 'finish_reason': 'length', 'delta': {'content': ' I'}}]}
{'id': '520f66bc-373d-493d-a2a2-6510ea07f5b6', 'object': 'text_completion', 'created': 1687757834, 'model': 'simple-conversation-chat', 'choices': [{'text': '', 'index': 0, 'logprobs': None, 'finish_reason': 'length', 'delta': {'content': ' help'}}]}
{'id': '520f66bc-373d-493d-a2a2-6510ea07f5b6', 'object': 'text_completion', 'created': 1687757834, 'model': 'simple-conversation-chat', 'choices': [{'text': '', 'index': 0, 'logprobs': None, 'finish_reason': 'length', 'delta': {'content': ' you'}}]}
{'id': '520f66bc-373d-493d-a2a2-6510ea07f5b6', 'object': 'text_completion', 'created': 1687757834, 'model': 'simple-conversation-chat', 'choices': [{'text': '', 'index': 0, 'logprobs': None, 'finish_reason': 'length', 'delta': {'content': ' with'}}]}
{'id': '520f66bc-373d-493d-a2a2-6510ea07f5b6', 'object': 'text_completion', 'created': 1687757834, 'model': 'simple-conversation-chat', 'choices': [{'text': '', 'index': 0, 'logprobs': None, 'finish_reason': 'length', 'delta': {'content': '?'}}]}
{'id': '520f66bc-373d-493d-a2a2-6510ea07f5b6', 'object': 'text_completion', 'created': 1687757834, 'model': 'simple-conversation-chat', 'choices': [{'text': '', 'index': 0, 'logprobs': None, 'finish_reason': 'length', 'delta': {'content': ''}}]}
{'id': '520f66bc-373d-493d-a2a2-6510ea07f5b6', 'object': 'text_completion', 'created': 1687757834, 'model': 'simple-conversation-chat', 'choices': [{'text': 'Of course, I will do my best to assist you. What can I help you with?', 'index': 0, 'logprobs': None, 'finish_reason': 'length', 'delta': {'content': ''}}]}
INFO:root:reply: Of course, I will do my best to assist you. What can I help you with?
However, these results differ from the response format specified in OpenAI's official documentation (see Create chat completion):
{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "created": 1677652288,
  "choices": [{
    "index": 0,
    "message": {
      "role": "assistant",
      "content": "\n\nHello there, how may I assist you today?"
    },
    "finish_reason": "stop"
  }],
  "usage": {
    "prompt_tokens": 9,
    "completion_tokens": 12,
    "total_tokens": 21
  }
}
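(As a side note, when streaming is enabled the official API does not send the object above as-is: it emits a series of chat.completion.chunk objects whose choices carry a delta instead of a message, terminated by a data: [DONE] line. The chunk below is purely illustrative, written from memory of the documentation rather than taken from the article or from my logs.)

```python
# Illustrative only: rough shape of a single streaming chunk from the official API.
example_chunk = {
    "id": "chatcmpl-123",
    "object": "chat.completion.chunk",
    "created": 1677652288,
    "model": "gpt-3.5-turbo",
    "choices": [
        {
            "index": 0,
            "delta": {"content": "Hello"},
            "finish_reason": None,  # becomes "stop" (or "length") only on the final chunk
        }
    ],
}
```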
Therefore, I have revised the code of the streaming_response() function with the following modifications:
- removed the text key from json_without_choices["choices"];
- added "message": {"role": "assistant", "content": ""} to json_without_choices["choices"];
- stored the accumulated text variable in that "message" object at the end of the stream.
Please find the updated code below:
async def streaming_response(json_data, chat_generator):
    json_without_choices = json_data.copy()
    json_without_choices["choices"] = [
        {
            # "text": "",
            "index": 0,
            "message": {
                "role": "assistant",
                "content": "",
            },
            "logprobs": None,
            "finish_reason": "length",
            "delta": {
                "content": "",
            },
        }
    ]
    # logging.info(f"Sending initial JSON: {json_without_choices}")
    yield f"data: {json.dumps(json_without_choices)}\n\n"  # NOTE: EventSource
    text = ""
    for chunk in chat_generator:
        text += chunk
        # print(chunk)
        json_data["choices"][0]["delta"] = {"content": chunk}
        # logging.info(f"Sending chunk: {json.dumps(json_data)}")
        print(json.dumps(json_data))
        yield f"data: {json.dumps(json_data)}\n\n"  # NOTE: EventSource
        # print(json.dumps(json_data))
    json_data["choices"][0]["message"] = {"content": text}
    logging.info(f"{text}")
    # print(text)
    yield f"data: {json.dumps(json_data)}\n\n"  # NOTE: EventSource
    print(json.dumps(json_data))
    yield "data: [DONE]\n\n"  # NOTE: EventSource
The revised code appears to adhere to the format specified in the official documentation of OpenAI:
INFO: 127.0.0.1:58146 - "POST /v1/chat/completions HTTP/1.1" 200 OK
{"id": "a2e96a52-8472-41be-807f-e2f30c7d48b7", "object": "chat.completion", "created": 1687758609, "model": "simple-conversation-chat", "choices": [{"index": 0, "logprobs": null, "finish_reason": "length", "delta": {"content": ""}}]}
{"id": "a2e96a52-8472-41be-807f-e2f30c7d48b7", "object": "chat.completion", "created": 1687758609, "model": "simple-conversation-chat", "choices": [{"index": 0, "logprobs": null, "finish_reason": "length", "delta": {"content": "Of"}}]}
{"id": "a2e96a52-8472-41be-807f-e2f30c7d48b7", "object": "chat.completion", "created": 1687758609, "model": "simple-conversation-chat", "choices": [{"index": 0, "logprobs": null, "finish_reason": "length", "delta": {"content": " course"}}]}
{"id": "a2e96a52-8472-41be-807f-e2f30c7d48b7", "object": "chat.completion", "created": 1687758609, "model": "simple-conversation-chat", "choices": [{"index": 0, "logprobs": null, "finish_reason": "length", "delta": {"content": "!"}}]}
{"id": "a2e96a52-8472-41be-807f-e2f30c7d48b7", "object": "chat.completion", "created": 1687758609, "model": "simple-conversation-chat", "choices": [{"index": 0, "logprobs": null, "finish_reason": "length", "delta": {"content": " What"}}]}
{"id": "a2e96a52-8472-41be-807f-e2f30c7d48b7", "object": "chat.completion", "created": 1687758609, "model": "simple-conversation-chat", "choices": [{"index": 0, "logprobs": null, "finish_reason": "length", "delta": {"content": " can"}}]}
{"id": "a2e96a52-8472-41be-807f-e2f30c7d48b7", "object": "chat.completion", "created": 1687758609, "model": "simple-conversation-chat", "choices": [{"index": 0, "logprobs": null, "finish_reason": "length", "delta": {"content": " I"}}]}
{"id": "a2e96a52-8472-41be-807f-e2f30c7d48b7", "object": "chat.completion", "created": 1687758609, "model": "simple-conversation-chat", "choices": [{"index": 0, "logprobs": null, "finish_reason": "length", "delta": {"content": " help"}}]}
{"id": "a2e96a52-8472-41be-807f-e2f30c7d48b7", "object": "chat.completion", "created": 1687758609, "model": "simple-conversation-chat", "choices": [{"index": 0, "logprobs": null, "finish_reason": "length", "delta": {"content": " you"}}]}
{"id": "a2e96a52-8472-41be-807f-e2f30c7d48b7", "object": "chat.completion", "created": 1687758609, "model": "simple-conversation-chat", "choices": [{"index": 0, "logprobs": null, "finish_reason": "length", "delta": {"content": " with"}}]}
{"id": "a2e96a52-8472-41be-807f-e2f30c7d48b7", "object": "chat.completion", "created": 1687758609, "model": "simple-conversation-chat", "choices": [{"index": 0, "logprobs": null, "finish_reason": "length", "delta": {"content": "?"}}]}
{"id": "a2e96a52-8472-41be-807f-e2f30c7d48b7", "object": "chat.completion", "created": 1687758609, "model": "simple-conversation-chat", "choices": [{"index": 0, "logprobs": null, "finish_reason": "length", "delta": {"content": ""}}]}
INFO:root:Of course! What can I help you with?
{"id": "a2e96a52-8472-41be-807f-e2f30c7d48b7", "object": "chat.completion", "created": 1687758609, "model": "simple-conversation-chat", "choices": [{"index": 0, "logprobs": null, "finish_reason": "length", "delta": {"content": ""}, "message": {"content": "Of course! What can I help you with?"}}]}
The outcome is still not visible in Chatbot UI's web user interface. I suspect that the StreamingResponse() returned from the @app.post(...) handler might not be producing valid JSON, but I have not been able to inspect its output. Could you please advise on an appropriate way to print the result of StreamingResponse()?
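In the meantime, one way I can think of to inspect the raw stream outside the browser is to consume the endpoint with a streaming HTTP client. The sketch below relies on my own assumptions about the setup (in particular the port 8000 and the request fields), so please adjust it as needed.

```python
# Rough sketch: print every SSE frame exactly as a client receives it.
# Assumes the server listens on 127.0.0.1:8000 and accepts an OpenAI-style request body.
import httpx

payload = {
    "model": "simple-conversation-chat",
    "messages": [{"role": "user", "content": "Hello"}],
    "stream": True,
}
with httpx.stream(
    "POST",
    "http://127.0.0.1:8000/v1/chat/completions",
    json=payload,
    timeout=None,
) as response:
    for line in response.iter_lines():
        if line:
            print(repr(line))  # each "data: {...}" frame, including the final "data: [DONE]"
```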
@mahm
My colleague has successfully identified the cause of the malfunction. The code added in commit 25a4dbb052542898a48695c27cc8cefec28b5756 appears to be responsible: the JSON data sent from the server is not being received correctly by the Chatbot UI. By commenting out the code indicated at the following link, we are able to establish proper communication between the latest Chatbot UI and the server's JSON data: https://github.com/mckaywrigley/chatbot-ui/blob/fa3f6e93bbe0d1ff9f208ddefae6fc7dfb738dc7/utils/server/index.ts#L95-L98
Thank you for your prompt response once again, and I highly appreciate your brilliant idea that allows us to seamlessly integrate our customized LangChain workflow with Chatbot UI!
I apologize for the inconvenience, and I appreciate your continuous communication. However, simply commenting out the code as mentioned in the previous comment results in the following error on the command line of Chatbot UI:
error - SyntaxError: Unexpected token 'D', "[DONE]" is not valid JSON
at JSON.parse (<anonymous>)
at onParse (webpack-internal:///(middleware)/./utils/server/index.ts:70:43)
at parseEventStreamLine (webpack-internal:///(middleware)/./node_modules/eventsource-parser/dist/index.js:78:9)
at Object.feed (webpack-internal:///(middleware)/./node_modules/eventsource-parser/dist/index.js:66:7)
at Object.start (webpack-internal:///(middleware)/./utils/server/index.ts:85:24)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
This error is caused by the removal of the following code in commit 25a4dbb052542898a48695c27cc8cefec28b5756:
if (data === '[DONE]') {
  controller.close();
  return;
}
To resolve the error, the above code needs to be restored; restoring it does in fact make the error go away.
I have tried to deploy a customised GPT model (the ones you implemented in simple_conversation_chat.py and summary_conversation_chat.py) to Chatbot UI. I established the server for the customised models according to the instructions provided in your note article, and then launched the server for Chatbot UI. Unfortunately, the response from the customised models does not appear in Chatbot UI; instead, it is displayed in the server's command line. The responses in the command line always begin with the text INFO:root:reply:. Additionally, an error message ([SyntaxError: Unexpected token I in JSON at position 0]) appeared on the command line of Chatbot UI's server. Could you advise me on what steps I should take to remedy this situation?