opea-project / GenAIExamples

Generative AI Examples is a collection of GenAI examples, such as ChatQnA and Copilot, that illustrate the pipeline capabilities of the Open Platform for Enterprise AI (OPEA) project.
https://opea.dev
Apache License 2.0

[Doc] The ChatQnA Readme for customizing UI front end service may be incomplete #921

Open mkbhanda opened 1 month ago

mkbhanda commented 1 month ago

Priority

P4-Low

OS type

Ubuntu

Hardware type

Xeon-GNR

Installation method

Deploy method

Running nodes

Single Node

What's the version?

v1.0, using images from Docker Hub.

Description

In https://github.com/opea-project/GenAIExamples/tree/main/ChatQnA/docker_compose/intel/cpu/xeon we provide instructions for swapping the simple curl-based UI for a conversational UI. The instructions require more than a simple cut-and-paste replacement in the compose file, because they mention fewer environment variables and veer away from the $ENV variables used in the original compose file.

Our getting-started ChatQnA guide, while rich in other ways (such as capturing metrics), appears to skip this customization instruction entirely: https://opea-project.github.io/1.0/GenAIExamples/ChatQnA/README.html

We also need to remind users to change the Nginx dependency from the default UI service to the conversation-ui service.
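For illustration, the edits the README implies might look like the compose fragment below. This is only a sketch: the service names, image tag, port mapping, and environment variables are assumptions based on the v1.0 Xeon compose file and should be checked against the actual README.

```yaml
# Sketch only -- names and values are assumptions, verify against the README.
# Replace the default ui service with the conversational UI:
  chatqna-xeon-conversation-ui-server:
    image: opea/chatqna-conversation-ui:latest
    container_name: chatqna-xeon-conversation-ui-server
    environment:
      - APP_BACKEND_SERVICE_ENDPOINT=${BACKEND_SERVICE_ENDPOINT}
      - APP_DATA_PREP_SERVICE_URL=${DATAPREP_SERVICE_ENDPOINT}
    ports:
      - "5174:80"
    depends_on:
      - chatqna-xeon-backend-server
    restart: unless-stopped

# ...and update the Nginx dependency accordingly:
  chatqna-xeon-nginx-server:
    depends_on:
      # was: chatqna-xeon-ui-server
      - chatqna-xeon-conversation-ui-server
```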

Reproduce steps

Just view and follow along with the instructions in README.md.

Raw log

No response

surfiniaburger commented 1 month ago

Hi @mkbhanda! I made a draft on this at https://docs.google.com/document/d/1MPPpFC9WYpR1c6LGjWLLWB_cut8Bi5-AhEtm76x8K0U/edit?usp=sharing. Can I be assigned this issue? I would love to know what steps I could take to make it fit the description you have outlined.

mkbhanda commented 3 weeks ago

Thank you @surfiniaburger for self-assigning!! I looked at your Google Docs link and appreciate your careful analysis of the situation. I like how you suggest making a backup of the compose file first. Would you kindly consider writing a script, "use_conversation_ui.sh" (or a name you prefer), that makes all the relevant changes, to reduce human error? It would also be good to add a test in ChatQnA that alerts us that the script may be broken if any of those environment variables change (endpoints, ports, ..). This is our flagship example and worth keeping in ship shape.