Closed nossu3751 closed 2 months ago
@nossu3751 Your issue is the same as #410.
Please refer to #410, try the suggestions there, and report back with more information, thanks! BTW, could you share the link to the instructions you followed to build and run the ChatQnA example?
@nossu3751
You listed "CPU: 13th Gen Intel(R) Core(TM) i7-1365U". Is your hardware a laptop?
Please follow the instructions here: https://github.com/opea-project/GenAIExamples/blob/main/ChatQnA/docker/aipc/README.md
Running ChatQnA on a laptop is expected to be slow; it is intended for Xeon (data center server CPUs) and Gaudi.
The service logs should show how initialization is progressing. Once the service processes go idle in top, they should be ready to accept incoming messages.
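The readiness check described above can be sketched as a small polling helper. This is a minimal sketch; the health-endpoint URL and port in the usage comment are assumptions for illustration, not values taken from the ChatQnA compose files:

```shell
#!/bin/sh
# wait_ready PROBE_CMD RETRIES DELAY
# Re-runs PROBE_CMD until it exits 0 (service ready) or the retry
# budget is spent. Returns 0 when ready, 1 on timeout.
wait_ready() {
  probe_cmd=$1; retries=$2; delay=$3
  i=0
  while [ "$i" -lt "$retries" ]; do
    if sh -c "$probe_cmd" >/dev/null 2>&1; then
      return 0
    fi
    i=$((i + 1))
    sleep "$delay"
  done
  return 1
}

# Example usage (the URL and port are an assumed TGI mapping; adjust to
# your own compose file):
# wait_ready "curl -sf http://localhost:8008/health" 60 5
```

Polling an HTTP endpoint like this is usually more reliable than watching CPU usage in top, since a process can look idle while it is still loading model weights.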
@nossu3751, what's the status? Were you able to run the ChatQnA example?
Closing, as no further action is needed.
Hello,
I am experiencing issues while trying to run the microservices in the ChatQnA example of the OPEA Project. Below are the details of my hardware and software environment:
Hardware Setting:
CPU: 13th Gen Intel(R) Core(TM) i7-1365U
RAM: 32GB
OS Version: Windows 11
I successfully built the required Docker images without any errors. Here is the list of images:
REPOSITORY                                      TAG        IMAGE ID       CREATED          SIZE
opea/chatqna-conversation-ui                    latest     bb15086aed9d   19 minutes ago   43.9MB
opea/chatqna-ui                                 latest     97176f37e636   29 minutes ago   1.5GB
opea/chatqna                                    latest     8cbcd63cd330   32 minutes ago   758MB
opea/dataprep-redis                             latest     75c08c5db8b2   38 minutes ago   4.16GB
opea/llm-tgi                                    latest     2b36da215e7e   42 minutes ago   2.42GB
opea/reranking-tei                              latest     11883759ed1f   43 minutes ago   2.55GB
opea/retriever-redis                            latest     d15554844f89   45 minutes ago   3.75GB
opea/embedding-tei                              latest     cfae34306b4c   49 minutes ago   3.48GB
ghcr.io/huggingface/text-generation-inference   2.1.0      163c78bc6f7d   2 weeks ago      10.7GB
ghcr.io/huggingface/text-embeddings-inference   cpu-1.2    51c71b7cb250   2 months ago     637MB
redis/redis-stack                               7.2.0-v9   59d6058ec513   4 months ago     791MB
Issue:
When I try to run each microservice using the computer's IP address, some services run successfully, for example:
TEI Embedding Service
Retriever
However, other microservices fail with various errors. Here are some examples of the issues I encounter:
2. Embedding Microservice
Error Message: curl : Internal Server Error At line:1 char:1
5. Reranking Microservice
Error Message: curl : Internal Server Error At line:1 char:1
6. TGI Service
Error Message: curl : Unable to connect to the remote server At line:1 char:1
7. LLM Microservice
Error Message: curl : Unable to connect to the remote server At line:1 char:1
The logs are not very detailed and do not contain messages that point to a cause. Despite this, I would appreciate any assistance in pinpointing what the failing microservices have in common that might be causing these errors.
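One way to narrow down what the failing services share is to check which host ports accept a TCP connection at all, since "Unable to connect" suggests nothing is listening, while "Internal Server Error" means the service answered but failed internally. Below is a minimal bash sketch; the service names and port numbers are assumptions based on typical ChatQnA compose defaults, not taken from this setup. Note also that in Windows PowerShell, curl is an alias for Invoke-WebRequest, so this script is meant to be run from WSL or Git Bash:

```shell
#!/bin/bash
# tcp_open HOST PORT: succeed iff a TCP connection can be opened.
# Uses bash's /dev/tcp pseudo-device, so no extra tools are needed.
tcp_open() {
  (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null
}

# Ports below are assumed from a typical ChatQnA compose file; edit
# them to match the port mappings in your own deployment.
for svc in "tei-embedding:6006" "embedding:6000" "retriever:7000" \
           "reranking:8000" "tgi:8008" "llm:9000"; do
  name=${svc%%:*}
  port=${svc##*:}
  if tcp_open localhost "$port"; then
    echo "$name (port $port): reachable"
  else
    echo "$name (port $port): NOT reachable"
  fi
done
```

If a port is not reachable, docker ps and docker logs for that container are the next place to look; if it is reachable but returns Internal Server Error, the container's own logs usually show the underlying exception.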
Thank you for your assistance!