opea-project / GenAIExamples

Generative AI Examples is a collection of GenAI examples such as ChatQnA, Copilot, which illustrate the pipeline capabilities of the Open Platform for Enterprise AI (OPEA) project.
https://opea.dev
Apache License 2.0

OPEA project errors from TCS #410

Closed: preethivenkatesh closed this issue 1 month ago

preethivenkatesh commented 1 month ago

Repo used for testing: https://github.com/opea-project/GenAIExamples/tree/main/ChatQnA/docker/xeon

OPEA Project Errors

  2. Embedding Microservice

curl : Internal Server Error At line:1 char:1

  5. Reranking Microservice

curl : Internal Server Error At line:1 char:1

  6. TGI Service

curl -Uri "http://192.168.4.196:9009/generate" -Method Post -Body '{"inputs":"What is Deep Learning?","parameters":{"max_new_tokens":17, "do_sample": true}}' -ContentType 'application/json'

  7. LLM Microservice

curl -Uri "http://192.168.4.196:9009/generate" -Method Post -Body '{"inputs":"What is Deep Learning?","parameters":{"max_new_tokens":17, "do_sample": true}}' -ContentType 'application/json'

  8. Megaservice

curl http://${host_ip}:8888/v1/chatqna -H "Content-Type: application/json" -d '{ "messages": "What is the revenue of Nike in 2023?" }'

yinghu5 commented 1 month ago

We noticed Microsoft.PowerShell.Commands.InvokeWebRequestCommand in the output. Could you please help check how the customer is trying ChatQnA?

Are they using a Windows PowerShell machine to access a separate Linux machine?

First, from the messages, the main issue appears to be a network error.

For example:

2. Embedding Microservice

curl : Internal Server Error
At line:1 char:1
+ curl -Uri http://192.168.4.196:6000/v1/embeddings -Method Post -Bod ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidOperation: (System.Net.HttpWebRequest:HttpWebRequest) [Invoke-WebRequest], WebException
    + FullyQualifiedErrorId : WebCmdletWebResponseException,Microsoft.PowerShell.Commands.InvokeWebRequestCommand
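The InvokeWebRequestCommand line above shows this was run in Windows PowerShell, where `curl` is an alias for `Invoke-WebRequest` (hence the `-Uri`/`-Method` syntax). For comparison, a bash equivalent of the same embedding check might look like the sketch below; the endpoint path and payload shape are assumptions based on the ChatQnA validation guide, so adjust them to your deployment:

```shell
# Internal address from the report; replace with your own host IP.
host_ip=192.168.4.196

# --connect-timeout keeps the check from hanging when the host is unreachable;
# the fallback message makes a network failure obvious.
resp=$(curl --connect-timeout 2 -s "http://${host_ip}:6000/v1/embeddings" \
  -X POST \
  -H 'Content-Type: application/json' \
  -d '{"text":"What is Deep Learning?"}' \
  || echo "embedding service not reachable from this network")
echo "$resp"
```

If the bash version succeeds from the Linux host itself but the PowerShell version fails from Windows, the problem is network reachability rather than the microservice.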

The microservice expects an externally reachable IP address. If they use a local IP address such as 192.168.4.196, they must run the command from inside the same internal network (192.168.4.xxx).
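To find the address to use for ${host_ip}, one common approach on Linux is shown below; `hostname -I` availability is an assumption (it exists on most distros), and the loopback fallback is only there so the sketch never leaves the variable empty:

```shell
# Pick the machine's primary IP as host_ip (first address reported).
host_ip=$(hostname -I 2>/dev/null | awk '{print $1}')
# Fallback so the variable is never empty in this sketch.
host_ip=${host_ip:-127.0.0.1}
echo "host_ip=${host_ip}"
```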

Xigui has a validation guide: GenAIExamples/ChatQnA/docker/gaudi/how_to_validate_service.md at main · opea-project/GenAIExamples (github.com)

The expected IP address is usually ${host_ip} = the external IP address.

Second, the same port cannot be used for different services:

  6. TGI Service

curl -Uri http://192.168.4.196:9009/generate -Method Post -Body '{"inputs":"What is Deep Learning?","parameters":{"max_new_tokens":17, "do_sample": true}}' -ContentType 'application/json'

  7. LLM Microservice

curl -Uri http://192.168.4.196:9009/generate -Method Post -Body '{"inputs":"What is Deep Learning?","parameters":{"max_new_tokens":17, "do_sample": true}}' -ContentType 'application/json'

The expected ports for each service are listed in https://github.com/opea-project/GenAIExamples/blob/main/ChatQnA/docker/gaudi/how_to_validate_service.md
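The point is that the TGI server and the LLM microservice are distinct containers and listen on distinct ports, so the two checks must not reuse the same URL. The port values and the LLM endpoint path below are assumptions taken from the linked guide's defaults (9009 for TGI, 9000 for the LLM wrapper); verify them against your own compose file. The commands are printed rather than executed, since the services may not be reachable from where this runs:

```shell
host_ip=192.168.4.196   # example internal address from this thread
tgi_port=9009           # assumed: TGI server, raw /generate API
llm_port=9000           # assumed: OPEA LLM microservice, its own API

# Two different services, two different ports, two different endpoints:
echo "TGI check:  curl http://${host_ip}:${tgi_port}/generate -X POST ..."
echo "LLM check:  curl http://${host_ip}:${llm_port}/v1/chat/completions -X POST ..."
```

Running both checks against port 9009, as in the report, only ever tests the TGI server and never exercises the LLM microservice.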


xiguiw commented 1 month ago

One more thing: try curl -vv -X POST http://${host_ip}:6000/v1/embeddings

  1. Replace host_ip with your host IP address.
  2. Check whether there is a network proxy issue. If the request is being routed through a proxy, add your host_ip to no_proxy.
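Step 2 can be sketched as follows; the address is just the example from this thread, and the `${no_proxy:+...}` expansion is a small guard so a comma is only prepended when no_proxy already has entries:

```shell
# Ensure traffic to the host IP bypasses any configured HTTP proxy.
host_ip=192.168.4.196
export no_proxy="${no_proxy:+${no_proxy},}${host_ip}"
echo "no_proxy=${no_proxy}"
```

After exporting, rerun the curl -vv check; the verbose output will show whether the connection still goes through the proxy.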
xiguiw commented 1 month ago

Closing, as no further action is needed.