kserve / kserve

Standardized Serverless ML Inference Platform on Kubernetes
https://kserve.github.io/website/
Apache License 2.0

Inferencing PyTorch with curl returns an HTTP/1.1 404 Not Found error #1024

Closed taylanates24 closed 4 years ago

taylanates24 commented 4 years ago

/kind bug

What steps did you take and what happened: I installed KFServing on a minikube Kubernetes cluster following the guide at https://github.com/kubeflow/kfserving/blob/master/README.md, using the KFServing master branch and Kubernetes v1.18.3. Then I followed the steps in https://github.com/kubeflow/kfserving/tree/master/docs/samples/pytorch and successfully created inferenceservice.serving.kubeflow.org/pytorch-cifar10, but inference with curl was not successful. I used the following names:

MODEL_NAME=pytorch-cifar10
INPUT_PATH=@./input.json
SERVICE_HOSTNAME=$(kubectl get inferenceservice pytorch-cifar10 -o jsonpath='{.status.url}' | cut -d "/" -f 3)
export INGRESS_HOST=$(minikube ip)
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
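
As a quick sanity check (a minimal sketch, not part of the linked guide), the three variables can be echoed before running curl; all of them should print non-empty values:

# All three values must be non-empty, otherwise the request cannot be routed
echo "SERVICE_HOSTNAME=${SERVICE_HOSTNAME}"
echo "INGRESS_HOST=${INGRESS_HOST}"
echo "INGRESS_PORT=${INGRESS_PORT}"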

Because the EXTERNAL-IP value is none (or perpetually pending), my environment does not provide an external load balancer for the ingress gateway, so I use the following command:

curl -v -H "Host: ${SERVICE_HOSTNAME}" -d $INPUT_PATH http://${INGRESS_HOST}:${INGRESS_PORT}/v1/models/$MODEL_NAME:predict

and it gives the following output:

*   Trying 192.168.39.161...
* TCP_NODELAY set
* Connected to 192.168.39.161 (192.168.39.161) port 31603 (#0)
> POST /v1/models/pytorch-cifar10:predict HTTP/1.1
> Host: 
> User-Agent: curl/7.58.0
> Accept: */*
> Content-Length: 110681
> Content-Type: application/x-www-form-urlencoded
> Expect: 100-continue
> 
< HTTP/1.1 100 Continue
< HTTP/1.1 404 Not Found
< date: Fri, 14 Aug 2020 14:09:58 GMT
< server: istio-envoy
< connection: close
< content-length: 0
< 
* we are done reading and this is set to close, stop send
* Closing connection 0

I don't understand what the problem is or how I can solve it. Please help me; thank you in advance.

Anything else you would like to add:

I created a namespace as:

kubectl create namespace pytorch-test

After that:

kubectl apply -f pytorch.yaml -n pytorch-test

When I run the following command:

kubectl get inferenceservices pytorch-cifar10 -n pytorch-test

It gives the output:

NAME              URL                                               READY   DEFAULT TRAFFIC   CANARY TRAFFIC   AGE
pytorch-cifar10   http://pytorch-cifar10.pytorch-test.example.com   True    100                                15m

So I am not sure whether the URL "http://${INGRESS_HOST}:${INGRESS_PORT}/v1/models/$MODEL_NAME:predict" is correct. What should the URL be? I think the problem is a wrong URL.
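
For context, the request path /v1/models/$MODEL_NAME:predict itself does not change; Istio routes the request by the Host header, which should match the hostname shown in the URL column above. A minimal check, assuming the pytorch-test namespace used earlier:

# SERVICE_HOSTNAME should equal the hostname part of the InferenceService status URL
kubectl get inferenceservice pytorch-cifar10 -n pytorch-test -o jsonpath='{.status.url}'
echo "SERVICE_HOSTNAME=${SERVICE_HOSTNAME}"   # expected: pytorch-cifar10.pytorch-test.example.com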

What did you expect to happen: I expected to see predictions after running the curl command.

Environment:

issue-label-bot[bot] commented 4 years ago

Issue-Label Bot is automatically applying the labels:

Label Probability
area/inference 0.74

Please mark this comment with :thumbsup: or :thumbsdown: to give our bot feedback!

issue-label-bot[bot] commented 4 years ago

Issue Label Bot is not confident enough to auto-label this issue. See dashboard for more details.

taylanates24 commented 4 years ago

I solved the issue by updating SERVICE_HOSTNAME from

SERVICE_HOSTNAME=$(kubectl get inferenceservice pytorch-cifar10 -o jsonpath='{.status.url}' | cut -d "/" -f 3)

to

SERVICE_HOSTNAME=$(kubectl get inferenceservice pytorch-cifar10 -n pytorch-test -o jsonpath='{.status.url}' | cut -d "/" -f 3)

i.e. the -n pytorch-test namespace flag was missing.
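
For anyone hitting the same 404, the full corrected sequence looks roughly like this (a sketch assembled from the commands in this issue; the model name, namespace, and input file are the ones used above):

# Resolve the hostname from the InferenceService in its namespace
SERVICE_HOSTNAME=$(kubectl get inferenceservice pytorch-cifar10 -n pytorch-test -o jsonpath='{.status.url}' | cut -d "/" -f 3)
# NodePort access, since minikube provides no external load balancer
export INGRESS_HOST=$(minikube ip)
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
# Send the prediction request; the Host header carries the resolved hostname
curl -v -H "Host: ${SERVICE_HOSTNAME}" -d @./input.json http://${INGRESS_HOST}:${INGRESS_PORT}/v1/models/pytorch-cifar10:predict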

yuzisun commented 4 years ago

@taylanates24 Do you think we should make the instructions clearer?

Sangauppe commented 1 year ago

${INGRESS_HOST}:${INGRESS_PORT}/v1/models/$MODEL_NAME:predict

nguyenthai0107 commented 8 months ago

Hello @Sangauppe @yuzisun
I am facing the same issue, with a status code 500 error when calling sklearn-iris: `curl: (6) Could not resolve host: POST`
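
For reference, curl reports "(6) Could not resolve host: POST" when it ends up treating the word POST as the URL, which usually points to a broken line continuation or quoting in the command rather than a server-side problem. A well-formed request of this shape would look roughly like the following (a sketch; the iris-input.json file name is an assumption based on the standard sklearn sample, and -X POST can be omitted because -d already implies POST):

# Single-line request; Host header and Content-Type set explicitly
curl -v -H "Host: ${SERVICE_HOSTNAME}" -H "Content-Type: application/json" -d @./iris-input.json http://${INGRESS_HOST}:${INGRESS_PORT}/v1/models/sklearn-iris:predict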

shakteebiswal commented 7 months ago

@nguyenthai0107 I am facing the same issue; any idea how to resolve it?