Start the inference server. This launches the gRPC server on port 8085 and the HTTP server on port 8080.
$ python start_runtime.py
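Before launching the servers, `start_runtime.py` configures the caikit runtime. The sketch below shows what that configuration plausibly looks like, based on the caikit text-sentiment example; the key names (`library`, `local_models_dir`, the `grpc`/`http` sub-keys) are assumptions, so check them against your caikit version.

```python
# Hypothetical sketch of the config dict assumed to be passed to
# caikit.config.configure() by start_runtime.py. Key names are
# assumptions from the caikit text-sentiment example, not confirmed here.
runtime_config = {
    "merge_strategy": "merge",
    "runtime": {
        "library": "text_sentiment",   # module that registers the task
        "local_models_dir": "models",  # directory holding the model
        "grpc": {"enabled": True, "port": 8085},
        "http": {"enabled": True, "port": 8080},
    },
}

print(runtime_config["runtime"]["grpc"]["port"])
print(runtime_config["runtime"]["http"]["port"])
```

If the HTTP port or route prefix in this config differs from what the client assumes, that alone can produce the "Not Found" responses shown below.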
Start the client, which sends text inputs to both the gRPC and HTTP endpoints. (Note: I am facing an issue where the HTTP endpoint does not respond with predictions. I tried cloning the caikit repository and running the example directly, and there the HTTP endpoint responds with predictions correctly. So the issue is something to do with the way I am trying to reproduce the example.)
✗ python client.py
Text: I am not feeling well today
Response from gRPC: classes {
class_name: "NEGATIVE"
conf: 0.99970537424087524
}
Text: Today is a nice sunny day
Response from gRPC: classes {
class_name: "POSITIVE"
conf: 0.999869704246521
}
Text: I am not feeling well today
RESPONSE from HTTP: {
"detail": "Not Found"
}
Text: Today is a nice sunny day
RESPONSE from HTTP: {
"detail": "Not Found"
}
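The `"Not Found"` responses suggest the client is POSTing to a path the HTTP server never registered, rather than the server failing to predict. Caikit's HTTP runtime is FastAPI-based, so the registered routes can be inspected at `http://localhost:8080/docs` (or `/openapi.json`). The helper below is a hypothetical sketch for building the URL to compare against those routes; the `/api/v1/task/...` pattern is an assumption, not confirmed from this notebook.

```python
# Hypothetical helper: build the task-prediction URL that caikit's HTTP
# runtime is assumed to register. The route pattern is an assumption;
# verify it against the routes your server lists in its OpenAPI spec.
def task_url(host: str, port: int, task: str) -> str:
    return f"http://{host}:{port}/api/v1/task/{task}"

print(task_url("localhost", 8080, "text-sentiment"))
```

If the printed URL does not match a route in the server's `/docs` page, adjusting the client's request path to the listed route is the first thing to try.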
This notebook is based on the resources found in this comment:
https://github.com/manisnesan/fastchai/issues/57#issue-1938054138