Is your feature request related to a problem? Please describe.
Currently we're trying to scale up Parsr on our Kubernetes cluster using KEDA, but the issue we're facing is that we cannot guarantee that subsequent GET requests (to fetch progress and results) will arrive at the same pod to which the document was initially submitted via POST. This prevents the autoscaling from working properly.
Describe the solution you'd like
I want an event-driven approach, where I "fire-and-forget" submit a document to a service (without knowing which worker handles it), get a pointer back, and eventually get the results (by polling or push).
Specifically, I would like the pod to:
send updates about its progress (initial receipt of the request, interim progress, and final status) to a shared store, e.g. a key-value store (like DynamoDB) or another fast database.
send the results to a document store (e.g. S3, or perhaps DynamoDB if the output is small enough).
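To illustrate the flow I have in mind, here is a minimal in-memory sketch. The `JobStore` class and all names in it are hypothetical stand-ins (the `status` dict plays the DynamoDB role, `results` the S3 role); this is not Parsr's actual API, just the shape of the decoupling I'm asking for:

```python
import uuid

class JobStore:
    """Hypothetical shared store decoupling clients from worker pods."""

    def __init__(self):
        self.status = {}   # job id -> progress stage (key-value store role)
        self.results = {}  # job id -> parsed output (document store role)

    def submit(self, document):
        # The "pointer" handed back to the client on POST.
        job_id = str(uuid.uuid4())
        self.status[job_id] = "received"
        return job_id

    def update(self, job_id, stage):
        # Any worker pod can record interim progress here.
        self.status[job_id] = stage

    def complete(self, job_id, result):
        self.results[job_id] = result
        self.status[job_id] = "done"

    def poll(self, job_id):
        # Any pod can answer the GET, since state lives outside the pod.
        return self.status[job_id], self.results.get(job_id)

store = JobStore()
jid = store.submit(b"%PDF- ...")       # fire-and-forget POST
store.update(jid, "processing")        # interim progress from a worker
store.complete(jid, {"pages": 3})      # final result written to the store
assert store.poll(jid) == ("done", {"pages": 3})
```

With state externalized like this, the client's later GETs can land on any pod, which is exactly what KEDA-driven scaling needs.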
Describe alternatives you've considered
Our current workaround is running only a single pod, which avoids the routing problem but gives up scaling entirely.
Additional context