Describe the feature you'd like
The feature I am proposing is some sort of warm pool (similar to the one available for estimators) or a scheduled run that keeps infrastructure up, so that a processing job takes only as long as the script runtime. I am requesting this because my 30-second script takes 8 min 17 s in total due to infrastructure allocation.
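For reference, a hedged sketch of the existing warm-pool support on the training side that this request asks to mirror for processing jobs; the image URI, role, and instance settings below are placeholders, not values from this issue:

```python
# Sketch: the training-side warm pool is enabled on the Estimator via
# keep_alive_period_in_seconds. All identifiers here are placeholders.
from sagemaker.estimator import Estimator

estimator = Estimator(
    image_uri="<training-image-uri>",
    role="<execution-role-arn>",
    instance_count=1,
    instance_type="ml.m5.xlarge",
    # Keeps the provisioned instance alive after the job finishes, so a
    # follow-up job with a matching configuration can skip infra allocation.
    keep_alive_period_in_seconds=1800,
)
```

The request is essentially for an equivalent knob on the Processing job classes, which currently have no such parameter.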
How would this feature be used? Please describe.
This feature would be used to cut down on total processing-job time and reduce latency.
Describe alternatives you've considered
Alternatives I have considered are SageMaker Notebook Jobs and a Lambda container.
Additional context
My processing job would read in a file, then process and index it (create vector embeddings and add them to a docstore) using the library of my choice (LangChain, Haystack, etc.).
Did you consider Local Mode? We use this feature for local prototyping and it works well, except for some limitations in the context of Pipeline and Experiment integration, e.g. #4114.
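For completeness, a hedged sketch of running a processing script under Local Mode; the image URI, role, and script name are placeholders:

```python
# Sketch: running a processing script in Local Mode, so it executes in a
# container on the local machine and skips remote infra allocation.
# All identifiers below are placeholders.
from sagemaker.local import LocalSession
from sagemaker.processing import ScriptProcessor

session = LocalSession()
session.config = {"local": {"local_code": True}}  # use local source code

processor = ScriptProcessor(
    image_uri="<processing-image-uri>",
    command=["python3"],
    role="<execution-role-arn>",
    instance_count=1,
    instance_type="local",  # "local" selects Local Mode
    sagemaker_session=session,
)
# processor.run(code="process_and_index.py")
```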