Open lukasmartinelli opened 6 years ago
Adding a use case here (cc @vsmart): loading large self-contained machine learning models once and using ecs-watchbot to scale it out on cpu workers running it on a large amount of images. The models are read-only and will always be the same. At the moment we simply use a large batch size per worker to amortize the model downloading on each worker; but this limits scale out.
I second that use case. I've hit it before with NLP models too, which can take several minutes to download 👍
Is there a narrative for containers that would like to load a big dataset ahead of time and then operate in turbo mode, persisting that data across job invocations?
In other words, amortizing a long startup time over multiple job invocations?
This applies to applications that need to load a model or a graph or a database in order to execute the job and want to keep that in memory or on disk in between runs.
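A minimal sketch of the amortization idea above, assuming a worker process that stays alive across job invocations. `load_model` and `process_job` are hypothetical names (not watchbot APIs); the point is that the expensive load happens once per process, not once per job:

```python
import functools

# Hypothetical worker-side cache: the read-only model is loaded once per
# process and reused by every subsequent job invocation in that process.
@functools.lru_cache(maxsize=1)
def load_model(model_id):
    # Stand-in for a slow download/deserialize step (e.g. pulling from S3).
    return {"id": model_id, "weights": [0.1, 0.2, 0.3]}

def process_job(model_id, image):
    model = load_model(model_id)  # cache hit after the first call
    return len(model["weights"]) * len(image)

# Simulate several job invocations inside one long-lived worker:
results = [process_job("nlp-v1", "img-%d" % i) for i in range(3)]
print(load_model.cache_info().misses)  # the model was loaded only once
```

The same pattern applies whether the cache lives in memory (as here) or on a shared disk volume; the key design choice is keeping the worker process (or its volume) alive between jobs instead of tearing it down after each one.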
Options off the top of my head:
/cc @rclark @jakepruitt