Azure Container Instances let you spin up a container workload by defining only its memory and CPU requirements. It would be great if Batch AI supported the same model, removing the idea of having a cluster at all.
You would deploy a job, declare its memory, CPU and GPU (or more generally, machine) requirements, and have the underlying compute managed for you, letting the data scientist/developer focus on the job itself.
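For comparison, this is roughly what the ACI experience looks like today with the Azure CLI: you state the resource requirements inline and no cluster is ever created (resource group, container name and image below are placeholders; this needs an authenticated `az` session to actually run):

```shell
# Spin up a single container, declaring only the resources it needs.
# --cpu / --memory are the core ACI flags; --gpu-count / --gpu-sku
# exist for GPU workloads on supported SKUs.
az container create \
  --resource-group my-rg \
  --name my-training-job \
  --image myregistry.azurecr.io/train:latest \
  --cpu 2 \
  --memory 4 \
  --gpu-count 1 \
  --gpu-sku K80
```

The ask is essentially that a Batch AI job submission could carry this kind of resource declaration itself, instead of referencing a pre-provisioned cluster.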
Looking into it a bit, this seems similar to how Google runs their ML Engine jobs by defining a scale tier, although I much prefer Batch AI's method of using custom containers over ML Engine's runtime versions to actually run the jobs 😄