Closed — parano closed this issue 5 years ago
Similar to the SageMaker and Serverless deployments BentoML currently provides, add support for the Heroku platform.
Add support for easily creating and configuring ML services that serve multiple machine learning models.
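A multi-model service could be sketched in plain Python along these lines. This is an illustrative sketch only — the class and method names below are hypothetical and are not BentoML's actual API:

```python
class MultiModelService:
    """Hypothetical service bundling several trained models behind one API."""

    def __init__(self):
        self._models = {}

    def register(self, name, model):
        # model: any object exposing a predict(inputs) method
        self._models[name] = model

    def predict(self, name, inputs):
        # dispatch the request to the named model
        if name not in self._models:
            raise KeyError(f"unknown model: {name!r}")
        return self._models[name].predict(inputs)


class DoubleModel:
    """Stand-in for a real trained model."""

    def predict(self, inputs):
        return [x * 2 for x in inputs]


service = MultiModelService()
service.register("doubler", DoubleModel())
print(service.predict("doubler", [1, 2, 3]))  # → [2, 4, 6]
```

The key design point is a single service object that owns a registry of named models, so one deployment can route requests to any of them.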
Add support for deploying from a Kubeflow project's training workflow.
A stateful server that tracks all of your desired deployment state, deployment history, and event logs. It lets users interact via CLI, API, and web UI, and talks to cloud platforms or a Kubernetes cluster to schedule deployments.
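The core of such a server is a store that records desired state, history, and events. A minimal sketch, assuming an in-memory store (this is illustrative and not BentoML's implementation):

```python
import datetime


class DeploymentStore:
    """Tracks desired deployment specs, their history, and an event log."""

    def __init__(self):
        self.desired = {}   # deployment name -> latest desired spec
        self.history = []   # every (name, spec) ever applied, in order
        self.events = []    # human-readable event log entries

    def apply(self, name, spec):
        # record the new desired state; a real server would then reconcile
        # it against the cloud platform or Kubernetes cluster
        self.desired[name] = spec
        self.history.append((name, spec))
        stamp = datetime.datetime.utcnow().isoformat()
        self.events.append(f"{stamp} applied deployment {name!r}")

    def current(self, name):
        return self.desired.get(name)


store = DeploymentStore()
store.apply("my-service", {"platform": "sagemaker", "replicas": 2})
print(store.current("my-service"))  # → {'platform': 'sagemaker', 'replicas': 2}
```

Keeping history and events separate from the desired-state map is what makes rollbacks and audit views cheap to add later.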
Currently, BentoML-generated docker images are not compatible with GPU environments. We are adding support for generating images that can utilize GPUs when serving a model.
Use tf-serving as the TensorFlow model backend: the BentoML API server handles the REST API, request parsing, and preprocessing, then sends a gRPC request to tf-serving for inference, with or without a GPU.
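The REST-to-gRPC proxy pattern described above can be sketched like this. The `grpc_infer` callable stands in for a real TensorFlow Serving `PredictionService` stub; all names here are illustrative assumptions, not BentoML's actual code:

```python
import json


def handle_request(raw_body, grpc_infer):
    """Parse a REST request body, preprocess it, and forward to a backend.

    grpc_infer: callable standing in for a tf-serving gRPC stub; in a real
    deployment this would build a PredictRequest and call the stub.
    """
    payload = json.loads(raw_body)                        # request parsing
    features = [float(x) for x in payload["instances"]]   # preprocessing
    outputs = grpc_infer(features)                        # backend inference
    return json.dumps({"predictions": outputs})


def fake_backend(features):
    # stand-in for the tf-serving call, for demonstration only
    return [f + 1.0 for f in features]


print(handle_request('{"instances": [1, 2]}', fake_backend))
# → {"predictions": [2.0, 3.0]}
```

The split keeps HTTP concerns in the API server while tf-serving focuses purely on running the model graph, on CPU or GPU.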
Closing in favor of the roadmap section in the upcoming BentoML guides
Hi @parano, you mentioned the roadmap section, but I couldn't find it in the official documentation. Could you please provide a link to your official roadmap? Thank you.
This is a living thread giving an overview of planned BentoML features on our roadmap — we would love to hear your feedback. Join the discussion in our Slack channel here: http://bit.ly/2N5IpbB