aws / sagemaker-inference-toolkit

Serve machine learning models within a 🐳 Docker container using 🧠 Amazon SageMaker.
Apache License 2.0

Enhance UX for inference #64

Open ehsanmok opened 3 years ago

ehsanmok commented 3 years ago

The current SageMaker module-wrapping process makes debugging very hard for both training and deployment. For inference, a decorator is the simplest kind of solution. For example, instead of requiring users to provide model_fn (which currently takes only one argument, model_dir, so a model that needs more arguments to initialize leaves users frustrated), we could have a decorator like

```python
@sagemaker.model_fn
def foo(*args, **kwargs):  # one argument should be named model_dir, for example
    ...
```

(The same goes for transform_fn, input_fn, and output_fn.) Then the decorator, when applied, looks for model_dir and everything else follows.
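To make the proposal concrete, here is a minimal sketch of what such a decorator could look like. This is not the toolkit's API; `model_fn`, the `_sagemaker_hook` attribute, and `load_model` are hypothetical names used only for illustration. The idea is that the decorator inspects the wrapped function's signature for `model_dir` at decoration time, so misconfiguration fails early, while extra initialization arguments remain allowed:

```python
import functools
import inspect


def model_fn(func):
    """Hypothetical decorator registering `func` as the model-loading hook.

    Verifies at decoration time that the wrapped function accepts a
    `model_dir` argument (directly or via **kwargs), so a misnamed
    signature fails immediately rather than at inference time.
    """
    params = inspect.signature(func).parameters
    accepts_model_dir = "model_dir" in params or any(
        p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values()
    )
    if not accepts_model_dir:
        raise TypeError(f"{func.__name__} must accept a 'model_dir' argument")

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)

    # A real toolkit would register the hook somewhere discoverable;
    # here we just tag the wrapper for demonstration.
    wrapper._sagemaker_hook = "model_fn"
    return wrapper


@model_fn
def load_model(model_dir, device="cpu"):
    # Extra arguments beyond model_dir are allowed, unlike the current
    # single-argument model_fn contract described above.
    return f"loaded from {model_dir} on {device}"
```

The same pattern would extend naturally to `transform_fn`, `input_fn`, and `output_fn`, each decorator checking for the arguments its hook requires.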