Superhzf opened this issue 4 years ago
Has this issue been resolved? I am also facing the exact same issue.
@Superhzf were you able to resolve this issue?
@leninkumar-sv-tiger unfortunately, no.
I've run into similar issues with this health check problem using a custom container, and I solved it with the following changes.
When running an inference pipeline, SageMaker requires your server to listen on a port other than the default 8080; that port is provided in the SAGEMAKER_BIND_TO_PORT environment variable.
So you will need to do something like:
sm_bind_to_port = os.environ.get('SAGEMAKER_BIND_TO_PORT', '8080')
and then, when you create your server app, specify that port, for instance:
gunicorn -b '0.0.0.0:{sm_bind_to_port}' ...
I'm not super familiar with how your container handles the ports, but at least in my case, that's what fixed it.
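To make this concrete, here is a minimal sketch of a container `serve` entrypoint that respects SAGEMAKER_BIND_TO_PORT; the `app:app` module path and the worker count are assumptions for illustration, not part of the original setup:

```python
#!/usr/bin/env python
# serve: SageMaker runs this when the container starts in serving mode.
import os


def main():
    # SageMaker sets SAGEMAKER_BIND_TO_PORT for containers running inside an
    # inference pipeline; a standalone endpoint falls back to the default 8080.
    port = os.environ.get('SAGEMAKER_BIND_TO_PORT', '8080')

    # Replace this process with gunicorn bound to the requested port.
    # 'app:app' (a WSGI app exposing /ping and /invocations) is a hypothetical
    # module path; point it at your own server module.
    os.execvp('gunicorn', [
        'gunicorn',
        '--bind', '0.0.0.0:{}'.format(port),
        '--workers', '2',
        'app:app',
    ])


if __name__ == '__main__':
    main()
```

With this in place, the same image works both as a standalone endpoint and as a stage in an inference pipeline, since the bind port is resolved at startup rather than hardcoded.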
Hi, I'm trying to deploy a custom model pipeline using sagemaker.pipeline.PipelineModel. The pipeline model includes two parts, raw data preprocessing and inference. I use the built-in sklearn container for preprocessing and a custom LightGBM container to train the model.
The lightgbm container is created following this notebook: https://github.com/awslabs/amazon-sagemaker-examples/tree/master/advanced_functionality/scikit_bring_your_own
The raw input data preprocessing is created following this one: https://aws.amazon.com/blogs/machine-learning/preprocess-input-data-before-making-predictions-using-amazon-sagemaker-inference-pipelines-and-scikit-learn/
Below is the sample code:
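A minimal sketch of such a two-step deployment, with hypothetical S3 artifact paths, ECR image URI, entry-point name, and endpoint names standing in for the real values:

```python
import sagemaker
from sagemaker.model import Model
from sagemaker.pipeline import PipelineModel
from sagemaker.sklearn.model import SKLearnModel

role = sagemaker.get_execution_role()

# Step 1: preprocessing model served by the built-in sklearn container.
raw_data_preprocess_inferencee_model = SKLearnModel(
    model_data='s3://my-bucket/preprocess/model.tar.gz',  # hypothetical path
    role=role,
    entry_point='preprocess.py',  # hypothetical script name
    framework_version='0.23-1',
)

# Step 2: inference model served by the custom LightGBM container.
lightgbm_model = Model(
    image_uri='<account>.dkr.ecr.<region>.amazonaws.com/lightgbm:latest',  # hypothetical
    model_data='s3://my-bucket/lightgbm/model.tar.gz',  # hypothetical path
    role=role,
)

# Chain the two containers into a single inference pipeline endpoint.
pipeline_model = PipelineModel(
    name='preprocess-lightgbm-pipeline',
    role=role,
    models=[raw_data_preprocess_inferencee_model, lightgbm_model],
)

pipeline_model.deploy(
    initial_instance_count=1,
    instance_type='ml.m5.large',
    endpoint_name='preprocess-lightgbm-endpoint',  # hypothetical name
)
```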
Error message:
What I did to figure out the problem:
Please let me know what else you need from me to figure out the problem.
Update: I can deploy raw_data_preprocess_inferencee_model and lightgbm_model to two separate endpoints without problems.