aws / amazon-sagemaker-examples

Example 📓 Jupyter notebooks that demonstrate how to build, train, and deploy machine learning models using 🧠 Amazon SageMaker.
https://sagemaker-examples.readthedocs.io
Apache License 2.0

Getting error while invoking sagemaker endpoint #245

Closed Harathi123 closed 6 years ago

Harathi123 commented 6 years ago

I created a training job in SageMaker with my own training and inference code using the MXNet framework. I am able to train the model successfully and have created an endpoint as well. But when invoking the endpoint, I get the following error: 'ClientError: An error occurred (413) when calling the InvokeEndpoint operation: HTTP content length exceeded 5246976 bytes.' From my research, I understand the error is due to the size of the image. The image shape is (480, 512, 3), and I trained the model with images of the same shape (480, 512, 3).

When I resized the image to (240, 256), the error went away, but it produced another error, 'shape inconsistent in convolution', since I trained the model with images of size (480, 512).

I don't understand why I am getting this error during inference. Can't we use larger images when invoking the model? Any suggestions would be helpful.

Thanks, Harathi

djarpin commented 6 years ago

Thanks @Harathi123 . Payloads for SageMaker InvokeEndpoint requests are limited to about 5MB. So if you're storing the pixel values as 8-byte floats, then 480 * 512 * 3 * 8 = 5,898,240 bytes, which is larger than this 5MB payload limit.

One option for doing inference on larger images might be to pass an S3 path in your InvokeEndpoint request and then write your scoring logic to download the image stored at that S3 path before doing inference.

There may be other ways to get around this, like compressing the image before sending and then decompressing within the container before inference, but these may be very use case specific.
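For illustration, a rough sketch of the compression idea (hypothetical: it assumes the inference container is changed to accept raw JPEG bytes rather than a JSON array, and the endpoint name is a placeholder):

    # client side: send the image as compressed JPEG bytes instead of a JSON array of floats
    import boto3
    import cv2

    img = cv2.imread('image.png')                    # (480, 512, 3) uint8
    ok, jpeg_bytes = cv2.imencode('.jpg', img)       # typically well under 1MB
    runtime = boto3.client('sagemaker-runtime')
    response = runtime.invoke_endpoint(
        EndpointName='my-endpoint',                  # placeholder endpoint name
        ContentType='application/x-image',
        Body=jpeg_bytes.tobytes(),
    )

    # container side (inside transform_fn): decode the bytes back into an array, e.g.
    #   img = cv2.imdecode(np.frombuffer(data, np.uint8), cv2.IMREAD_COLOR)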

Harathi123 commented 6 years ago

Hi @djarpin, thanks for the suggestions. This is my transform function:

    import json
    from mxnet import nd

    def transform_fn(net, data, input_content_type, output_content_type):
        # request body is a JSON-serialized nested list of pixel values
        image = json.loads(data)
        nda = nd.array(image)
        prediction = net(nda)
        # decode() is my own post-processing function defined elsewhere in the script
        response_body = json.dumps(decode(prediction.asnumpy()))
        return response_body, output_content_type

This is how I am invoking the endpoint. I am passing a NumPy array of the image.

    img = cv2.imread('image.png')                  # (480, 512, 3)
    img = img.transpose(2, 0, 1)[None, :, :, :]    # channels first, add batch dim -> (1, 3, 480, 512)
    img = img.astype('float32') / 255
    pred = predictor.predict(img)

Can I pass an S3 path to the invoke endpoint request like this?

    pred = predictor.predict(' .....S3 path......')

Thanks, Harathi

andremoeller commented 6 years ago

Hi @Harathi123 ,

You could possibly pass in a dictionary, like

{ 's3_path' : 's3://my-bucket/my-key' }

And then, in your transform function, retrieve the value of s3_path, download that file from S3, and predict on it.
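A minimal sketch of what that could look like inside transform_fn (hypothetical bucket/key parsing; it assumes boto3 and OpenCV are available in the serving container):

    import json
    import boto3
    import cv2
    import numpy as np
    from mxnet import nd

    s3 = boto3.client('s3')

    def transform_fn(net, data, input_content_type, output_content_type):
        # request body: {"s3_path": "s3://my-bucket/my-key"}
        s3_path = json.loads(data)['s3_path']
        bucket, key = s3_path.replace('s3://', '').split('/', 1)
        obj = s3.get_object(Bucket=bucket, Key=key)
        # decode the downloaded image bytes and preprocess the same way as at training time
        img = cv2.imdecode(np.frombuffer(obj['Body'].read(), np.uint8), cv2.IMREAD_COLOR)
        nda = nd.array(img.transpose(2, 0, 1)[None].astype('float32') / 255)
        prediction = net(nda)
        return json.dumps(prediction.asnumpy().tolist()), output_content_type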

But it seems to me like the image you're invoking with should be small enough since you're using float32 dtype now. Could you tell us what the value of img.nbytes is before predicting with img, and if InvokeEndpoint still says your payload is too large, could you post the stacktrace?

Thanks!

austinmw commented 5 years ago

Hi @djarpin, I could really use your help if possible. Is this 5MB a hard limit that is unaffected by how I change nginx.conf client_max_body_size? What is the limit exactly and where can I find more information about this? Is there any way to increase the limit? It seems very low and is causing a lot of pain and frustration in integrating the endpoint into a production pipeline. My team is currently evaluating these endpoints and this issue is a big one for us.

djarpin commented 5 years ago

Hi @austinmw , Yes, the 5MB is a hard limit imposed by the SageMaker platform as documented here.

Typically, exceeding the 5MB limit is caused by:

  1. Sending too many small records in a single request to a live endpoint, in which case batch transform could be used instead.
  2. Having very large single records (e.g. videos or high-resolution images), in which case storing the file in S3, sending the S3 path, and having the container pick up the S3 object based on that path is a common workaround.

Thanks.

austinmw commented 5 years ago

@djarpin Thanks for your reply. I have a lot of high-res images to process, and pulling them from S3 seems very inefficient, especially if they aren't originally coming from S3 and I have to both upload and download each one. How do people typically handle large images in SageMaker?

austinmw commented 5 years ago

@djarpin Hi, also, after testing, I believe the max payload size is 5 MiB, not 5 MB.

dorg-jmiller commented 5 years ago

If you're using an nginx server as part of your custom Docker image, you may need to change the value of client_max_body_size within your nginx.conf file.

I set client_max_body_size to 0, which allows for an unlimited body size.
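For example, the relevant directive inside the http (or server) block of nginx.conf looks like this (the rest of the file is omitted):

    http {
        # 0 disables nginx's own request-body size check entirely
        client_max_body_size 0;
    }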

austinmw commented 5 years ago

@dorg-jmiller I tried that, but was still running into the 5MiB limit. Have you been able to send a large payload (for ex. 10 MB) by modifying client_max_body_size? AWS phone support told me that 5 MiB was a hard limit regardless, but maybe they were wrong.

Modifying my SavedModel to accept JSON-serialized, base64-encoded strings did significantly reduce the size of the tensors I'm sending, though, so this 5 MiB limit is now not as big of an issue (although still a bit of a pain). Without it, I hit the limit with tensors larger than (5, 128, 128, 3); now I can send up to about (2500, 128, 128, 3).
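For anyone trying something similar, a rough sketch of the client-side serialization (it assumes the model graph itself decodes base64/JPEG input; the request layout is a placeholder):

    import base64
    import json

    import cv2
    import numpy as np

    frames = np.random.randint(0, 255, (8, 128, 128, 3), dtype=np.uint8)  # example batch

    # compress each frame to JPEG, then base64-encode so the bytes survive JSON serialization
    encoded = [
        base64.b64encode(cv2.imencode('.jpg', frame)[1].tobytes()).decode('utf-8')
        for frame in frames
    ]
    payload = json.dumps({'instances': encoded})  # the exact request shape is model-specific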

dorg-jmiller commented 5 years ago

Ah sorry, I missed above that you had already modified this limit in nginx.conf. I'm working with text and not images, so I was only running into the size limit when SageMaker would send data in 6 MB batches (the default).

Sorry again if I'm missing what was discussed above, but is the MaxPayloadInMB parameter when creating a batch transform job not what you want?
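For reference, a sketch of setting that parameter through the SageMaker Python SDK (model name, S3 paths, and instance type are placeholders):

    from sagemaker.transformer import Transformer

    transformer = Transformer(
        model_name='my-model',                  # an existing SageMaker model
        instance_count=1,
        instance_type='ml.m5.xlarge',
        max_payload=100,                        # MaxPayloadInMB: per-request payload cap for batch transform
        output_path='s3://my-bucket/batch-output/',
    )
    transformer.transform(
        data='s3://my-bucket/batch-input/',
        content_type='application/x-image',
    )
    transformer.wait()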

austinmw commented 5 years ago

@dorg-jmiller I think the 5 MiB mentioned doesn't affect batch transform jobs, only live HTTP endpoints. I should probably experiment with ways to take advantage of batch transform jobs more often, but currently I need real-time inference from stood-up endpoints.

Going from the JSON-serialized list of NumPy arrays to the JSON-serialized base64-encoded strings helped a lot. Now I'd like to try switching from RESTful TF Serving to gRPC so I don't need to JSON-serialize at all. Hopefully it won't be too big of a pain to figure out.

dorg-jmiller commented 5 years ago

Gotcha, that makes sense. From the little bit I know, batch transform won't suffice when you need real time inference.

tf401 commented 5 years ago

@austinmw

I've run into the same problem as you: a NumPy array of shape (3, 218, 525, 3) reaches the limit with my current serialization.

I'm really keen to know in more detail how you serialized your data. My best try so far is the following (frames is a NumPy array with the shape above):


    import json
    import base64

    # base64-encode the raw array bytes, then JSON-serialize the dtype, data, and shape
    b = base64.b64encode(frames).decode('utf-8')
    r = json.dumps([str(frames.dtype), b, frames.shape])

but it's nowhere near your results.

Thanks!

SaschaHeyer commented 2 years ago

A more up-to-date answer: Use AWS SageMaker Async Inference https://docs.aws.amazon.com/sagemaker/latest/dg/async-inference.html

Amazon SageMaker Asynchronous Inference is a new capability in SageMaker that queues incoming requests and processes them asynchronously. This option is ideal for requests with large payload sizes (up to 1GB), long processing times (up to 15 minutes), and near real-time latency requirements. Asynchronous Inference enables you to save on costs by autoscaling the instance count to zero when there are no requests to process, so you only pay when your endpoint is processing requests.
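A rough sketch of what that looks like with the SageMaker Python SDK (the model object, bucket, and instance type are placeholders):

    from sagemaker.async_inference import AsyncInferenceConfig

    # deploy an existing SageMaker Model object as an asynchronous endpoint
    async_config = AsyncInferenceConfig(
        output_path='s3://my-bucket/async-output/',   # where responses are written
    )
    predictor = model.deploy(
        initial_instance_count=1,
        instance_type='ml.m5.xlarge',
        async_inference_config=async_config,
    )

    # requests reference the input object in S3 instead of carrying the payload inline,
    # which is how the larger (up to 1GB) payloads are supported
    response = predictor.predict_async(input_path='s3://my-bucket/async-input/image.json')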