google-gemini / generative-ai-android

The official Android library for the Google Gemini API
https://ai.google.dev/gemini-api/docs/get-started/tutorial?lang=android
Apache License 2.0

Request payload size exceeds the limit: 4194304 bytes. #26

Closed: meet2602 closed this issue 11 months ago

meet2602 commented 11 months ago

[screenshot of the error attached]

private fun imageConvertText(uri: Uri) {
    loadingDialog.show()
    try {
        Log.d("result", "bitmap start")
        val bitmap =
            if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.P) {
                val source = ImageDecoder.createSource(this.contentResolver, uri)
                ImageDecoder.decodeBitmap(source)
            } else {
                @Suppress("DEPRECATION")
                MediaStore.Images.Media.getBitmap(this.contentResolver, uri)
            }
        // This JPEG encoding is only used to log the size; image(bitmap) below
        // sends the Bitmap itself, which the SDK re-encodes separately.
        val baos = ByteArrayOutputStream()
        bitmap.compress(Bitmap.CompressFormat.JPEG, 100, baos)
        val imageBytes = baos.toByteArray()
        Log.d("result", imageBytes.size.toString())
        Log.d("result", "bitmap completed")
        val inputContent = content {
            image(bitmap)
            text("Read text from image")
        }
        Log.d("result", "api start")

        var mainResult = ""
        CoroutineScope(Dispatchers.IO).launch {
            // The try/catch must live inside the coroutine: wrapping launch {}
            // would not catch exceptions thrown while collecting the stream.
            try {
                generativeModel.generateContentStream(inputContent)
                    .collect { response ->
                        val result = response.text.orEmpty().trim()
                        Log.d("result", "api end")
                        runOnUiThread {
                            loadingDialog.dismiss()
                            if (result.isNotEmpty()) {
                                mainResult += result
                                Log.d("result", "set Text")

                                menuBinding.outputImg.setImageBitmap(null)
                                menuBinding.outputImg.setImageBitmap(bitmap)
                                menuBinding.outputImg.visible()
                                menuBinding.edNumber.setText(mainResult)
                            } else {
                                Log.d("result", "No text found in the photo")
                                menuBinding.outputImg.gone()
                                menuBinding.outputImg.setImageBitmap(null)
                                showToast("No text found in the photo")
                            }
                        }
                    }
            } catch (e: Exception) {
                e.printStackTrace()
                runOnUiThread {
                    loadingDialog.dismiss()
                    showToast(e.message.toString())
                }
            }
        }
    } catch (e: Exception) {
        e.printStackTrace()
        loadingDialog.dismiss()
        showToast(e.message.toString())
    }
}

yveskalume commented 11 months ago

This may be caused by the size of your image: the request size limit for gemini-pro-vision is 4 MB.
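
If it helps to rule that out, here is a rough pre-flight check (the 4,194,304-byte figure comes from the error message; quality 80 is an arbitrary choice, not something the SDK requires):

// Sketch only: measure the encoded image before building the prompt.
val baos = ByteArrayOutputStream()
bitmap.compress(Bitmap.CompressFormat.JPEG, 80, baos)
val imageBytes = baos.toByteArray()
if (imageBytes.size > 4_194_304) {
    Log.w("result", "Image too large: ${imageBytes.size} bytes; downscale before sending")
}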

chaneylc commented 11 months ago

I'm running into this as well with images that are smaller than 4 MB. I'm using the gemini-pro-vision model, and it throws this stack trace on inputs under 4 MB: com.google.ai.client.generativeai.type.ServerException: Request payload size exceeds the limit: 4194304 bytes.

I noticed the sample app uses images of 768x768, which does work.

daymxn commented 11 months ago

> I'm running into this as well with images that are smaller than 4 MB. I'm using the gemini-pro-vision model, and it throws this stack trace on inputs under 4 MB: com.google.ai.client.generativeai.type.ServerException: Request payload size exceeds the limit: 4194304 bytes.
>
> I noticed the sample app uses images of 768x768, which does work.

Hmm that's interesting. Would you be able to provide an example image that causes this?

chaneylc commented 11 months ago

So all I'm doing is capturing an image with MediaStore's ACTION_IMAGE_CAPTURE intent, which produces a ~6 MB picture on my test device. I'm also focusing on the popular 4:3 aspect ratios, so I scale the image down to meet the 4 MB limit while keeping the ratio (sketched below).

2048x1536 does not work, even though it is 2.5 MB on the file system.

The largest image I have gotten to work is 1152x864, which is 1.8 MB.

This is all with only one image in the content builder and no text prompt. I have tested with .jpg and .bmp; the JPEG compression call was:

bmp.compress(Bitmap.CompressFormat.JPEG, 80, stream)

Interestingly, if I take a completely black picture, it does process at the higher resolutions (presumably because a uniform frame compresses to almost nothing), and the API does correctly reject payloads larger than 4 MB.
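
A minimal sketch of that downscale-to-fit step, assuming JPEG re-encoding and a simple halving loop (the helper and its defaults are mine, not part of the SDK):

// Halve both dimensions (ratio preserved) until the JPEG encoding fits.
fun scaleToFit(src: Bitmap, maxBytes: Int = 4_194_304, quality: Int = 80): ByteArray {
    var current = src
    while (true) {
        val baos = ByteArrayOutputStream()
        current.compress(Bitmap.CompressFormat.JPEG, quality, baos)
        val bytes = baos.toByteArray()
        if (bytes.size <= maxBytes) return bytes
        current = Bitmap.createScaledBitmap(current, current.width / 2, current.height / 2, true)
    }
}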

davidmotson commented 11 months ago

The ImagePart is provided as a convenience to make sending images easier; it automatically converts the image to PNG at maximum quality. If you are custom-compressing the image to fit a size limit, it may be better to use BlobPart so you can specify and control the encoding and compression yourself.

Also of note: images are Base64-encoded when sent to the server, increasing their size by a further 35%.

However, your example of a file that is 2.5 MB on the file system should work if sent as a BlobPart (since that would be approximately 3.4 MB after encoding).

Your code example can be changed like this:

val baos = ByteArrayOutputStream()
// Encode once yourself and send the bytes directly, so nothing is re-encoded.
bitmap.compress(Bitmap.CompressFormat.JPEG, 100, baos)
val inputContent = content {
  blob("image/jpeg", baos.toByteArray())
  text("Read text from image")
}
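
And if you want to stay under the limit after that Base64 overhead, a rough sketch (the 4/3 factor is standard Base64 math; stepping the JPEG quality down is only an illustration, not SDK behavior):

// Lower JPEG quality until the Base64-encoded size (4 bytes out per 3 in)
// fits under the request limit.
fun encodeUnderLimit(bitmap: Bitmap, limit: Int = 4_194_304): ByteArray {
    var quality = 95
    while (true) {
        val baos = ByteArrayOutputStream()
        bitmap.compress(Bitmap.CompressFormat.JPEG, quality, baos)
        val bytes = baos.toByteArray()
        val base64Size = (bytes.size + 2) / 3 * 4  // ceil(n / 3) * 4
        if (base64Size <= limit || quality <= 10) return bytes
        quality -= 10
    }
}

The returned bytes can then be passed to blob("image/jpeg", ...) exactly as above.
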
chaneylc commented 11 months ago

Thanks for the response, that does answer some questions. In that case, though, I think the Android documentation is a bit misleading: https://ai.google.dev/docs/gemini_api_overview#image_requirements

  1. "Maximum of 4MB for the entire prompt, including images and text" should be maximum of 4MB post-encoding
  2. "Images must be in one of the following image data MIME types:

    PNG - image/png JPEG - image/jpeg WEBP - image/webp HEIC - image/heic HEIF - image/heif"

From what I can find, the documentation doesn't describe BlobPart, and the code documentation doesn't differentiate between BlobPart and ImagePart; it just says 'represents binary data' or 'represents image data'. Is there more documentation on this?