Open zucchini-nlp opened 4 months ago
Sorry, not resolved. The issue is that in LLaVa-VL, during pre-processing, the image size is assumed to be (width, height), which aligns with another assumption that the best resolutions are also (width, height). But then in the modeling code, the size of an image embedding is assumed to be height-first, causing confusion and incorrect unpadding.
```python
image_feature = image_feature.view(num_patch_height, num_patch_width, height, width, -1)
```
So, please let me know whether this is a bug or intended behavior :)
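To make the convention mismatch concrete, here is a hypothetical minimal sketch (not the actual pre-processing or modeling code; the 672x336 resolution and 336 patch size are just assumed example values). The grid shape is produced width-first but then consumed height-first, so the two values are effectively swapped for any non-square grid:

```python
# Hypothetical example values, not taken from the library code.
best_resolution = (672, 336)   # pre-processing side: best resolution as (width, height)
patch_size = 336               # side length of one vision-tower tile

# Grid of 336x336 tiles, still width-first at this point: (2, 1).
grid = (best_resolution[0] // patch_size, best_resolution[1] // patch_size)

# Modeling side: the same pair is unpacked as if it were height-first,
# so a 1-row x 2-column grid is treated as 2 rows x 1 column.
num_patch_height, num_patch_width = grid
print(num_patch_height, num_patch_width)   # 2 1
```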
Hello LLaVa-NeXT team!
I want to clarify some points about the AnyRes technique and how the image feature is unpadded in the modeling forward pass.
As this issue shows, it seems the `get_anyres_image_grid_shape` function returns height and width swapped, which in turn results in the shortest edge after unpadding being less than 24.

For more info, below is the code in `transformers` that we adopted from your repo. The function `get_anyres_image_grid_shape` returns height as the first element, but later it is mapped to `num_patch_width`. In this scenario, if we have an image with a 2:1 aspect ratio, the final `image_feature` after unpadding ends up with size `(4096, 24, something smaller than 24)`, which seems incorrect. The question is: is this a bug or intended behavior?

Btw, I tried inference both ways, by swapping height and width back. I didn't see an actual difference, as the chosen images were probably too easy and the model was attending more to the base-image features.
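To make the 2:1 scenario concrete, here is a minimal, self-contained sketch. It is not the actual `transformers` or LLaVA-NeXT code: the 672x336 best resolution, the 24x24 tokens per tile, the 4096 hidden size, and the simplified `assemble`/`unpad` helpers are all assumptions for illustration, and which spatial edge ends up short in practice depends on which (width, height) convention the real unpad step assumes for the original image size. The point is only that a transposed tile grid makes the unpadding crop along the wrong axis, so one edge drops below 24:

```python
import torch

hidden_dim, tokens_per_side = 4096, 24          # assumed: 336px tiles -> 24x24 tokens
orig_width, orig_height = 672, 336              # assumed: 2:1 landscape image

# Two tiles' worth of features, flattened as (num_tiles, tokens, hidden).
tiles = torch.randn(2, tokens_per_side * tokens_per_side, hidden_dim)

def assemble(feature, n_h, n_w):
    """Lay the tile features out as a (hidden, H, W) map, mirroring the view() call."""
    f = feature.view(n_h, n_w, tokens_per_side, tokens_per_side, -1)
    f = f.permute(4, 0, 2, 1, 3).contiguous()
    return f.flatten(1, 2).flatten(2, 3)

def unpad(feature_map, width, height):
    """Simplified unpadding: crop the axis that was padded back to the image's aspect ratio."""
    _, cur_h, cur_w = feature_map.shape
    if width / height > cur_w / cur_h:          # image is wider than the feature map
        new_h = round(height * cur_w / width)
        pad = (cur_h - new_h) // 2
        return feature_map[:, pad: cur_h - pad, :]
    new_w = round(width * cur_h / height)
    pad = (cur_w - new_w) // 2
    return feature_map[:, :, pad: cur_w - pad]

# Correct grid for a 672x336 best resolution: 1 tile tall, 2 tiles wide.
correct = unpad(assemble(tiles, 1, 2), orig_width, orig_height)
print(correct.shape)                            # torch.Size([4096, 24, 48])

# Swapped grid (2 tall, 1 wide): the map is transposed, so unpadding crops
# the wrong axis and the shortest edge drops below 24.
swapped = unpad(assemble(tiles, 2, 1), orig_width, orig_height)
print(swapped.shape)                            # torch.Size([4096, 12, 24])
```

With the grid passed in the correct order the unpad is a no-op for this aspect ratio; with the swapped order the cropped edge comes out at 12 tokens in this sketch instead of the expected 24.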