thunlp / LLaVA-UHD

LLaVA-UHD: an LMM Perceiving Any Aspect Ratio and High-Resolution Images

RuntimeError: Given groups=1, weight of size [1024, 3, 14, 14], expected input[16, 9, 336, 336] to have 3 channels, but got 9 channels instead #5

Open piantic opened 3 months ago

piantic commented 3 months ago

First of all, thank you for publishing a good paper. As mentioned in the issues, the benchmark performance is good overall.

Unfortunately, the weights are not public yet, so I am trying to train the model myself. The pretraining stage ran fine.

But there are some issues in the fine-tuning stage: runtime errors keep occurring. RuntimeError: Given groups=1, weight of size [1024, 3, 14, 14], expected input[16, 9, 336, 336] to have 3 channels, but got 9 channels instead
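
For reference, here is a minimal sketch of what the shapes in the error suggest is happening (illustrative only, not the repo's actual code): the CLIP ViT patch-embedding conv expects 3-channel input, so 9 channels looks like 3 slices concatenated along the channel dimension instead of the batch dimension.

```python
import torch
import torch.nn as nn

# Patch embedding of CLIP ViT-L/14-336: Conv2d(3, 1024, kernel_size=14, stride=14),
# i.e. the weight of size [1024, 3, 14, 14] named in the error.
patch_embed = nn.Conv2d(3, 1024, kernel_size=14, stride=14)

# If each sample's 3 slices (3 channels each) end up concatenated along the
# channel dimension, the batch becomes [16, 9, 336, 336] and conv2d fails:
bad_input = torch.randn(16, 9, 336, 336)
# patch_embed(bad_input)  # RuntimeError: ... expected 3 channels, but got 9

# Folding the slices into the batch dimension instead (assuming the 9 channels
# are 3 slices x RGB, in order) gives a valid input:
good_input = bad_input.view(16 * 3, 3, 336, 336)  # [48, 3, 336, 336]
features = patch_embed(good_input)                 # [48, 1024, 24, 24]
```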

I also checked the loss, and it stays at 0.0: {'loss': 0.0, 'learning_rate': 1.6279069767441862e-06, 'epoch': 0.0}

I suspected your slice_logic and noticed that its output looked unusual, but other issues say that is normal, so I don't think it is the problem.

Could you please give me some advice on this?

gordonhu608 commented 3 months ago

I got this runtime error too: "RuntimeError: Given groups=1, weight of size [1024, 3, 14, 14], expected input[2, 9, 336, 336] to have 3 channels, but got 9 channels instead". Has this been solved?

piantic commented 3 months ago

@BubvieyKevin Thank you. Let's wait until the code is ready again.

guozonghao96 commented 3 months ago

Thank you for identifying some issues with our code. We have also noticed the same problems and are currently working on resolving them.

gordonhu608 commented 3 months ago

Thanks to all the authors for this great work. How is the progress on addressing this issue?

xrorrim commented 2 months ago

Thanks for reporting this problem; we have fixed it in the latest version of the code.

piantic commented 2 months ago

Thanks a lot. We will test it again.

gordonhu608 commented 2 months ago

I just tested the code again and still got this error: RuntimeError: Given groups=1, weight of size [1024, 3, 14, 14], expected input[4, 15, 336, 336] to have 3 channels, but got 15 channels instead. Does this problem also happen to other people?

lucasjinreal commented 2 months ago

I am not able to train either.

However, I still don't quite understand the code: the process_image part already resizes every single image to 336 resolution, so why does it still interpolate in the ViT?

Does anyone know about this part?
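
In case it helps: as far as I can tell, the interpolation concerns the ViT's 2-D positional embeddings rather than the pixels. Even when every slice is resized to fit within 336, a variable-aspect-ratio slice can produce a patch grid different from the pretrained 24x24, so the positional embeddings have to be resampled to match. A minimal sketch of that idea (illustrative, not the repo's exact code):

```python
import torch
import torch.nn.functional as F

def interpolate_pos_embed(pos_embed: torch.Tensor, new_h: int, new_w: int) -> torch.Tensor:
    """Resample ViT positional embeddings to a new patch grid.

    pos_embed: [1, 1 + 24*24, dim] -- CLS embedding + 24x24 grid (ViT-L/14 @ 336).
    Returns:   [1, 1 + new_h*new_w, dim]
    """
    cls_pos, grid_pos = pos_embed[:, :1], pos_embed[:, 1:]
    dim = grid_pos.shape[-1]
    old = int(grid_pos.shape[1] ** 0.5)                       # 24
    grid_pos = grid_pos.reshape(1, old, old, dim).permute(0, 3, 1, 2)
    grid_pos = F.interpolate(grid_pos, size=(new_h, new_w),
                             mode="bicubic", align_corners=False)
    grid_pos = grid_pos.permute(0, 2, 3, 1).reshape(1, new_h * new_w, dim)
    return torch.cat([cls_pos, grid_pos], dim=1)

# e.g. a 336x224 slice -> 24x16 patch grid
new_pos = interpolate_pos_embed(torch.randn(1, 1 + 24 * 24, 1024), 24, 16)
```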

zyddnys commented 2 months ago

Change https://github.com/thunlp/LLaVA-UHD/blob/main/llava_uhd/train/llava-uhd/train.py#L766 to

```python
if all(x is not None and x.shape == images[0].shape for x in images) and False:
```
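
For context, the `and False` simply forces the collator to always keep the images as a list instead of stacking them into one tensor, which sidesteps the shape mismatch (though it may just hide the real issue upstream). The branch in question follows the standard LLaVA-style pattern, roughly (a sketch, not the exact code at that line):

```python
import torch

def collate_images(images):
    # Stack into a single [B, C, H, W] tensor only when every image has the
    # same shape; otherwise hand the model a list of per-sample tensors.
    if all(x is not None and x.shape == images[0].shape for x in images):
        return torch.stack(images)
    return images

# Appending `and False` to the condition always takes the list branch.
```
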
gordonhu608 commented 2 months ago

> Change https://github.com/thunlp/LLaVA-UHD/blob/main/llava_uhd/train/llava-uhd/train.py#L766 to
> `if all(x is not None and x.shape == images[0].shape for x in images) and False:`

Does this change fix the training? And how are the training results of replicating LLaVA-UHD?

YFCYFC commented 2 months ago

https://github.com/thunlp/LLaVA-UHD/blob/main/llava_uhd/train/llava-uhd/train.py#L766

No, this change does not fix the bug; I still hit the same error.

ParadoxZW commented 3 weeks ago

Hi, guys @piantic @zyddnys @lucasjinreal @YFCYFC @gordonhu608 @guozonghao96

I've released another implementation of LLaVA-UHD here, which I believe is more stable and elegant. The code of the new repo originates from this one, but its overall quality is improved, and the training program has been tested to run normally without bugs.

When I reviewed this old repo and tried to fix this RuntimeError, I found that it contains a lot of hidden bugs and calculations with incorrect logic (violating the spirit of the original paper), and that it misses some necessary processing steps (such as image normalization). So I decided to rewrite the code and do my best to fix all these issues. I have now open-sourced my rewritten version.
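
For reference, by image normalization I mean the standard CLIP preprocessing that the vision tower expects; roughly (a minimal sketch, not code copied from either repo):

```python
import torch
from torchvision import transforms

# Standard OpenAI CLIP preprocessing constants (ViT-L/14-336).
CLIP_MEAN = (0.48145466, 0.4578275, 0.40821073)
CLIP_STD = (0.26862954, 0.26130258, 0.27577711)

preprocess = transforms.Compose([
    transforms.ToTensor(),                      # HWC uint8 [0, 255] -> CHW float [0, 1]
    transforms.Normalize(CLIP_MEAN, CLIP_STD),  # per-channel (x - mean) / std
])

# Each image slice should pass through this before the vision tower, e.g.:
# pixel_values = preprocess(pil_slice)
```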

You are very welcome to use it, and I look forward to your feedback.