joeyz0z / ConZIC

Official implementation of "ConZIC: Controllable Zero-shot Image Captioning by Sampling-Based Polishing"
MIT License

About batch_size #4

Open Rouchestand opened 1 year ago

Rouchestand commented 1 year ago

I set caption_img_path to ./examples and batch_size to 2. Running run.py results in the following error:

```
Traceback (most recent call last):
  File "run.py", line 162, in <module>
    run_control(run_type, args, img_dir, lm_model, lm_tokenizer, clip, token_mask)
  File "run.py", line 87, in run_control
    gen_texts, clip_scores = control_generate_caption(lm_model, clip, lm_tokenizer, image_instance, token_mask,
  File "/home/bishe/prc/ConZIC-main/ConZIC-main/control_gen_utils.py", line 200, in control_generate_caption
    generate_texts, clip_scores = sentiment_sequential_generation(model, clip, tokenizer, image_instance, token_mask, prompt,
  File "/home/bishe/prc/ConZIC-main/ConZIC-main/control_gen_utils.py", line 54, in sentiment_sequential_generation
    topk_inp[:, ii + seedlen] = idxs
RuntimeError: The expanded size of the tensor (400) must match the existing size (200) at non-singleton dimension 0. Target sizes: [400]. Tensor sizes: [200]
```
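For reference, here is a minimal PyTorch sketch (not the actual ConZIC code; the shapes and names are assumptions read off the traceback) of how this kind of mismatch arises when the candidate sequences for two images are stacked along dim 0 while the sampled token ids cover only one image:

```python
import torch

topk = 200          # candidates kept per image (assumed from the error sizes)
batch_size = 2      # two images in ./examples
seq_len = 15
seedlen, ii = 1, 0

# candidate sequences for ALL images stacked along dim 0 -> 400 rows
topk_inp = torch.zeros(batch_size * topk, seq_len, dtype=torch.long)
# token ids sampled for only ONE image -> 200 entries
idxs = torch.randint(0, 30522, (topk,))  # 30522 = example vocab size

# raises: RuntimeError: The expanded size of the tensor (400) must match
# the existing size (200) at non-singleton dimension 0.
topk_inp[:, ii + seedlen] = idxs
```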

joeyz0z commented 1 year ago

Currently, our released code is a simple demo that only supports batch_size=1. We will consider updating the released code in the future.
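Until then, a possible workaround (a sketch, assuming run.py exposes caption_img_path and batch_size as command-line flags, as described above) is to keep batch_size=1 and invoke run.py once per image:

```python
import shutil
import subprocess
from pathlib import Path

img_dir = Path("./examples")
single_dir = Path("./examples_single")  # hypothetical scratch folder holding one image at a time

for img in sorted(img_dir.glob("*.jpg")):
    if single_dir.exists():
        shutil.rmtree(single_dir)
    single_dir.mkdir()
    shutil.copy(img, single_dir / img.name)
    subprocess.run(
        ["python", "run.py",
         "--caption_img_path", str(single_dir),
         "--batch_size", "1"],
        check=True,
    )
```

Reloading the models on every call is slow; looping over images inside run.py itself would avoid that, but it requires editing the repository code.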

Rouchestand commented 1 year ago

Thank you for your kind answer. Looking forward to the code update.

Lxb-Code-Dev commented 1 year ago

Could you tell me the batch_size value used for the experiments in the paper?

joeyz0z commented 1 year ago

We set batch_size=1 in our paper.

Lxb-Code-Dev commented 1 year ago

Thank you for your reply.