RLHF-V / RLAIF-V

RLAIF-V: Aligning MLLMs through Open-Source AI Feedback for Super GPT-4V Trustworthiness

KeyError: 'idx' — I have changed the data_dir, but when I run the train script I get this error. How can I fix it? #7

Closed XiaoLei2123 closed 3 months ago

XiaoLei2123 commented 3 months ago

Traceback (most recent call last):
  File "/running_package/code_package/./muffin/train/train_llava15.py", line 338, in <module>
    train(attn_implementation="flash_attention_2")
  File "/running_package/code_package/./muffin/train/train_llava15.py", line 313, in train
    model, data_module, tokenizer = init_model(
  File "/running_package/code_package/./muffin/train/train_llava15.py", line 279, in init_model
    data_module = make_dpo_data_module(tokenizer, data_args=data_args, reference_model=copy.deepcopy(model).cuda())
  File "/running_package/code_package/./muffin/train/train_llava15.py", line 149, in make_dpo_data_module
    train_dataset = DPODataset(tokenizer=tokenizer,
  File "/running_package/code_package/./muffin/train/train_llava15.py", line 133, in __init__
    self.list_data_dict = RLAIFVDataset(data_dir, reference_model, tokenizer, multimodal_cfg['image_token_len'], multimodal_cfg['image_processor'], multimodal_cfg['use_im_start_end'], is_llava15=True)
  File "/running_package/code_package/muffin/data/datasets.py", line 40, in __init__
    inference_logp(reference_model, tokenizer, hf_data, self.data_path,
  File "/running_package/code_package/muffin/eval/muffin_inference_logp.py", line 326, in inference_logp
    outputs = get_multimodal_sample_logps(model, dataloader, tokenizer, is_llava15=is_llava15)  # win_logp_list, win_avg_logp_list, win_per_token_logp_list, rej_logp_list, rej_avg_logp_list, rej_per_token_logp_list
  File "/running_package/code_package/muffin/eval/muffin_inference_logp.py", line 226, in get_multimodal_sample_logps
    for batch in tqdm.tqdm(dataloader):
  File "/miniconda3/envs/llava_cu122/lib/python3.10/site-packages/tqdm/std.py", line 1181, in __iter__
    for obj in iterable:
  File "/miniconda3/envs/llava_cu122/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 630, in __next__
    data = self._next_data()
  File "/miniconda3/envs/llava_cu122/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1345, in _next_data
    return self._process_data(data)
  File "/miniconda3/envs/llava_cu122/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1371, in _process_data
    data.reraise()
  File "/miniconda3/envs/llava_cu122/lib/python3.10/site-packages/torch/_utils.py", line 694, in reraise
    raise exception
KeyError: Caught KeyError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/miniconda3/envs/llava_cu122/lib/python3.10/site-packages/torch/utils/data/_utils/worker.py", line 308, in _worker_loop
    data = fetcher.fetch(index)
  File "/miniconda3/envs/llava_cu122/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py", line 54, in fetch
    return self.collate_fn(data)
  File "/running_package/code_package/muffin/eval/muffin_inference_logp.py", line 207, in preference_collator_fn
    idx=win_batch['idx']
KeyError: 'idx'

0%| | 0/13752 [00:00<?, ?it/s]

Haoye17 commented 3 months ago

Hi @XiaoLei2123 !

We appreciate your interest in our work~

As for the error message: in a previous version of our codebase, we used this field for debugging. To fix it, you can simply comment out line 207 in muffin/eval/muffin_inference_logp.py, as indicated by the error message, or update to the latest version of the codebase, where we have already commented it out~
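For context, the failing pattern and a tolerant alternative can be sketched as below. This is only an illustration: the function and key names mirror the traceback, but the real `preference_collator_fn` in muffin/eval/muffin_inference_logp.py does more than this, and the recommended fix remains removing the debug line or updating the codebase.

```python
# Hypothetical sketch of the KeyError and a defensive workaround.
# win_batch is assumed to be a dict built from one dataset sample;
# older data lacked the debug-only 'idx' field.

def preference_collator_fn(win_batch: dict) -> dict:
    # Original debug line (line 207 in the traceback) was roughly:
    #     idx = win_batch['idx']
    # which raises KeyError when the sample carries no 'idx' field.
    # Using dict.get makes the missing field non-fatal:
    idx = win_batch.get('idx')  # None when the debug field is absent
    collated = dict(win_batch)
    collated['idx'] = idx
    return collated
```

With this change, batches without the debug field collate cleanly instead of crashing the DataLoader worker.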

If you have any other questions, please don't hesitate to ask; we are happy to help!

XiaoLei2123 commented 3 months ago

Thank you for your answer!