OFA-Sys / OFA

Official repository of OFA (ICML 2022). Paper: OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework
Apache License 2.0

About finetuning on image captioning #346

Open victorup opened 1 year ago

victorup commented 1 year ago

Hi, I want to finetune the model on my own dataset. How should I prepare the stage1 and stage2 training data, and what is the difference between them? The description of caption_stage1_train.tsv and caption_stage2_train.tsv says "each image corresponds to only 1 caption in caption_stage1_train.tsv and corresponds to multiple captions in other TSV files", but I find that caption_stage1_train.tsv is larger than caption_stage2_train.tsv. My dataset is just a lot of one-to-one image-text pairs. Thanks!
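For reference, this is how I am building my TSV rows. I am assuming the column layout uniq-id, image-id, caption, predicted object labels, base64-encoded image that I understood from the dataset docs; please correct me if the order is different:

```python
import base64

# Assumed column layout (my reading of the dataset docs, may be wrong):
# uniq-id \t image-id \t caption \t predicted object labels \t base64 image
def to_caption_row(uniq_id, image_id, caption, image_bytes):
    img_b64 = base64.b64encode(image_bytes).decode("utf-8")
    # Object-labels column left empty here, since I have no detector labels.
    return "\t".join([str(uniq_id), str(image_id), caption, "", img_b64])

# Example with placeholder bytes standing in for a real JPEG:
row = to_caption_row(0, "img_0001", "a dog on the grass", b"<raw jpeg bytes>")
```

I then write one such row per image-text pair into the train TSV.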

victorup commented 1 year ago

Another question: when I run sh train_caption_stage1.sh, some errors occur:

Traceback (most recent call last):
  File "/data33/private/xinpeng/codebase/OFA/trainer.py", line 511, in load_checkpoint
    self.model.load_state_dict(
  File "/data33/private/xinpeng/codebase/OFA/fairseq/fairseq/models/fairseq_model.py", line 125, in load_state_dict
    return super().load_state_dict(new_state_dict, strict)
  File "/home/xinpeng/miniconda3/envs/ofa/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1482, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for OFAModel:
	Unexpected key(s) in state_dict: "encoder.layers.0.attn_ln.weight", "encoder.layers.0.attn_ln.bias", "encoder.layers.0.ffn_layernorm.weight", "encoder.layers.0.ffn_layernorm.bias", "encoder.layers.0.self_attn.c_attn", … (the same five keys repeat for encoder.layers.1 through encoder.layers.11), "decoder.layers.0.self_attn_ln.weight", "decoder.layers.0.self_attn_ln.bias", "decoder.layers.0.cross_attn_ln.weight", "decoder.layers.0.cross_attn_ln.bias", "decoder.layers.0.ffn_layernorm.weight", "decoder.layers.0.ffn_layernorm.bias", "decoder.layers.0.self_attn.c_attn", "decoder.layers.0.encoder_attn.c_attn", … (the same eight keys repeat for decoder.layers.1 through decoder.layers.11).

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/data33/private/xinpeng/codebase/OFA/run_scripts/caption/../../train.py", line 528, in <module>
    cli_main()
  File "/data33/private/xinpeng/codebase/OFA/run_scripts/caption/../../train.py", line 521, in cli_main
    distributed_utils.call_main(cfg, main)
  File "/data33/private/xinpeng/codebase/OFA/fairseq/fairseq/distributed/utils.py", line 389, in call_main
    main(cfg, **kwargs)
  File "/data33/private/xinpeng/codebase/OFA/run_scripts/caption/../../train.py", line 157, in main
    extra_state, epoch_itr = checkpoint_utils.load_checkpoint(
  File "/data33/private/xinpeng/codebase/OFA/utils/checkpoint_utils.py", line 249, in load_checkpoint
    extra_state = trainer.load_checkpoint(
  File "/data33/private/xinpeng/codebase/OFA/trainer.py", line 524, in load_checkpoint
    raise Exception(
Exception: Cannot load model parameters from checkpoint ../../checkpoints/ofa_large.pt; please ensure that the architectures match.

I didn't modify the model. Could you help me figure out how to fix this?

Thanks!
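One observation that might help debugging: the unexpected keys all fall into a handful of suffix groups. A small helper (purely illustrative, names taken from the error message above) makes the pattern easy to see:

```python
from collections import Counter

# Group the unexpected state_dict keys by their trailing
# "module.parameter" name, so the mismatch pattern is easier to read
# than the raw key dump in the error message.
def group_unexpected(keys):
    return Counter(".".join(k.split(".")[-2:]) for k in keys)

sample = [
    "encoder.layers.0.attn_ln.weight",
    "encoder.layers.0.ffn_layernorm.weight",
    "encoder.layers.0.self_attn.c_attn",
    "decoder.layers.0.cross_attn_ln.bias",
]
```

If I'm reading the model code right, these attn_ln / ffn_layernorm / c_attn parameters belong to optional normalization and head-scaling sub-modules of the transformer layers, so my guess is a version mismatch between the checkpoint and the training scripts (the checkpoint has those modules enabled, the configured model does not) rather than a corrupted file; pulling the latest OFA code may be worth trying.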

victorup commented 1 year ago

I also tried running sh train_caption_stage1_base.sh, but ran into errors there too:

Traceback (most recent call last):
  File "/data33/private/xinpeng/codebase/OFA/run_scripts/caption/../../train.py", line 528, in <module>
    cli_main()
  File "/data33/private/xinpeng/codebase/OFA/run_scripts/caption/../../train.py", line 521, in cli_main
    distributed_utils.call_main(cfg, main)
  File "/data33/private/xinpeng/codebase/OFA/fairseq/fairseq/distributed/utils.py", line 389, in call_main
    main(cfg, **kwargs)
  File "/data33/private/xinpeng/codebase/OFA/run_scripts/caption/../../train.py", line 190, in main
    valid_losses, should_stop = train(cfg, trainer, task, epoch_itr)
  File "/home/xinpeng/miniconda3/envs/ofa/lib/python3.9/contextlib.py", line 79, in inner
    return func(*args, **kwds)
  File "/data33/private/xinpeng/codebase/OFA/run_scripts/caption/../../train.py", line 297, in train
    for i, samples in enumerate(progress):
  File "/data33/private/xinpeng/codebase/OFA/fairseq/fairseq/logging/progress_bar.py", line 261, in __iter__
    for i, obj in enumerate(self.iterable, start=self.n):
  File "/data33/private/xinpeng/codebase/OFA/fairseq/fairseq/data/iterators.py", line 56, in __next__
    x = next(self._itr)
  File "/data33/private/xinpeng/codebase/OFA/fairseq/fairseq/data/iterators.py", line 509, in _chunk_iterator
    for x in itr:
  File "/data33/private/xinpeng/codebase/OFA/fairseq/fairseq/data/iterators.py", line 56, in __next__
    x = next(self._itr)
  File "/data33/private/xinpeng/codebase/OFA/fairseq/fairseq/data/iterators.py", line 637, in __next__
    raise item
  File "/data33/private/xinpeng/codebase/OFA/fairseq/fairseq/data/iterators.py", line 567, in run
    for item in self._source:
  File "/home/xinpeng/miniconda3/envs/ofa/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 521, in __next__
    data = self._next_data()
  File "/home/xinpeng/miniconda3/envs/ofa/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1203, in _next_data
    return self._process_data(data)
  File "/home/xinpeng/miniconda3/envs/ofa/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1229, in _process_data
    data.reraise()
  File "/home/xinpeng/miniconda3/envs/ofa/lib/python3.9/site-packages/torch/_utils.py", line 434, in reraise
    raise exception
IndexError: Caught IndexError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/home/xinpeng/miniconda3/envs/ofa/lib/python3.9/site-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
    data = fetcher.fetch(index)
  File "/home/xinpeng/miniconda3/envs/ofa/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/xinpeng/miniconda3/envs/ofa/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 49, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/data33/private/xinpeng/codebase/OFA/data/mm_data/caption_dataset.py", line 117, in __getitem__
    uniq_id, image, caption = self.dataset[index]
  File "/data33/private/xinpeng/codebase/OFA/data/file_dataset.py", line 108, in __getitem__
    column_l = [dtype(column_l[col_id]) for col_id, dtype in zip(self.selected_col_ids, self.dtypes)]
  File "/data33/private/xinpeng/codebase/OFA/data/file_dataset.py", line 108, in <listcomp>
    column_l = [dtype(column_l[col_id]) for col_id, dtype in zip(self.selected_col_ids, self.dtypes)]
IndexError: list index out of range

I processed the data according to https://github.com/OFA-Sys/OFA/issues/91#issuecomment-1114626371, but I still run into this error.

yangjianxin1 commented 1 year ago

Maybe you can try this repository; I succeeded in training the OFA model and running inference with transformers: https://github.com/yangjianxin1/OFA-Chinese