illuin-tech / colpali

The code used to train and run inference with the ColPali architecture.
https://huggingface.co/vidore
MIT License

Error: Processor should be provided for vision collator #89


vicaasas commented 3 days ago

When executing the following command:

USE_LOCAL_DATASET=0 python scripts/train/train_colbert.py scripts/configs/pali/train_colpali_docmatix_model.yaml

I encounter the following error message:

Error: Processor should be provided for vision collator

My pretrained_model_name_or_path is /tmp2/vik/cache/models--vidore--colpaligemma-3b-mix-448-base/snapshots/6ff0d944ea09c3ead97d2bc57427e3d4f01d192f. What might I be missing?
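For context, this error is raised when the training setup builds the collator for image inputs but no processor object is available, i.e. the processor block in the YAML did not yield a usable processor. A minimal sketch of what such a guard typically looks like (class and argument names here are illustrative, not the exact colpali_engine internals):

class VisionCollator:
    # Hedged sketch of a vision collator guard; names are illustrative.
    def __init__(self, processor=None, max_length=50):
        # Image batches cannot be preprocessed or padded without a
        # processor, hence the hard failure seen above.
        if processor is None:
            raise ValueError("Processor should be provided for vision collator")
        self.processor = processor
        self.max_length = max_length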

ManuelFay commented 3 days ago

Do you mind sharing your config file?

vicaasas commented 2 days ago

Sorry for the delayed reply. Here is my train_colpali_docmatix_model.yaml (/tmp2/vik/colpali/scripts/configs/pali/train_colpali_docmatix_model.yaml):

config:
  (): colpali_engine.trainer.colmodel_training.ColModelTrainingConfig
  output_dir: !path ../../../models/train_colpali-docmatix-3b-mix-448
  processor:
    (): colpali_engine.utils.transformers_wrappers.AutoProcessorWrapper
    pretrained_model_name_or_path: "/tmp2/vik/cache/models--vidore--colpaligemma-3b-mix-448-base/snapshots/6ff0d944ea09c3ead97d2bc57427e3d4f01d192f" # "./models/paligemma-3b-mix-448"
    max_length: 50
  model:
    (): colpali_engine.utils.transformers_wrappers.AllPurposeWrapper
    class_to_instanciate: !ext colpali_engine.models.ColPali
    pretrained_model_name_or_path: "/tmp2/vik/cache/models--vidore--colpaligemma-3b-mix-448-base/snapshots/6ff0d944ea09c3ead97d2bc57427e3d4f01d192f"
    torch_dtype: !ext torch.bfloat16
#    quantization_config:
#      (): transformers.BitsAndBytesConfig
#      load_in_4bit: true
#      bnb_4bit_quant_type: "nf4"
#      bnb_4bit_compute_dtype:  "bfloat16"
#      bnb_4bit_use_double_quant: true

  dataset_loading_func: !ext colpali_engine.utils.dataset_transformation.load_train_set_with_docmatix
  eval_dataset_loader: !import ../data/test_data.yaml

  max_length: 50
  run_eval: true
  add_suffix: true
  loss_func:
    (): colpali_engine.loss.late_interaction_losses.ColbertPairwiseCELoss
  tr_args: !import ../tr_args/default_tr_args.yaml
  peft_config:
    (): peft.LoraConfig
    r: 32
    lora_alpha: 32
    lora_dropout: 0.1
    init_lora_weights: "gaussian"
    bias: "none"
    task_type: "FEATURE_EXTRACTION"
    target_modules: '(.*(language_model).*(down_proj|gate_proj|up_proj|k_proj|q_proj|v_proj|o_proj).*$|.*(custom_text_proj).*$)'
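A quick way to isolate whether the snapshot path is the culprit is to load the processor for that path directly with transformers, outside the training script. A minimal sketch: if this call fails, or returns a plain tokenizer rather than a full processor, the vision collator cannot be built from that checkpoint.

from transformers import AutoProcessor

# Local snapshot path taken from the config above.
path = "/tmp2/vik/cache/models--vidore--colpaligemma-3b-mix-448-base/snapshots/6ff0d944ea09c3ead97d2bc57427e3d4f01d192f"

# A checkpoint that ships only tokenizer files (no image-processor
# config) will not produce an image-capable processor here.
processor = AutoProcessor.from_pretrained(path)
print(type(processor))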