huacong / ReconBoost

ICML2024-ReconBoost: Boosting Can Achieve Modality Reconcilement

Pre-trained Models and Training Stages for MOSI and MOSEI Datasets #1

Open iokaff opened 1 month ago

iokaff commented 1 month ago

Hello,

I have a couple of questions regarding the training process for the MOSI and MOSEI datasets:

  1. Do you use pre-trained models for the MOSI and MOSEI datasets, or are they trained from scratch in your implementation?
  2. When running `train_MSA.py`, how many stages does the `while check_status(stage)` loop typically run for MOSI and MOSEI? Is it 1001, or is there a different recommended number of stages for these datasets?

Thank you in advance for your help!

huacong commented 23 hours ago

Thanks for your interest in our work!

  1. For the MOSI and MOSEI datasets, we train our model from scratch on the customized multimodal features extracted by the MMSA-FET toolkit; no pre-trained models are used (a rough extraction sketch follows below).
  2. I'd recommend 100 stages for both datasets rather than 1001 (see the loop sketch after the extraction example).
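
For question 1, extraction with MMSA-FET is, to the best of my recollection of its interface, along the lines of the sketch below; the config file and paths are placeholders, not the exact setup behind the paper's customized features, so consult the MMSA-FET repository for the precise options:

```python
# Rough sketch of MMSA-FET usage; the config and paths are placeholders.
from MSA_FET import FeatureExtractionTool

# A config can be a built-in extractor name or a custom JSON file that
# specifies which text/audio/vision extractors to run.
fet = FeatureExtractionTool("custom_config.json")

# Extract features for a whole dataset and dump them to a pickle file,
# which can then be fed to the training script.
fet.run_dataset(dataset_dir="~/MOSI", out_file="output/mosi_feature.pkl")
```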
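
For question 2, here is a minimal sketch of a stage-capped driver loop; `check_status` below is a stand-in assumption for illustration, and the actual stopping logic in `train_MSA.py` may differ:

```python
MAX_STAGE = 100  # recommended number of boosting stages for MOSI/MOSEI

def check_status(stage: int) -> bool:
    """Stand-in stopping criterion: keep going until the stage cap."""
    return stage < MAX_STAGE

stage = 0
while check_status(stage):
    # Each ReconBoost stage updates one modality learner in alternation;
    # the real per-stage routine lives in train_MSA.py.
    stage += 1

print(f"finished after {stage} stages")
```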