taeil opened this issue 3 years ago
Option 1 (SEN12MS): modify SEN12MS/classification/main_train.py for fine-tuning. Please use the taeil branch to catch up.
Stuck on creating a model with the same architecture. 😞
[Option 2: OpenSelfSup]
Keeping everything the same except the pre-trained model, how about we run fine-tuning on 2000 labels and evaluate?
Does the MoCo pre-trained model perform better compared to supervised training?
Good idea @taeil. We can give it a try. Since the SEN12MS data has label noise, let us try using 2000 labels.
Okie, I will train 4 models (RGB, S1, S2, S1 & S2) with baseline training.
We should run on the 2000-label subset since the result is not super strong.
Updated the script to run fine-tuning and evaluation at the same time.
We need to adjust the number of layers to train to see which setting performs better. Ideally, only the top layers.
The new dataset is small and similar to the original dataset. Since the data is small, it is not a good idea to fine-tune the whole ConvNet due to overfitting concerns. Since the data is similar to the original data, we expect the higher-level features in the ConvNet to be relevant to this dataset as well. Hence, the best idea might be to train a linear classifier on the CNN codes.
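If we go the linear-probe route, a minimal PyTorch sketch could look like the following. The class count, loss choice, checkpoint, and dataloader are placeholders (not from our configs), and the commented lines show where we could unfreeze a few more top layers per the comment above.

```python
# Minimal linear-probe sketch (PyTorch): freeze the backbone, train only a new head.
# num_classes, the loss, the checkpoint, and the dataloader are placeholders.
import torch
import torch.nn as nn
import torchvision.models as models

num_classes = 17  # placeholder; use whichever SEN12MS label scheme we settle on

backbone = models.resnet50(pretrained=False)
# backbone.load_state_dict(pretrained_state, strict=False)  # MoCo weights; see the loading sketch below

# Freeze every parameter, then replace the final fc with a fresh trainable head.
for p in backbone.parameters():
    p.requires_grad = False
backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)  # new head is trainable by default

# To unfreeze a few more top layers instead of a pure linear probe:
# for p in backbone.layer4.parameters():
#     p.requires_grad = True

optimizer = torch.optim.SGD(
    [p for p in backbone.parameters() if p.requires_grad], lr=0.01, momentum=0.9)
criterion = nn.BCEWithLogitsLoss()  # assuming a multi-label scheme; CrossEntropyLoss for single-label

# for images, labels in train_loader:  # placeholder dataloader
#     loss = criterion(backbone(images), labels.float())
#     optimizer.zero_grad(); loss.backward(); optimizer.step()
```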
Yeah, SEN12MS may not be the perfect evaluation because of the mislabeling issue, but it's a good diagnostic to make sure our system is working properly, and we can report the results either way.
It'll be easier if we use Sen1 and Sen2 for evaluation, and use their corresponding input_module that was trained during MoCo pretraining.
A 200-epoch model is at: /scratch/crguest/vivid-resonance-73_sen12ms_no_aug_200epoch.pth where the wandb training is here: https://wandb.ai/cjrd/BDOpenSelfSup-tools/runs/3qjvxo2p?workspace=user-cjrd
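For reference, a hedged sketch of loading that checkpoint into a torchvision ResNet-50. The "backbone." prefix handling is an assumption about how OpenSelfSup names its state_dict keys, so inspect the keys first before relying on this.

```python
# Sketch: load the 200-epoch checkpoint into a torchvision ResNet-50 for fine-tuning.
# The "backbone." prefix is an assumption about OpenSelfSup's key naming;
# print(state.keys()) first to confirm before relying on this.
import torch
import torchvision.models as models

ckpt_path = "/scratch/crguest/vivid-resonance-73_sen12ms_no_aug_200epoch.pth"
ckpt = torch.load(ckpt_path, map_location="cpu")
state = ckpt.get("state_dict", ckpt)

# Keep backbone weights only and strip the prefix so names match torchvision's ResNet.
backbone_state = {k.replace("backbone.", ""): v
                  for k, v in state.items() if k.startswith("backbone.")}

model = models.resnet50(pretrained=False)
missing, unexpected = model.load_state_dict(backbone_state, strict=False)
print("missing keys:", len(missing), "unexpected keys:", len(unexpected))
```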
(Transfer learning) Run SEN12MS training on S1 and S2 separately, using the same pre-training model.
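One hedged way to set up the two runs: reuse the same pre-trained trunk and swap the input stem per sensor. The band counts and the conv1 replacement are illustrative assumptions; if the per-sensor input_module from MoCo pretraining is available, it should be reused instead.

```python
# Sketch: build separate S1 and S2 models from the same pre-trained trunk.
# Band counts and the conv1 swap are assumptions; reuse the pretrained
# per-sensor input_module instead if it is available.
import torch
import torch.nn as nn
import torchvision.models as models

MODALITIES = {"s1": 2, "s2": 13}  # assumed channel counts per sensor

def build_model(in_channels, trunk_state, num_classes=17):
    model = models.resnet50(pretrained=False)
    model.load_state_dict(trunk_state, strict=False)  # shared pre-trained weights
    # Replace the 3-channel RGB stem with one matching this sensor's band count.
    model.conv1 = nn.Conv2d(in_channels, 64, kernel_size=7, stride=2, padding=3, bias=False)
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

# trunk_state = backbone_state  # e.g. the backbone-only dict from the loading sketch above
# models_by_sensor = {name: build_model(ch, trunk_state) for name, ch in MODALITIES.items()}
```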
Additional Update on March 26th