maple-research-lab / EnAET

EnAET: Self-Trained Ensemble AutoEncoding Transformations for Semi-Supervised Learning
https://arxiv.org/abs/1911.09265
MIT License

the parameter of matrix transform in AET5_Improved_Mixmatch.py #6

Closed shuishiwojiade closed 3 years ago

shuishiwojiade commented 3 years ago

Could you explain how this hyperparameter was obtained, please? At line 425 of AET5_Improved_Mixmatch.py, the matrix transform uses transforms.Normalize((0., 0., 16., 0., 0., 16., 0., 0.), (1., 1., 20., 1., 1., 20., 0.015, 0.015)). If the dataset is self-built, does this parameter need to be changed?

```python
unlabel_dataset = AET_Memory_Dataloader(
    dataset_dir=data.train_path,
    dataset_mean=TRAIN_MEAN,
    dataset_std=TRAIN_STD,
    shift=params['shift'],
    degrees=params['rot'],
    shear=params['shear'],
    train_label=True,
    translate=(params['translate'], params['translate']),
    scale=(params['shrink'], params['enlarge']),
    fillcolor=(128, 128, 128),
    resample=PIL.Image.BILINEAR,
    matrix_transform=transforms.Compose([
        transforms.Normalize((0., 0., 16., 0., 0., 16., 0., 0.),
                             (1., 1., 20., 1., 1., 20., 0.015, 0.015)),
    ]),
    transform_pre=transform_train,
    rand_state=params['seed'],
    valid_size=0,
    num_classes=num_classes,
    extra_path=data.extra_path,
    patch_length=24,
)
```
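For context, here is a minimal sketch of what this normalization does, assuming the standard element-wise behavior of `transforms.Normalize` applied to the 8 flattened parameters of the 3x3 transformation matrix (the bottom-right entry is fixed at 1 and dropped, so only 8 values remain):

```python
# Means/stds copied from the call above; Normalize computes (x - mean) / std
# per element. The 16/20 pairs correspond to the translation entries, the
# 0.015 stds to the two projective entries of the matrix.
MEAN = (0., 0., 16., 0., 0., 16., 0., 0.)
STD = (1., 1., 20., 1., 1., 20., 0.015, 0.015)

def normalize_matrix(params):
    """Normalize the 8 flattened transformation parameters element-wise."""
    return [(p - m) / s for p, m, s in zip(params, MEAN, STD)]

# Identity transform flattened as [a, b, tx, c, d, ty, g, h]:
identity = [1., 0., 0., 0., 1., 0., 0., 0.]
print(normalize_matrix(identity))
# → [1.0, 0.0, -0.8, 0.0, 1.0, -0.8, 0.0, 0.0]
```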

wang3702 commented 3 years ago

Personally, I did not fine-tune it; I just used the same parameters as in AET, which is our self-supervised learning method. I think this parameter can be applied directly to most datasets.
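If you do want dataset-specific statistics for a self-built dataset, one hedged way (not part of the repository) is to sample many random transformation matrices with your own augmentation ranges and take the empirical mean/std of each of the 8 parameters. The sampler below is a hypothetical simplification (translation and scale only) for illustration:

```python
import random

def random_affine_params(shift=10.0, scale=(0.8, 1.2)):
    """Hypothetical sampler: translation and scale only, for illustration.
    Returns [a, b, tx, c, d, ty, g, h] for a 3x3 matrix with bottom row (g, h, 1)."""
    sx = random.uniform(*scale)
    sy = random.uniform(*scale)
    tx = random.uniform(-shift, shift)
    ty = random.uniform(-shift, shift)
    return [sx, 0., tx, 0., sy, ty, 0., 0.]

def empirical_stats(n=10000):
    """Per-parameter mean and std over n sampled matrices."""
    samples = [random_affine_params() for _ in range(n)]
    cols = list(zip(*samples))
    means = [sum(col) / n for col in cols]
    stds = [(sum((v - m) ** 2 for v in col) / n) ** 0.5
            for col, m in zip(cols, means)]
    # NB: parameters that never vary have std 0; clamp those to a small
    # positive value before plugging them into transforms.Normalize.
    return means, stds
```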

shuishiwojiade commented 3 years ago

Thanks!