Closed — issue opened by @774911840, closed 1 year ago
Hi @774911840, thanks for your attention and appreciation of our AutoMix. We provide training configs of AutoMix on ImageNet, and you can train ResNet variants with `CUDA_VISIBLE_DEVICES=0,1,2,3 bash tools/dist_train.sh ${CONFIG_FILE} 4`.
For example, AutoMix with ResNet-18: `CUDA_VISIBLE_DEVICES=0,1,2,3 bash tools/dist_train.sh configs/classification/imagenet/automix/basic/r18_l2_a2_near_lam_cat_mb_mlr1e_3_bb_mlr0.py 4`. Visualizations of mixed samples will be saved in the relevant work_dirs. You can increase `max_epochs` for a longer training schedule. Feel free to ask me if you have more questions.
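For reference, extending the training schedule typically means overriding the runner settings in the config file. This is a sketch in the mmcv-style config convention; the actual field names and default values in the AutoMix config may differ, so check the config file itself before editing:

```python
# Hypothetical override of the training schedule in an OpenMixup config.
# The field names below follow the common mmcv config convention; verify
# them against the actual config (e.g. r18_l2_a2_near_lam_cat_mb_mlr0.py).
runner = dict(type='EpochBasedRunner', max_epochs=300)
```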
Currently, we are busy working on other projects, so it might take some time for us to collect pre-trained models, logs, and visualization results on ImageNet. Meanwhile, you can refer to the mixed samples of AutoMix provided on Places205, iNat2017, and iNat2018, @774911840. Please watch the OpenMixup repo for the latest updates.
Hi @Lupin1998, thanks for your answer; I understand what you mean. Actually, I am confused about the expected ImageNet directory structure under openmixup/data.
I paste the dataset config configs/classification/_base_/datasets/imagenet/basic_sz224_4xbs64.py below.
I also paste a screenshot of the official ImageNet download page below. I downloaded the datasets pointed to by the red arrows and show those files below:
Training images (Task 3). I only downloaded this training set because it is small and I only want to test on ImageNet.
Validation images (all tasks).
Development kit (Task 3).
Should I unzip these archives and put them all into one directory? I don't know where each file should go, or whether I should extract the archives at all. Thank you very much for your answer.
Hi @774911840, I have reviewed your questions above; here are my replies:
data/ImageNet/train
data/ImageNet/val
data/meta
Notice that detailed guidelines for data preparation are given in the tutorial. Please refer to it for more information.
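After unpacking the downloaded archives, it can help to sanity-check the folder layout against the structure listed above before launching training. This is a minimal illustrative sketch; the function name and return format are mine, not part of OpenMixup:

```python
import os

# Check that a repository root contains the ImageNet layout described
# above: data/ImageNet/train, data/ImageNet/val, and data/meta.
# Returns the list of expected directories that are still missing.
def missing_imagenet_dirs(root):
    expected = [
        os.path.join("data", "ImageNet", "train"),
        os.path.join("data", "ImageNet", "val"),
        os.path.join("data", "meta"),
    ]
    return [p for p in expected if not os.path.isdir(os.path.join(root, p))]
```

An empty return value means the layout matches; otherwise the listed paths still need to be created or populated from the extracted tarballs.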
Hi @Jacky1128, thanks for your answer. I also want to know whether this method could be used for semantic segmentation?
Hi, @774911840. Unfortunately, AutoMix only supports classification tasks because it requires a mixup classification loss (e.g., mixup with CE loss or BCE loss) to train the mixed sample generation process in the MixBlock. You can try to transfer it to downstream tasks like object detection and semantic segmentation. For example, a simple approach is to first adopt a multi-label classification loss with mixup to pre-train a MixBlock in AutoMix or SAMix, and then generate augmented images (with instance-level objects) from the given labels to perform segmentation. Moreover, the mixup augmentations in OpenMixup also support classification tasks only.
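For readers unfamiliar with the mixup classification loss mentioned above: it is simply a convex combination of two cross-entropy terms, weighted by the mixing ratio lambda. A minimal NumPy sketch for a single example (illustrative only, not OpenMixup's actual implementation):

```python
import numpy as np

def cross_entropy(logits, label):
    # Standard CE for one example: -log softmax(logits)[label],
    # computed with the max-shift trick for numerical stability.
    shifted = logits - logits.max()
    log_probs = shifted - np.log(np.exp(shifted).sum())
    return -log_probs[label]

def mixup_ce_loss(logits, label_a, label_b, lam):
    # Mixup loss: lam * CE(pred, y_a) + (1 - lam) * CE(pred, y_b),
    # where y_a and y_b are the labels of the two mixed source images.
    return (lam * cross_entropy(logits, label_a)
            + (1.0 - lam) * cross_entropy(logits, label_b))
```

With lam = 1 this reduces to the ordinary cross-entropy on the first label, which is why the same training loop handles both mixed and unmixed batches.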
As for mixup augmentation on downstream tasks in computer vision, Mixup and CutMix might be the only two mixup techniques adopted in object detection and segmentation (e.g., YOLOv4 uses CutMix and Mixup for detection, and SegMix uses Mixup for segmentation). More recently, CycleMix adopts PuzzleMix in medical image segmentation tasks. You can refer to CycleMix to adapt AutoMix to segmentation tasks.
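For context, the core CutMix operation mentioned above pastes a random rectangle from one image into another and mixes the labels by the pasted area. A rough NumPy sketch of that idea (parameter handling is illustrative; see the CutMix paper for the exact sampling scheme):

```python
import numpy as np

def cutmix(img_a, img_b, lam, rng=None):
    # Paste a rectangle from img_b into img_a. The rectangle's side
    # lengths are scaled by sqrt(1 - lam) so its area fraction is
    # roughly (1 - lam), as in CutMix.
    if rng is None:
        rng = np.random.default_rng(0)
    h, w = img_a.shape[:2]
    cut_h = int(h * np.sqrt(1.0 - lam))
    cut_w = int(w * np.sqrt(1.0 - lam))
    cy, cx = rng.integers(0, h), rng.integers(0, w)
    y1, y2 = max(cy - cut_h // 2, 0), min(cy + cut_h // 2, h)
    x1, x2 = max(cx - cut_w // 2, 0), min(cx + cut_w // 2, w)
    mixed = img_a.copy()
    mixed[y1:y2, x1:x2] = img_b[y1:y2, x1:x2]
    # Adjust the label weight to the rectangle's actual (clipped) area.
    lam_adj = 1.0 - (y2 - y1) * (x2 - x1) / (h * w)
    return mixed, lam_adj
```

The adjusted lambda is then used in the same convex-combination loss as plain Mixup, which is what makes both techniques drop-in for a classification loop.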
Thanks again for using our framework. I will close this issue if there are no more questions. If you have any further questions, you can reopen it or open a new issue.
Hi, I ran AutoMix with the CIFAR-100 dataset, but the images are low resolution, so I want to use ImageNet. I would appreciate it if you could provide some help!