nodefluxio / vortex

A Deep Learning Model Development Framework for Computer Vision

[FEATURE] Refactor experiment file to follow vortex.core structure #7

Closed alphinside closed 4 years ago

alphinside commented 4 years ago

Is your feature request related to a problem? Please describe. Some fields in the experiment file are not representative and are somewhat confusing, so several elements need to be refactored.

Describe the solution you'd like Proposed new structure:

experiment_name: efficientnet_b0_softmax_cifar10

logging: None
device: 'cuda:0',
dataset: {
  train: {
    # dataset: CIFAR10,
    name: CIFAR10,
    args: {
      root: external/datasets,
      train: True,
      download: True
    },
    augmentations : [
     {
        module : albumentations,
        args : {
          transforms : [
            {compose : OneOf, args : {
              transforms : [{transform : RandomBrightnessContrast, args : {p : 0.5}},
                            {transform : RandomSnow, args : {p : 0.5}}
              ],
              p : 0.5}},
            {transform : HorizontalFlip, args : {p : 0.5}},
            {transform : RandomScale, args : {scale_limit : 0.3, p : 0.5,}}
          ],
          bbox_params : {
            min_visibility : 0.0,
            min_area : 0.0
          },
          visual_debug : False
        }
      }
    ]
  },
  eval: {
    # dataset: CIFAR10,
    name: CIFAR10,
    args: {
      root: external/datasets,
      train: False,
      download: True
    }
  },
  # dataloader: {
  #   dataloader: DataLoader,
  #   args: {
  #     num_workers: 0,
  #     batch_size: 256,
  #     shuffle: True,
  #   },
  # },
}

dataloader: {
  module: PytorchDataLoader,
  args: {
    num_workers: 0,
    batch_size: 256,
    shuffle: True,
  },
}

model: {
  name: softmax,
  network_args: {
    backbone: efficientnet_b0,
    n_classes: 10,
    pretrained_backbone: True,
  },
  preprocess_args: {
    input_size: 32,
    input_normalization: {
      mean: [0.4914, 0.4822, 0.4465],
      std: [0.2023, 0.1994, 0.2010],
      scaler: 255,
    }
  },
  loss_args: {
    reduction: mean
  },
  postprocess_args: {}
}

trainer: {
  optimizer: {
    method: SGD,
    args: {
      lr: 0.0263,
      momentum: 0.9,
      weight_decay: 0.0005,
    }
  },
  # scheduler : {
  lr_scheduler: {
    method : CosineLRScheduler,
    args : {
      t_initial : 20,
      t_mul : 1.0,
      lr_min : 0.00001,
      warmup_lr_init: 0.00001,
      warmup_t: 2,
      cycle_limit : 1,
      t_in_epochs : True,
      decay_rate : 0.1,
    }
  },
  ## Remove validation
  # validation: {
  #   args: {},
  #   val_epoch: 4,
  # },

  ## move outside
  # device: 'cuda:0',
  driver: {
    module: DefaultTrainer,
    args: {}
  },
  epoch: 20,
  save_epoch: 5
}

# Validation became `validator` args
# To be noted, currently the validation args only grab the input_specs params from the CLI; in the future they should by default be fetched from the experiment file
validator: {
  args: {}, 
  val_epoch: 4,
}

output_directory: experiments/outputs

exporter: {
  module: onnx,
  args: {
    opset_version: 11,
  },
}
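For clarity, here is a minimal sketch of how the `augmentations` entry above could translate into an actual albumentations pipeline. The `format` argument to `BboxParams` is an assumption on my part (albumentations requires it, but it is not part of the proposed config), and vortex may wire this up differently:

```python
import albumentations as A

# Sketch: the `augmentations` section above, built by hand.
transform = A.Compose(
    [
        A.OneOf(
            [
                A.RandomBrightnessContrast(p=0.5),
                A.RandomSnow(p=0.5),
            ],
            p=0.5,
        ),
        A.HorizontalFlip(p=0.5),
        A.RandomScale(scale_limit=0.3, p=0.5),
    ],
    # `format` is assumed here; it is required by albumentations
    # but not specified in the proposed config.
    bbox_params=A.BboxParams(format="coco", min_visibility=0.0, min_area=0.0),
)

# usage: augmented = transform(image=image, bboxes=bboxes)
```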

Additional:

Models with a 'backbone' param should rename their 'pretrained' arg to 'backbone_pretrained', to avoid confusion with a pretrained version of the model itself.
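For instance (the exact fields here are illustrative):

```
# before: ambiguous whether the whole model or only the backbone is pretrained
network_args: {
  backbone: efficientnet_b0,
  pretrained: True,
}

# after
network_args: {
  backbone: efficientnet_b0,
  backbone_pretrained: True,
}
```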

Describe alternatives you've considered Experiment file checking should now be done in each pipeline rather than centralized, so this needs to be updated as well: possibly delete vortex.utils.parser.parser and move the checking into each of the pipelines.
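As a rough sketch of what per-pipeline checking could look like (the function name and required-key set below are illustrative, not an existing vortex API):

```python
# Hypothetical per-pipeline check, replacing the centralized
# vortex.utils.parser.parser validation; each pipeline would own
# a variant of this with the keys it actually needs.
REQUIRED_TRAIN_KEYS = {"experiment_name", "dataset", "dataloader",
                       "model", "trainer", "output_directory"}

def check_train_config(config: dict) -> None:
    missing = REQUIRED_TRAIN_KEYS - config.keys()
    if missing:
        raise ValueError(
            f"experiment file is missing fields required by the "
            f"training pipeline: {sorted(missing)}"
        )
```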

triwahyuu commented 4 years ago

I propose keeping the augmentation inside the designated dataset (train or eval), so that it is more flexible than having it in the main config, which could also cause some confusion.

triwahyuu commented 4 years ago

To be noted, the current implementation only supports augmentation on the train dataset, so it would be better if we had it for the eval dataset as well, since some applications might also need that. cc @alphinside @alifahrri

alphinside commented 4 years ago

> To be noted, the current implementation only supports augmentation on the train dataset, so it would be better if we had it for the eval dataset as well, since some applications might also need that. cc @alphinside @alifahrri

I've never heard of a requirement to augment the val dataset, other than flipping for face recognition datasets. Also, the augmentations in train are for on-the-go augmentation, which relies on randomness to apply the transforms and shouldn't be applied to a validation dataset. In my opinion it's better to encourage users to augment their own validation dataset outside vortex (offline) rather than on the go (online). So currently I disagree with this.
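For example, the offline route can be as simple as materializing the augmented copies once, outside vortex; a minimal sketch, with illustrative paths and a single deterministic flip:

```python
import os
import cv2
import albumentations as A

# Offline augmentation: write flipped copies of a validation set once,
# instead of applying random transforms at load time.
flip = A.HorizontalFlip(p=1.0)  # p=1.0 makes the flip deterministic

src_dir, dst_dir = "val_images", "val_images_flipped"
os.makedirs(dst_dir, exist_ok=True)
for fname in os.listdir(src_dir):
    image = cv2.imread(os.path.join(src_dir, fname))
    flipped = flip(image=image)["image"]
    cv2.imwrite(os.path.join(dst_dir, fname), flipped)
```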

triwahyuu commented 4 years ago

The use case I could think of is test-time augmentation, if we were to support that.
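For reference, a minimal sketch of what that could look like for a classifier (purely illustrative, not a proposed vortex API):

```python
import torch

# Test-time augmentation: average the predictions over the original
# and the horizontally flipped input.
@torch.no_grad()
def predict_tta(model, images):  # images: (N, C, H, W)
    logits = model(images)
    logits_flipped = model(torch.flip(images, dims=[3]))  # flip width axis
    return (logits.softmax(dim=1) + logits_flipped.softmax(dim=1)) / 2
```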

triwahyuu commented 4 years ago

Issue description has been updated to the latest format.