monkeycc opened this issue 2 years ago
Hi! Sorry for my late reply. Actually, we suggest using mim as the general training entry point:
```bash
# Train models on a single server with CPU by setting `gpus` to 0 and
# 'launcher' to 'none' (if applicable). The training script of the
# corresponding codebase will fail if it doesn't support CPU training.
> mim train mmcls resnet101_b16x8_cifar10.py --work-dir tmp --gpus 0
# Train models on a single server with one GPU
> mim train mmcls resnet101_b16x8_cifar10.py --work-dir tmp --gpus 1
# Train models on a single server with 4 GPUs and pytorch distributed
> mim train mmcls resnet101_b16x8_cifar10.py --work-dir tmp --gpus 4 \
    --launcher pytorch
# Train models on a slurm HPC with one 8-GPU node
> mim train mmcls resnet101_b16x8_cifar10.py --launcher slurm --gpus 8 \
    --gpus-per-node 8 --partition partition_name --work-dir tmp
# Print help messages of sub-command train
> mim train -h
# Print help messages of sub-command train and the training script of mmcls
> mim train mmcls -h
```
MMEngine is a more general training architecture, not only for OpenMMLab; anyone can use MMEngine to set up their own tasks.
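For context, here is a minimal sketch, closely following MMEngine's getting-started example, of training a plain torchvision model with `mmengine.runner.Runner` and no OpenMMLab codebase; the dataset path, batch size, and hyper-parameters below are placeholders:

```python
import torch.nn.functional as F
import torchvision
import torchvision.transforms as T
from mmengine.model import BaseModel
from mmengine.runner import Runner
from torch.utils.data import DataLoader


class MMResNet50(BaseModel):
    """Wrap a plain torch.nn.Module so the Runner can drive it."""

    def __init__(self):
        super().__init__()
        self.resnet = torchvision.models.resnet50(num_classes=10)

    def forward(self, imgs, labels, mode):
        x = self.resnet(imgs)
        if mode == 'loss':       # training step: return a dict of losses
            return {'loss': F.cross_entropy(x, labels)}
        elif mode == 'predict':  # validation/test step: return predictions
            return x, labels


train_dataloader = DataLoader(
    torchvision.datasets.CIFAR10(
        'data/cifar10', train=True, download=True,
        transform=T.ToTensor()),
    batch_size=32, shuffle=True)

runner = Runner(
    model=MMResNet50(),
    work_dir='./work_dir',
    train_dataloader=train_dataloader,
    optim_wrapper=dict(optimizer=dict(type='SGD', lr=0.001, momentum=0.9)),
    train_cfg=dict(by_epoch=True, max_epochs=2),
)
runner.train()
```

The same Runner can additionally take `val_dataloader`, `val_cfg`, and an evaluator for validation, which is why it generalizes beyond the OpenMMLab codebases.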
**Describe the feature**
**Motivation**
MMEngine integrates the main vision tasks: image classification, object detection, semantic segmentation, and instance segmentation. All of these tasks need training scripts and configs, and I want to do all model training through MMEngine. I suggest adding a configs folder to MMEngine that contains one subfolder per task (classification, detection, semantic segmentation, instance segmentation), each with the respective model scripts; a sketch of the proposed layout is shown below.
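A hypothetical sketch of the layout this request seems to describe (the folder names are illustrative, not an existing MMEngine structure):

```
mmengine/
└── configs/
    ├── classification/          # model scripts and configs, e.g. ResNet
    ├── detection/               # e.g. Faster R-CNN
    ├── semantic_segmentation/   # e.g. DeepLab
    └── instance_segmentation/   # e.g. Mask R-CNN
```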
**Related resources**
https://github.com/PaddlePaddle/PaddleX
https://github.com/PaddlePaddle/PaddleX/tree/develop/tutorials/train
**Additional context**
If you would like to implement the feature and create a PR, please leave a comment here and that would be much appreciated.