Classification with PyTorch Lightning (PL)
RepoRootPath
├── models # Python module for model definitions
├── datamodules # Python module for PL DataModules
├── transforms # Python module for data preprocessing
├── main.py # Trainer entry point
├── main.sh # Training recipe script
└── ... # etc.
models
├── LitBase # PL module base
│ └── lightning_model.py
├── Model_1 # Model 1
│   ├── blocks.py # Model sub-blocks
│   ├── models.py # Pure PyTorch model definition
│   └── lightning_model.py # Loss and optimizer settings using PL
├── Model_2
├── Model_N
...
# models/LitBase/lightning_model.py
from abc import ABCMeta, abstractmethod

import pytorch_lightning as pl


class LitBase(pl.LightningModule, metaclass=ABCMeta):
    @abstractmethod
    def configure_optimizers(self):
        return super().configure_optimizers()

    """
    def initialize_weights ...
    def forward ...
    def training_step ...
    def validation_step ...
    def test_step ...
    def _validation_test_common_epoch_end ...
    def validation_epoch_end ...
    def test_epoch_end ...
    """
# models/LeNet5/lightning_model.py
from torch import nn, optim

from models.LeNet5.models import LeNet5
from models.LitBase.lightning_model import LitBase


class LitLeNet5(LitBase):
    def __init__(self, args):
        super().__init__()
        self.save_hyperparameters(args)
        self.model = LeNet5(
            image_channels=self.hparams.image_channels,
            num_classes=self.hparams.num_classes,
        )
        self.loss = nn.CrossEntropyLoss()

    def configure_optimizers(self):
        return optim.Adam(self.parameters(), lr=self.hparams.lr)
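`configure_optimizers` can also return an optimizer together with a learning-rate scheduler, which PL accepts as a dict. A sketch of that variant (the linear stand-in model, the `lr` value, and the `StepLR` schedule are assumptions for illustration):

```python
from torch import nn, optim

model = nn.Linear(8, 10)  # stand-in for the LightningModule's parameters

def configure_optimizers():
    optimizer = optim.Adam(model.parameters(), lr=1e-3)
    # Decay the learning rate by 10x every 10 epochs
    scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1)
    return {"optimizer": optimizer, "lr_scheduler": scheduler}

cfg = configure_optimizers()
```

Returning the bare optimizer, as `LitLeNet5` does, is the simplest valid form; the dict form is only needed once a scheduler enters the recipe.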
$ conda env create --file environment.yaml
$ pip install -r requirements.txt
$ export USERID=$(id -u)
$ export GROUPID=$(id -g)
$ docker-compose up -d
version: "3.7"
trainer:
build: .
user: "${USERID}:${GROUPID}"
volumes:
- .:/training
- /{YOUR_DATA_SET_DIR_PATH}:/DATASET # !!Setting dataset path!!
command: tail -f /dev/null
Please see the "Recipes"
Please see the "Experiment results"
Please see the "Supported Model"
Please see the "Supported Dataset"