Refactor the directory structure to make use of self.version. Instead of encoding the dataset into the checkpoint name, create one folder per model configuration and derive the checkpoint path and version naming from it.
For example: {model}_f{frames}-{repeated_aug}/{sampling_method}/{dataset}/
Where:
checkpoint_dir = {model}_f{frames}-{repeated_aug}
name = {sampling_method}
version = {dataset}
To store the checkpoints/models correctly, reflect the change to the models as well.
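As a concrete illustration of the proposed mapping, the three components could be composed like this (all hyperparameter values below are hypothetical placeholders, not taken from the actual config):

```python
import os

# Hypothetical hyperparameter values, purely for illustration.
model = "vit"
frames = 16
repeated_aug = 2
sampling_method = "uniform"
dataset = "kinetics400"

# Proposed mapping: one folder per model configuration, then
# logger name = sampling method, logger version = dataset.
checkpoint_dir = f"{model}_f{frames}-{repeated_aug}"
name = sampling_method
version = dataset

full_path = os.path.join(checkpoint_dir, name, version)
print(full_path)  # e.g. vit_f16-2/uniform/kinetics400 on POSIX
```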
To implement this change:
# Current:
checkpoint_dir = os.path.join(project_dir_path, "checkpoints", name)
# Proposed: drop the "checkpoints" segment
checkpoint_dir = os.path.join(project_dir_path, name)
ModelCheckpoint(
    dirpath=checkpoint_dir,  # update path to the log folder (including name and version)
    ...
)

# Train
trainer_params = {
    ...,
    "logger": TensorBoardLogger(
        save_dir=checkpoint_dir, name="", version=""
    ),
}
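Since TensorBoardLogger composes its log_dir as save_dir joined with name and version, the checkpoint dirpath can be derived the same way so both components write into one folder. A minimal sketch of the wiring (values are hypothetical; in the real code they come from the run config):

```python
import os

# TensorBoardLogger builds its log_dir as save_dir/name/version,
# so ModelCheckpoint can reuse the same path. All concrete values
# below are hypothetical examples.
project_dir_path = "runs"
checkpoint_dir = os.path.join(project_dir_path, "vit_f16-2")  # {model}_f{frames}-{repeated_aug}
name = "uniform"         # {sampling_method}
version = "kinetics400"  # {dataset}

log_dir = os.path.join(checkpoint_dir, name, version)
print(log_dir)

# In the training script this path would feed both components:
#   logger = TensorBoardLogger(save_dir=checkpoint_dir, name=name, version=version)
#   checkpoint = ModelCheckpoint(dirpath=log_dir, ...)
```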
For reference, TensorBoardLogger's log_dir handling: https://github.com/Lightning-AI/pytorch-lightning/blob/3dcf7130c554f4511c756ccbb4e3a417103d595d/pytorch_lightning/loggers/tensorboard.py#L110-L119