Open 23pointsNorth opened 1 year ago
Hi there.
Regarding YoloNAS + segmentation, I don't have an answer for you at the moment on whether it will be added or not. In theory one could build such a model relatively easily; however, I doubt it would be as accurate/efficient as models purpose-built for low-latency image segmentation.
On the issues you are having: the choice of the BinaryIOU metric and the BinarySegmentationVisualizationCallback visualization callback looks wrong to me. As the names suggest, these classes are defined for the binary segmentation task (a single class). With COCO you are dealing with multiclass segmentation and should therefore use the metric class designed for that task: IoU.
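To illustrate the distinction (this is a standalone sketch, not super_gradients code): multiclass IoU is computed per class over integer label maps, so a binary metric that only distinguishes foreground from background cannot describe the result.

```python
# Illustration only: per-class IoU over flat lists of integer class ids,
# showing why a multiclass result needs one IoU score per class.
def per_class_iou(pred, target, num_classes):
    """pred/target: equal-length sequences of integer class ids."""
    ious = []
    for c in range(num_classes):
        # Intersection: pixels where both prediction and target are class c.
        inter = sum(1 for p, t in zip(pred, target) if p == c and t == c)
        # Union: pixels where either prediction or target is class c.
        union = sum(1 for p, t in zip(pred, target) if p == c or t == c)
        ious.append(inter / union if union else float("nan"))
    return ious

pred = [0, 1, 2, 2, 1, 0]
target = [0, 1, 1, 2, 1, 0]
print(per_class_iou(pred, target, 3))  # [1.0, 0.666..., 0.5]
```

The IoU metric class used below does the equivalent computation (typically via a confusion matrix) across all classes of the dataset.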
For future reference, for multiclass segmentation with the COCO dataset the following may be a good starting point:
import os

from super_gradients import Trainer, setup_device
from super_gradients.training import MultiGPUMode, dataloaders, models
from super_gradients.common.object_names import (
    Losses,
    LRWarmups,
    LRSchedulers,
    Optimizers,
    Metrics,
    Models,
)
from super_gradients.training.metrics.segmentation_metrics import IoU
from super_gradients.training.utils.callbacks import (
    BinarySegmentationVisualizationCallback,
    Phase,
)

BATCH_SIZE = 8
EPOCHS = 5
CHECKPOINT_DIR = "ckpts/"
os.makedirs(CHECKPOINT_DIR, exist_ok=True)

setup_device(device="cuda", multi_gpu=MultiGPUMode.AUTO)

trainer = Trainer(
    experiment_name="segmentation_quick_start", ckpt_root_dir=CHECKPOINT_DIR
)

train_loader = dataloaders.coco_segmentation_train(
    dataset_params={"root_dir": "data/coco"},
    dataloader_params={"batch_size": BATCH_SIZE},
)
valid_loader = dataloaders.coco_segmentation_val(
    dataset_params={"root_dir": "data/coco"},
    dataloader_params={"batch_size": BATCH_SIZE},
)

print("Dataloader parameters:", train_loader.dataloader_params)
print("Dataset parameters:", train_loader.dataset.dataset_params)

model = models.get(
    model_name=Models.PP_LITE_T_SEG,
    num_classes=len(train_loader.dataset.classes),
)

train_params = {
    # ENABLING SILENT MODE
    # "silent_mode": True,
    "average_best_models": True,
    "warmup_mode": LRWarmups.LINEAR_EPOCH_STEP,
    "warmup_initial_lr": 1e-6,
    "lr_warmup_epochs": 3,
    "initial_lr": 5e-4,
    "lr_mode": LRSchedulers.COSINE,
    "cosine_final_lr_ratio": 0.1,
    "optimizer": Optimizers.ADAMW,
    "optimizer_params": {"weight_decay": 0.0001},
    "zero_weight_decay_on_bias_and_bn": True,
    "ema": True,
    "ema_params": {"decay": 0.9, "decay_type": "threshold"},
    "max_epochs": EPOCHS,
    # "mixed_precision": True,  # Allow for CPU/GPU testing
    "loss": Losses.CROSS_ENTROPY,
    "train_metrics_list": [IoU(num_classes=len(train_loader.dataset.classes))],
    "valid_metrics_list": [IoU(num_classes=len(train_loader.dataset.classes))],
    "metric_to_watch": Metrics.IOU,
    "phase_callbacks": [
        BinarySegmentationVisualizationCallback(
            phase=Phase.VALIDATION_BATCH_END, freq=1, last_img_idx_in_batch=4
        )
    ],
    "loss_logging_items_names": ["loss"],
}

trainer.train(
    model=model,
    training_params=train_params,
    train_loader=train_loader,
    valid_loader=valid_loader,
)
The COCO segmentation dataloader always redirects to CoCoSegmentationDataSet and results in the error: 'CoCoSegmentationDataSet' object has no attribute 'samples_dir_suffix'.
Using CoCoSegmentationDataSet directly to create a dataset also always results in the same error.
The in-code documentation for CoCoSegmentationDataSet also appears to be wrong, since the usage it shows does not match the __init__ of CoCoSegmentationDataSet, which requires root_dir.
Could you please suggest a working example for COCO Segmentation dataset?
💡 Your Question
I am trying to recreate a minimal example for training a segmentation network (is YOLO-NAS going to support this in the future?) with COCO dataset.
I am successfully loading the dataset, but having issues during training. Relevant code based on the supervisely segmentation quickstart notebook:
The error message is about a size mismatch between the model output and the dataset target.
Would love some help resolving this; it would also make for a great MVP/example.
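For a mismatch like this, the usual first check (a generic sketch, not tied to super_gradients internals) is the shape convention for multiclass cross-entropy: model logits are typically (N, C, H, W) while the target mask is (N, H, W) with integer class ids, so either a wrong num_classes (C) or unresized masks (H, W) will trigger this kind of error.

```python
# Hypothetical helper: check whether logits and target shapes are
# compatible under the common (N, C, H, W) vs (N, H, W) convention.
def shapes_compatible(logits_shape, target_shape, num_classes):
    n, c, h, w = logits_shape
    # Channel count must equal num_classes; batch and spatial dims must match.
    return c == num_classes and (n, h, w) == tuple(target_shape)

print(shapes_compatible((8, 21, 512, 512), (8, 512, 512), 21))  # True
print(shapes_compatible((8, 21, 512, 512), (8, 256, 256), 21))  # False: masks not resized to match
```

Printing the shapes of one batch's outputs and targets inside a debugger (or a phase callback) and comparing them against len(train_loader.dataset.classes) should narrow down which side is off.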
Versions