lyuwenyu / RT-DETR

[CVPR 2024] Official RT-DETR (RTDETR paddle pytorch), Real-Time DEtection TRansformer, DETRs Beat YOLOs on Real-time Object Detection. 🔥 🔥 🔥
Apache License 2.0

Is there any reason for torch.autocast(enabled=False) in training? #424

Closed: int11 closed this issue 2 months ago

int11 commented 2 months ago

I assume the AMP logic doesn't work even if use_amp=True or scaler is not None, because the torch.autocast enabled flag is always False.

https://github.com/lyuwenyu/RT-DETR/blob/main/rtdetrv2_pytorch/src/solver/det_engine.py#L48

if scaler is not None:
    with torch.autocast(device_type=str(device), cache_enabled=True):
        outputs = model(samples, targets=targets)

    with torch.autocast(device_type=str(device), enabled=False):
        loss_dict = criterion(outputs, targets, **metas)
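
For reference, a quick way to see what autocast is actually doing in each region is to probe torch.is_autocast_enabled() and the dtype a matmul produces; a minimal sketch, assuming a CUDA device:

import torch

# Minimal sketch (assumes a CUDA device): probe the autocast state and the
# dtype that a matmul actually produces in each region.
device = torch.device('cuda')
x = torch.randn(8, 8, device=device)

with torch.autocast(device_type='cuda', cache_enabled=True):
    print(torch.is_autocast_enabled())      # True: this region runs in mixed precision
    print((x @ x).dtype)                    # torch.float16 on CUDA

    with torch.autocast(device_type='cuda', enabled=False):
        print(torch.is_autocast_enabled())  # False: this block is back to float32
        print((x @ x).dtype)                # torch.float32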

I found that training is much faster after fixing the code. Is there any reason the enabled flag is always False? Is that a bug?

lyuwenyu commented 2 months ago

It's used to make sure the loss is computed in pure float32 during the criterion phase. And I'm not sure about the mAP result when enabled=True. Can you check it?
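
For context, this is a common AMP pattern: run the forward pass under autocast, then compute the loss with autocast disabled and the logits cast back to float32, since loss math (log, softmax, exp) is prone to fp16 overflow/underflow. A self-contained sketch with a toy model and data (placeholders, not the repo's actual training loop):

import torch
import torch.nn as nn

device = torch.device('cuda')
model = nn.Linear(16, 4).to(device)              # toy stand-in for the detector
criterion = nn.CrossEntropyLoss()                # toy stand-in for the DETR criterion
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
scaler = torch.cuda.amp.GradScaler()

samples = torch.randn(32, 16, device=device)
targets = torch.randint(0, 4, (32,), device=device)

optimizer.zero_grad()
with torch.autocast(device_type='cuda', cache_enabled=True):
    outputs = model(samples)                     # mixed-precision forward

with torch.autocast(device_type='cuda', enabled=False):
    # outputs leave the autocast region as fp16; cast back so the
    # loss itself is computed in pure float32.
    loss = criterion(outputs.float(), targets)

scaler.scale(loss).backward()                    # scale loss to avoid fp16 gradient underflow
scaler.step(optimizer)                           # unscales grads, then steps the optimizer
scaler.update()

Note that disabling autocast here only changes the precision of the loss computation itself; it is the GradScaler that keeps fp16 gradients from underflowing, which is consistent with enabled=False affecting numerics rather than speed.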

int11 commented 2 months ago

I checked, and it doesn't seem to have much impact on speed. I think it was my mistake.