Closed ghost closed 5 years ago
What is your test command? Are you testing your own trained model? Could you print the shape of x at this line?
File "/home/gaozhihua/program/mmdetection/mmdet/models/backbones/ssd_vgg.py", line 83, in forward
    x = F.relu(layer(x), inplace=True)
import mmcv
from mmcv.runner import load_checkpoint
from mmdet.models import build_detector
from mmdet.apis import inference_detector, show_result
cfg = mmcv.Config.fromfile('configs/pascal_voc/ssd300_voc.py')
cfg.model.pretrained = None
model = build_detector(cfg.model, test_cfg=cfg.test_cfg)
_ = load_checkpoint(model, 'models/ssd300_voc_vgg16_caffe_240e_20181221-2f05dd40.pth')
img = mmcv.imread('/home/gaozhihua/program/mmdetection/data/face_detection/wider_face/0_Parade_marchingband_1_5.jpg')
result = inference_detector(model, img, cfg)
show_result(img, result)
Could you try rescaling the img to (1, 3, 300, 300)?
The test cfg sets resize_keep_ratio=True
But SSD is a fully convolutional detection network... does that affect it?
OK, I'll give it a try...
But the shape is strange; the height and width should be the same.
resize_keep_ratio indicates whether to keep the original aspect ratio when resizing, but for SSD it should be False.
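To see why keeping the aspect ratio can break SSD, you can trace the spatial size through a stack of strided 3x3 convolutions using the standard output-size formula. The six stride-2 stages below are only an illustrative sketch of SSD-style downsampling, not the exact SSD300 layer list:

```python
def conv2d_out(size, kernel, stride=1, pad=0):
    # standard conv output size: floor((size + 2*pad - kernel) / stride) + 1
    return (size + 2 * pad - kernel) // stride + 1

for side in (300, 100):  # 100 mimics the short side of a keep-ratio resize
    s = side
    for _ in range(6):   # six illustrative stride-2 3x3 stages (pad=1)
        s = conv2d_out(s, kernel=3, stride=2, pad=1)
    print(side, '->', s)
# 300 -> 5: a final unpadded 3x3 conv still fits
# 100 -> 2: smaller than a 3x3 kernel, so the next conv fails
```

With a square 300x300 input every stage stays large enough; with a kept-ratio short side the last maps can shrink below the kernel size, which is exactly the failure mode discussed below.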
Yeah, it works now when I don't keep the original ratio... But I'm confused: SSD is a fully convolutional network, so even if I keep the ratio it shouldn't fail...
Here is my guess: if you set resize_keep_ratio=True, the last SSD feature map may become smaller than the kernel size, in which case this error is reported. Here is an example:

import torch
import torch.nn as nn

x = torch.rand(1, 3, 3, 2).cuda()  # width 2 is smaller than the 3x3 kernel
conv = nn.Conv2d(3, 3, 3).cuda()
conv(x)
# RuntimeError: cuDNN error: CUDNN_STATUS_BAD_PARAM
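The same constraint can be reproduced without a GPU, where PyTorch raises a plain RuntimeError instead of a cuDNN error; the shapes here are just illustrative:

```python
import torch
import torch.nn as nn

x = torch.rand(1, 3, 3, 2)             # width 2 < the 3x3 kernel
conv = nn.Conv2d(3, 3, kernel_size=3)  # unpadded 3x3 conv
try:
    conv(x)
except RuntimeError as e:
    print('conv failed:', e)           # kernel larger than actual input
```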
I think that is the problem... and I have opened a PR to fix the inference bug...
I have the same error while training with ConvTranspose2d. I can't figure out the issue; in my case the kernel size is also much smaller than the input size, and no padding is used.
I had this error because of a wrong dtype.
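For reference, a dtype mismatch between the input and the layer weights also produces a RuntimeError; this is a minimal sketch of that separate failure mode, unrelated to the SSD shape issue above:

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(3, 8, kernel_size=3)              # weights are float32 by default
x = torch.rand(1, 3, 32, 32, dtype=torch.float64)  # float64 input
try:
    conv(x)                                        # input/weight dtype mismatch
except RuntimeError as e:
    print('dtype mismatch:', e)

out = conv(x.float())                              # casting the input fixes it
print(out.shape)                                   # torch.Size([1, 8, 30, 30])
```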
SSD training is OK, but when I run inference with the model I get this problem... RetinaNet, however, works fine...