Closed Gavin-debug closed 2 years ago
I think you can use the visualization code of DAB-DETR, as our model is the same as DAB-DETR without denoising. Thank you.
I used dn_dab_deformable_detr. When I ran the visualization code of DAB-DETR, I changed the model config and model path to my own; however, it raised the error "cannot unpack non-iterable NoneType object" at this line: "output = model(image[None])".
I also changed build_DABDETR(args) to build_dab_deformable_detr, as written in main.py.
I followed the traceback and found that when the code reached the line "targets, scalar, label_noise_scale, box_noise_scale, num_patterns = dn_args" in models/dn_dab_deformable_detr/dn_components.py, the variable dn_args was None.
I think I found the reason why the visualization code of DAB-DETR doesn't work on dn_dab_deformable_detr: the visualization code seems to run on the CPU, while build_dab_deformable_detr() places some variables on the GPU. I don't know whether this is correct.
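For what it's worth, the NoneType error itself comes from the unconditional tuple unpack quoted above: training passes dn_args as a 5-tuple, while plain inference passes None. A minimal pure-Python sketch of the failure and a possible guard (the None branch and the returned dict are hypothetical illustrations, not the repo's actual code):

```python
def prepare_for_dn(dn_args):
    """Sketch of the unpack in dn_components.py that fails at inference time.

    Training passes dn_args as a 5-tuple; plain inference passes None, so the
    unpack below raises "cannot unpack non-iterable NoneType object" unless
    guarded. The None branch here is a hypothetical workaround, not repo code.
    """
    if dn_args is None:
        # inference: no denoising groups, behave like plain DAB-Deformable-DETR
        return {'num_dn_groups': 0}
    targets, scalar, label_noise_scale, box_noise_scale, num_patterns = dn_args
    return {
        'num_dn_groups': scalar,
        'label_noise_scale': label_noise_scale,
        'box_noise_scale': box_noise_scale,
        'num_patterns': num_patterns,
    }
```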
Got it. Have you solved this problem?
Not yet. I tried to move the visualization phase to the GPU but failed, and I am now trying to add visualization inside the evaluate() function.
I have solved the problem, thank you for your explanation.
Top. Can we close the issue?
Hello, I have met the same problem as you. How did you solve it? Thank you for your help!
I made a copy of the input and used the dataloader in main.py to do prediction. Here is some of the code.
```python
for image, targets, path in data_loader_val:
    # strip the ("...",) tuple formatting around the returned path
    path_s = str(path).replace("('", "").replace("',)", "")
    path_info = '/archive/hot12/avi_project/dataset/Dataset2.0_lingjian_cocoformat/test/' + path_s
    path_list.append(path_info)

    image = torch.squeeze(image, 0)
    image_gpu = image.to(device)
    for t in targets:
        targets[t] = torch.squeeze(targets[t], 0)
    targets = [{t: to_device(targets[t], device) for t in targets}]

    # ground truth for the visualizer
    # box_label = [id2name[int(item)] for item in targets[0]['labels']]
    gt_dict = {
        'boxes': targets[0]['boxes'],
        'image_id': targets[0]['image_id'],
        'size': targets[0]['size'],
        'box_label': targets[0]['labels'],
    }
    # n_g, b_g = gt_dict['boxes'].shape
    # for i in range(n_g):
    #     gt_list = []
    #     for j in range(b_g):
    #         gt_list.append(gt_dict['boxes'][i][j].cpu())
    #     gt_list.append(gt_dict['image_id'][0].cpu())
    #     gt_list.append(gt_dict['box_label'][i].cpu())
    #     gt_info.append(gt_list)
    vslzr = COCOVisualizer()
    gt_dir = '/home/wgd@corp.sse.tongji.edu.cn/DN-DETR/visual/gt'
    vslzr.visualize(image, gt_dict, savedir=gt_dir)

    # prediction
    model.eval()
    output = model(image_gpu[None])
    output = output[0]
    output = postprocessors['bbox'](output, torch.Tensor([[1.0, 1.0]]).to(device))[0]

    threshold = 0.25  # confidence threshold
    scores = output['scores']
    labels = output['labels']
    boxes = box_ops.box_xyxy_to_cxcywh(output['boxes'])
    select_mask = scores > threshold
    # box_label = [id2name[int(item)] for item in labels[select_mask]]
    pred_dict = {
        'boxes': boxes[select_mask],
        'size': targets[0]['size'],
        'conf': scores[select_mask],  # mask the scores so they stay aligned with the kept boxes
        'image_id': targets[0]['image_id'],
        'box_label': labels[select_mask],
    }
    # n, b = pred_dict['boxes'].shape
    # for i in range(n):
    #     pred_list = []
    #     for j in range(b):
    #         pred_list.append(pred_dict['boxes'][i][j].cpu())
    #     pred_list.append(pred_dict['conf'][i].cpu())
    #     pred_list.append(pred_dict['image_id'][0].cpu())
    #     pred_list.append(pred_dict['box_label'][i].cpu())
    #     pred_info.append(pred_list)
    pred_dir = '/home/wgd@corp.sse.tongji.edu.cn/DN-DETR/visual/pred'
    vslzr.visualize(image, pred_dict, savedir=pred_dir)
```
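As context for the boxes line above: postprocessors['bbox'] returns boxes in xyxy corner format, while the visualizer appears to expect cxcywh (center, width, height), hence the conversion. A plain-Python sketch of what box_ops.box_xyxy_to_cxcywh computes per box:

```python
def box_xyxy_to_cxcywh(box):
    # (x0, y0, x1, y1) corners -> (center_x, center_y, width, height)
    x0, y0, x1, y1 = box
    return ((x0 + x1) / 2.0, (y0 + y1) / 2.0, x1 - x0, y1 - y0)
```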
NOTICE: I changed the return value of the dataset, so there is an extra path in the for statement; you can just ignore it. IMPORTANT: when doing visualization, you need to change line 65 "if num_patterns == 0" to "if num_patterns == None" in dn_components.py.
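The reason for that edit can be illustrated with a small sketch: at inference num_patterns arrives as None rather than 0, so the `== 0` check never fires and None leaks into later arithmetic. (The default of 1 below is my assumption about what the original branch assigns; verify against the actual body in dn_components.py.)

```python
def normalize_num_patterns(num_patterns):
    # Patched check: at inference num_patterns is None, which `== 0` misses.
    if num_patterns is None:   # was: if num_patterns == 0
        num_patterns = 1       # assumed default; check dn_components.py line 65
    return num_patterns
```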
Thanks for your help. I have solved the problem.
It seems something goes wrong when using the visualization code of DAB-DETR.