Durobert opened this issue 2 years ago
@Durobert You can either use UFLDv1 directly, or use both the row and column anchors together as an ensemble. On the Curvelanes dataset, we use both row and column anchors and aggregate them to produce the final predictions.
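To make the aggregation idea concrete, here is a hypothetical sketch of merging row-anchor and column-anchor outputs into one lane point set. The function name `aggregate_anchors` and the data layout (each lane as a list of `(x, y)` points) are illustrative assumptions, not the repository's actual API; the real implementation lives in the UFLDv2 codebase.

```python
# Hypothetical sketch (not the repo's real code): row anchors predict x at
# fixed y positions, column anchors predict y at fixed x positions; the
# ensemble simply merges the two point sets per lane and orders them by y.

def aggregate_anchors(row_pred, col_pred):
    """row_pred / col_pred: lists of lanes, each lane a list of (x, y) points
    from the row anchors and column anchors respectively. Returns one merged,
    y-sorted point list per lane."""
    lanes = []
    for row_lane, col_lane in zip(row_pred, col_pred):
        merged = sorted(set(row_lane) | set(col_lane), key=lambda p: p[1])
        lanes.append(merged)
    return lanes

# Toy example: one lane seen by both anchor types.
row_lanes = [[(100, 200), (110, 250), (120, 300)]]
col_lanes = [[(105, 225), (115, 275)]]
print(aggregate_anchors(row_lanes, col_lanes))
# → [[(100, 200), (105, 225), (110, 250), (115, 275), (120, 300)]]
```

Column anchors contribute points in the near-horizontal parts of a lane where row anchors are sparse, which is why the combination helps on curvy datasets like Curvelanes.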
@cfzd If we have only two lanes, I wonder whether you have compared the two methods. What is the difference between them, and which method is best?
@Durobert I think of course the v2 method should be better.
@cfzd May I ask why you don't use the combined row-plus-column-anchor scheme on the CULane dataset (as you do on Curvelanes), but instead use the same processing as on the Tusimple dataset? What is the reasoning behind this?
@578223592 You can think of it this way: Tusimple and CULane are arguably simpler than Curvelanes (they have only 4 lanes, while Curvelanes can have up to 10). Another reason is that we wanted v2 to keep the same speed as v1 on Tusimple and CULane. That said, if you also use the combined row-and-column form on Tusimple and CULane, the performance may well be better.
@cfzd

```python
# Imports added for completeness; merge_config, dist_print, get_model and
# pred2coords are helpers from the UFLDv2 repository (demo.py / utils).
import cv2
import torch
import torchvision.transforms as transforms

if __name__ == "__main__":
    torch.backends.cudnn.benchmark = True
    args, cfg = merge_config()
    cfg.batch_size = 1
    print('setting batch_size to 1 for demo generation')
    dist_print('start testing...')
    assert cfg.backbone in ['18', '34', '50', '101', '152', '50next', '101next', '50wide', '101wide']

    if cfg.dataset == 'CULane':
        cls_num_per_lane = 18
    elif cfg.dataset == 'Tusimple':
        cls_num_per_lane = 56
    else:
        raise NotImplementedError

    net = get_model(cfg)
    state_dict = torch.load(cfg.test_model, map_location='cpu')['model']
    compatible_state_dict = {}
    for k, v in state_dict.items():
        if 'module.' in k:
            compatible_state_dict[k[7:]] = v  # strip the DataParallel 'module.' prefix
        else:
            compatible_state_dict[k] = v
    net.load_state_dict(compatible_state_dict, strict=False)
    net.eval()

    img_transforms = transforms.Compose([
        transforms.ToPILImage(),
        transforms.Resize((cfg.train_height, cfg.train_width)),
        transforms.ToTensor(),
        transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)),
    ])

    img_path = "D:/autodrive/images/00010.png"
    img = cv2.imread(img_path)
    img_h, img_w = img.shape[0], img.shape[1]
    im0 = img.copy()
    img = img_transforms(img)
    img = img.to('cuda:0')
    img = torch.unsqueeze(img, 0)

    with torch.no_grad():
        pred = net(img)
    coords = pred2coords(pred, cfg.row_anchor, cfg.col_anchor,
                         original_image_width=img_w, original_image_height=img_h)
    for lane in coords:
        for coord in lane:
            cv2.circle(im0, coord, 5, (0, 255, 0), -1)
    cv2.imshow('demo', im0)
    cv2.waitKey(0)
```
This is my modified single-image inference script, but the detected points do not line up with the lane markings. It seems you apply some affine transform when loading the image — could you explain?
@Gannis246 This is because our preprocessing also includes a top_crop step, which simply removes the useless sky pixels. You can refer to this part of the code: https://github.com/cfzd/Ultra-Fast-Lane-Detection-v2/blob/c80276bc2fd67d02579b6eeb57a76cb5a905aa3d/demo.py#L87-L91 https://github.com/cfzd/Ultra-Fast-Lane-Detection-v2/blob/c80276bc2fd67d02579b6eeb57a76cb5a905aa3d/data/dataset.py#L30-L32
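A hedged sketch of what the linked top_crop step does, assuming the scheme the comment describes: the image is resized to a height larger than `train_height` (roughly `train_height / crop_ratio`) and then only the bottom `train_height` rows are kept, discarding the sky. The function name `top_crop` and the concrete numbers are illustrative assumptions; the actual logic is in the linked demo.py and dataset.py lines.

```python
# Illustrative sketch (not the repo's exact code) of cropping away the top
# (sky) rows after an oversized resize, keeping the bottom train_height rows.
import numpy as np

def top_crop(img, train_height):
    """img: an HxWx3 array already resized to a height of roughly
    train_height / crop_ratio; keep only the bottom train_height rows."""
    return img[-train_height:, :, :]

# e.g. train_height=320, crop_ratio=0.8 → resize to 400 rows, crop back to 320
resized = np.zeros((int(320 / 0.8), 1600, 3), dtype=np.uint8)
cropped = top_crop(resized, 320)
print(cropped.shape)  # → (320, 1600, 3)
```

Any inference script that skips this crop will feed the model a differently framed image, which is why predicted points drift off the lane markings; the same crop (and its inverse, when mapping coordinates back) must be applied at test time.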
Hi, if the images are shot from an angle with no sky in them, but the lower part contains useless elements, how should this be handled?
What is the post-processing for the Curvelanes dataset? It seems to differ from the post-processing used for the other datasets.
If we have only two lanes, column anchors can't be used — is that the same as UFLDv1?