Closed daeunni closed 1 year ago
@daeunni I suppose you want to change 288 to 360? You'll have to change all 288-related things, including data processing (L5-6) and the flatten size (L65) for segmentation-based methods.
Train & test settings as well (L27, L50).
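To illustrate the scope of the change, here is a hypothetical sketch of the coordinated edits a config would need (the field names below are assumptions for illustration, not the exact pytorch-auto-drive config keys):

```python
# Assumed config sketch: all three places must agree on the new size.
input_size = (360, 640)   # was (288, 800); train & test must both use this

# Data processing must resize images AND segmentation labels to the same size:
train_augmentation = dict(size=input_size)
test_augmentation = dict(size=input_size)

# Segmentation-based methods flatten a downsampled feature map, so the
# flatten size must be recomputed from the new spatial dims. Example with
# an assumed 8x downsampling backbone:
flatten_size = (input_size[0] // 8) * (input_size[1] // 8)
print(flatten_size)  # 45 * 80 = 3600
```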
Yes! I want to change the CULane model's current default input size input_size=(288, 800)
to input_size=(360, 640)
(same as the TUSimple model's).
Okay, thank you for your quick reply. Is there any problem with the model or its performance after changing the input size?
That wasn't tested properly, though I suspect the influence should be small. The main problem comes with the change of aspect ratio.
Oh thank you for your quick reply :))
However, if I change CULane's input size from (288, 800) to (360, 640), I think the aspect ratio changes from 1 : 2.77
(= 800/288) to 1 : 1.77
(= 640/360). Is that okay? :0
I'm new to Lane Detection, so if my thinking about the aspect ratio is wrong, plz tell me.
Yes, the difference between the original image size (590/1640) and the training size (360/640) could introduce unnatural distortions into the image contents. Ideally, the aspect ratio should stay close to the original.
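The numbers in the thread can be checked directly; this small pure-Python snippet shows why the default (288, 800) matches CULane's raw frames while (360, 640) does not:

```python
# Aspect-ratio check for the sizes discussed above (height, width order).
orig_h, orig_w = 590, 1640      # CULane raw frames
old_h, old_w = 288, 800         # default CULane training size
new_h, new_w = 360, 640         # proposed (TuSimple-style) training size

print(round(orig_w / orig_h, 2))  # 2.78 -> original aspect ratio
print(round(old_w / old_h, 2))    # 2.78 -> matches: little distortion
print(round(new_w / new_h, 2))    # 1.78 -> squeezes the image horizontally
```

So resizing CULane frames to (360, 640) compresses the width by roughly a factor of 1.56 relative to the height.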
Oh okay, I understand :) thank you for your kind reply! I'll ask more questions if I can't solve this problem :D
Hi, thank you for your impressive work! :D
Btw, I'd like to unify the model input sizes for the TUSimple and CULane datasets, and I thought I could do that by modifying the part below in the cfg file.
https://github.com/voldemortX/pytorch-auto-drive/blob/2b0d5ec5f7536c9d2d2b6d8498718a8fca2ab276/configs/lane_detection/baseline/erfnet_culane.py#L27
However, when I change the input size in the config file, it raises an error like this:
RuntimeError: input and target batch or spatial sizes don't match: target [20, 288, 800], input [20, 5, 360, 640]
How can I solve this? Does anyone know how to unify the TUSimple & CULane models' input sizes? Best,
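The error itself is a shape mismatch in the segmentation loss: the model now outputs at the new size, but the labels are still produced at the old (288, 800) size. A minimal sketch reproducing and fixing it (the shapes are taken from the error message; the resizing step stands in for updating the data-processing config, and is an assumption, not the repo's exact code):

```python
import torch
import torch.nn.functional as F

# Shapes from the error: logits at the new size, labels still at the old size.
logits = torch.randn(20, 5, 360, 640)           # input [20, 5, 360, 640]
target = torch.randint(0, 5, (20, 288, 800))    # target [20, 288, 800]

# F.cross_entropy(logits, target) would raise the RuntimeError here, because
# the spatial sizes (360, 640) vs (288, 800) don't match.

# Fix: resize the labels too (nearest-neighbor, to keep class indices intact).
target_resized = F.interpolate(
    target.unsqueeze(1).float(), size=(360, 640), mode="nearest"
).squeeze(1).long()

loss = F.cross_entropy(logits, target_resized)  # now the shapes agree
```

In practice the fix belongs in the config's data pipeline (so labels are generated at the new size), rather than resizing them at loss time.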