ajay1606 closed this issue 2 years ago.
You can refer to this issue: https://github.com/Turoad/lanedet/issues/49. Try changing `sample_y`; it can be `sample_y = range(ori_img_h - 1, cut_height, -20)`.
I will update the config later.
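The formula above ties `sample_y` to the image geometry; a minimal plain-Python sketch (names taken from the config) of how the range follows from `ori_img_h` and `cut_height`:

```python
# sample_y lists the y-coordinates (in original-image space) at which lane
# points are sampled, from the bottom row up to (not including) the crop line.
def make_sample_y(ori_img_h, cut_height, step=20):
    # bottom row is ori_img_h - 1; stop before cut_height, stepping upward
    return range(ori_img_h - 1, cut_height, -step)

# CULane default: 590-high images cropped at y = 230
culane = make_sample_y(590, 230)      # range(589, 230, -20)
# Custom 1208-high images cropped at y = 240
custom = make_sample_y(1208, 240)     # range(1207, 240, -20)

print(list(culane)[:3])  # [589, 569, 549]
print(list(custom)[:3])  # [1207, 1187, 1167]
```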
@Turoad Thanks for your kind response,
# sample_y = range(589, 230, -20)
sample_y = range(1207, 240, -20)
img_height = 1208
img_width = 1920
cut_height = 240
ori_img_h = 1208
ori_img_w = 1920
The above modifications to resa50_culane.py result in the following error:
Traceback (most recent call last):
File "tools/detect.py", line 87, in <module>
process(args)
File "tools/detect.py", line 74, in process
detect = Detect(cfg)
File "tools/detect.py", line 24, in __init__
load_network(self.net, self.cfg.load_from)
File "/home/ajay/lanedet/lanedet/utils/net_utils.py", line 48, in load_network
net.load_state_dict(pretrained_model['net'], strict=True)
File "/home/ajay/miniconda3/envs/lanedet/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1223, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for DataParallel:
size mismatch for module.heads.exist.fc9.weight: copying a param with shape torch.Size([128, 4500]) from checkpoint, the shape in current model is torch.Size([128, 45300]).
Could you please help me out with this?
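For what it's worth, a size mismatch like this is expected when the input resolution changes: the exist head ends in a fully connected layer whose input width is the flattened feature-map size, and that scales with the configured image dimensions, so a checkpoint trained at one resolution cannot be loaded strictly at another. A minimal arithmetic sketch (the stride and channel count here are assumptions for illustration, not the actual RESA internals):

```python
# The fc input size is channels * H_feat * W_feat of the last feature map,
# so it grows with the configured input resolution.
def fc_in_features(img_h, img_w, channels, stride):
    # assumed: feature map downsampled by `stride` in both dimensions
    return channels * (img_h // stride) * (img_w // stride)

small = fc_in_features(288, 800, 5, 16)   # training-time resolution -> 4500
large = fc_in_features(1208, 800, 5, 16)  # custom resolution -> larger
print(small, large)  # differing sizes -> load_state_dict size mismatch
```

With these assumed numbers the training-time size happens to come out to 4500, matching the checkpoint's `fc9.weight` here, but the point is only that the fc width is resolution-dependent.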
@Turoad Any hints please!
@Turoad Referring to this issue: https://github.com/Turoad/lanedet/issues/49, I have tried condlane, tuning a few parameters as mentioned there.
# sample_y = range(590, 270, -8)
sample_y = range(1208, 302, -8)
#batch_size = 8
batch_size = 1
# pos_shape=(batch_size, 10, 25),
pos_shape=(batch_size, 38, 25),
#location_configs=dict(size=(batch_size, 1, 80, 200), device='cuda:0')
location_configs=dict(size=(batch_size, 1, 302, 200), device='cuda:0')
# img_height = 320
# img_width = 800
# cut_height = 0
# ori_img_h = 590
# ori_img_w = 1640
img_height = 1208
img_width = 800
cut_height = 0
ori_img_h = 1208
ori_img_w = 1920
# img_scale = (800, 320)
# crop_bbox = [0, 270, 1640, 590]
# mask_size = (1, 80, 200)
img_scale = (800, 1208)
crop_bbox = [0, 270, 1920, 1208]
mask_size = (1, 302, 200)
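The condlane shapes above all appear to follow the backbone strides (assuming ÷4 for the mask and location maps and ÷32 for the position grid; treat the strides as an assumption):

```python
import math

def condlane_shapes(img_h, img_w, batch_size, mask_stride=4, pos_stride=32):
    # mask / location maps at 1/4 resolution, position grid at 1/32 (assumed)
    mask_h = img_h // mask_stride
    mask_w = img_w // mask_stride
    pos_h = math.ceil(img_h / pos_stride)
    pos_w = img_w // pos_stride
    return dict(
        pos_shape=(batch_size, pos_h, pos_w),
        mask_size=(1, mask_h, mask_w),
        location_size=(batch_size, 1, mask_h, mask_w),
    )

print(condlane_shapes(320, 800, 8))   # default: pos (8, 10, 25), mask (1, 80, 200)
print(condlane_shapes(1208, 800, 1))  # custom: pos (1, 38, 25), mask (1, 302, 200)
```

Both the default config values and the tuned ones above fall out of the same two strides, which suggests only `img_height`/`img_width` (and the derived shapes) need to change together.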
But I obtained results like the following:
Would you please let me know if there are any params that need further tuning. Appreciate your response.
Regards, Ajay
I am confused about which config you're using.
Maybe you can try resizing your image to (1640, 590)
to keep the same shape as RESA, and test.
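A minimal sketch of the suggested preprocessing, resizing a custom 1920x1208 frame to CULane's 1640x590 before inference (nearest-neighbour with NumPy for illustration only; in practice `cv2.resize` with bilinear interpolation would be the usual choice):

```python
import numpy as np

def resize_nearest(img, out_h, out_w):
    """Nearest-neighbour resize of an (H, W, C) image array."""
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h   # source row for each output row
    cols = np.arange(out_w) * w // out_w   # source col for each output col
    return img[rows[:, None], cols]

frame = np.zeros((1208, 1920, 3), dtype=np.uint8)  # custom camera frame
culane_frame = resize_nearest(frame, 590, 1640)    # shape the CULane configs expect
print(culane_frame.shape)  # (590, 1640, 3)
```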
@Turoad The above results were from the condlane config; as mentioned in my previous comment, I followed this issue: https://github.com/Turoad/lanedet/issues/49.
And thanks for your input: keeping the image size at (1640, 590), I am able to see detection working well. One small query though: the number of lanes detected by the condlane and resa models is quite different, is that common? By comparison, condlane appears to be more consistent, and we are looking to continue with condlane instead of resa.
Thank you so much for your kind input always! Appreciate it.
Regards, Ajay
I will close this issue for now. Feel free to open a new issue if you have other questions.
Hello,
I have tried testing the CULane dataset with resa, and it works well with the example video_example/05081544_0305/
with the following image configuration:
img_height = 288
img_width = 800
cut_height = 240
ori_img_h = 590
ori_img_w = 1640
But it fails on a custom image with this configuration:
img_height = 288
img_width = 800
cut_height = 240
ori_img_h = 1208  # was 590
ori_img_w = 1920  # was 1640
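One thing to note (an assumption on my part, consistent with the `sample_y = range(ori_img_h - 1, cut_height, -20)` formula above): `cut_height = 240` was tuned for 590-high CULane frames, and the same absolute value crops a much smaller fraction of a 1208-high frame. A quick arithmetic sketch of a proportional crop:

```python
# cut_height = 240 removes ~41% of a 590-high CULane frame; keeping the same
# absolute value on a 1208-high frame removes only ~20% (assumption: the crop
# should cover roughly the same image fraction on a taller frame).
def scaled_cut_height(cut, ref_h, new_h):
    return round(cut * new_h / ref_h)

print(round(240 / 590, 2))                 # 0.41 of the CULane frame is cropped
print(scaled_cut_height(240, 590, 1208))   # 491 for a 1208-high frame
```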
With the above parameters: [custom image screenshot]
With the default parameters: [custom image screenshot]
Could you please assist me with which params need to be tuned?
Appreciate any response.
Regards, Ajay