OpenDriveLab / OpenLane

[ECCV 2022 Oral] OpenLane: Large-scale Realistic 3D Lane Dataset
Apache License 2.0

Possible unreasonable evaluation methods #54

Closed: qiaozhijian closed this issue 1 year ago

qiaozhijian commented 1 year ago

If I force the prediction to be equal to the ground truth, I can't get an F-score of 1.

I added this code before the "self.bench" function:

pred_lanes = gt_lanes
pred_category = gt_category

However, I get the following result for the night case.

===> Evaluation on validation set:
laneline F-measure 0.84169872
laneline Recall 0.99912995
laneline Precision 0.72712661
laneline Category Accuracy 0.99956497
laneline x error (close) 0.0017959209 m
laneline x error (far) 0.003160552 m
laneline z error (close) 0.00084101858 m
laneline z error (far) 0.0027049253 m

RicardLee commented 1 year ago

Hi, thanks for raising this question. We will reproduce this asap.

RicardLee commented 1 year ago

Hi, use copy.deepcopy() instead of =. A plain assignment only aliases the ground-truth objects, so any in-place modification of the prediction during evaluation also alters the ground truth.
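
For reference, a minimal, self-contained Python sketch of the aliasing behaviour behind this fix (the lane values are made up for illustration and are not OpenLane data):

import copy

# With a plain assignment, the "prediction" and the ground truth are the
# same list objects, so an in-place edit to the prediction silently edits
# the ground truth as well.
gt_lanes = [[0.0, 1.0, 2.0], [3.0, 4.0, 5.0]]
pred_lanes = gt_lanes
pred_lanes[0][0] = 99.0
print(gt_lanes[0][0])  # prints 99.0: the ground truth changed too

# With deepcopy, the prediction is a fully independent copy.
gt_lanes = [[0.0, 1.0, 2.0], [3.0, 4.0, 5.0]]
pred_lanes = copy.deepcopy(gt_lanes)
pred_lanes[0][0] = 99.0
print(gt_lanes[0][0])  # prints 0.0: the ground truth is untouched

In the sanity check above, that means replacing pred_lanes = gt_lanes and pred_category = gt_category with copy.deepcopy(gt_lanes) and copy.deepcopy(gt_category).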

qiaozhijian commented 1 year ago

OK, thanks.