LiangliangNan / PolyFit

Polygonal Surface Reconstruction from Point Clouds
https://3d.bk.tudelft.nl/liangliang/publications/2017/polyfit/polyfit.html
GNU General Public License v3.0

Professor, when reading the code for the paper, I noticed that when setting up the binary linear program you "choose a better scale" for the energy terms (see below). Why is that done? #30

Closed zhuhaipeng-byte closed 2 years ago

zhuhaipeng-byte commented 3 years ago

Sorry to trouble you, Professor. I have a question about the code for the paper: at the end, when setting up the binary linear program, you choose a different scale for the energy terms (shown below). I would like to know why you do that; I assume there must be a reason.

The code:

```cpp
//double coeff_data_fitting = Method::lambda_data_fitting / total_points;
//double coeff_coverage = Method::lambda_model_coverage / model->bbox().area();
//double coeff_complexity = Method::lambda_model_complexity / double(adjacency.size());

// choose a better scale (i.e., scale the terms appropriately)
double coeff_data_fitting = Method::lambda_data_fitting;
double coeff_coverage = total_points * Method::lambda_model_coverage / model->bbox().area();
double coeff_complexity = total_points * Method::lambda_model_complexity / double(adjacency.size());
```

LiangliangNan commented 3 years ago

They are identical in theory: all three weights are simply multiplied by the total number of points. This won't affect the optimization.

This was introduced to address a potential numerical issue in practice: the weights are in the range [0, 1], and dividing them by a very large total point count produces values extremely close to zero, which a floating-point number may not be able to represent with sufficient precision. Multiplying all the terms by the number of points instead keeps the coefficients in a safer range and avoids this potential numerical issue.
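For illustration, here is a minimal, self-contained sketch (not the PolyFit implementation; the magnitudes and the names `total_points`, `bbox_area`, and `num_edges` are made-up placeholders) showing that multiplying all three coefficients by the point count only rescales the objective, so the minimizer of the binary program is unchanged, while the coefficients stay away from the hard-to-represent near-zero range:

```cpp
#include <cstdio>

int main() {
    // Hypothetical values, chosen only to illustrate the scaling argument.
    const double lambda_data_fitting     = 0.43;
    const double lambda_model_coverage   = 0.27;
    const double lambda_model_complexity = 0.30;
    const double total_points = 5.0e6;   // dense point clouds can easily be this large
    const double bbox_area    = 120.0;   // placeholder: bounding-box area of the model
    const double num_edges    = 800.0;   // placeholder: number of adjacency entries

    // Original formulation: divide the data-fitting weight by the point count.
    double a0 = lambda_data_fitting / total_points;   // ~8.6e-8, very close to zero
    double b0 = lambda_model_coverage / bbox_area;
    double c0 = lambda_model_complexity / num_edges;

    // Scaled formulation: multiply every term by total_points instead.
    double a1 = lambda_data_fitting;
    double b1 = total_points * lambda_model_coverage / bbox_area;
    double c1 = total_points * lambda_model_complexity / num_edges;

    // The second set is exactly the first multiplied by total_points, so the
    // objective of the binary program is scaled by a constant factor and its
    // minimizer does not change.
    printf("original: %g %g %g\n", a0, b0, c0);
    printf("scaled:   %g %g %g\n", a1, b1, c1);
    printf("ratios:   %g %g %g\n", a1 / a0, b1 / b0, c1 / c0);  // all equal total_points
    return 0;
}
```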