gottfriedm opened 9 months ago
In equation (17) of the paper, the depth differences are scaled by the z-component n_z of the normal vector. The paper reads:

[equation (17) screenshot not reproduced]

I wondered why this makes sense, yet reading the code I cannot find where the scaling by n_z is applied. In the Python NumPy script, lines 290 and 291 construct the weights as

```python
wu = sigmoid((A2 @ z) ** 2 - (A1 @ z) ** 2, k)
wv = sigmoid((A4 @ z) ** 2 - (A3 @ z) ** 2, k)
```

i.e., the sigmoid is applied directly to the difference of the squared one-sided derivatives, with no scaling by n_z. Am I missing something?

Hi, sorry for the late reply; I just saw the new issues.

Multiplying by n_z is a numerical trick we found to perform quite well when designing the weight functions (the same is true for the data term, i.e., the depth-normal PDEs (3) and (8)).

In the Python code, the scaling by n_z is applied when constructing the four matrices A1~A4: https://github.com/xucao-42/bilateral_normal_integration/blob/8c6aa33161943ec729643a2044b58118cdf1b3b7/bilateral_normal_integration_numpy.py#L62
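To illustrate the point about A1~A4, here is a minimal 1-D NumPy sketch (not the repository's actual construction, which uses sparse matrices over a 2-D pixel grid; the names `D_fwd`, `D_bwd`, and the toy values are illustrative). It shows that if each row of the one-sided difference matrices is pre-multiplied by n_z, the weights computed as in lines 290-291 already contain the n_z scaling, even though no n_z appears at the call site:

```python
import numpy as np

def sigmoid(x, k):
    # Logistic function 1 / (1 + exp(-k * x)), as used for the weights.
    return 1.0 / (1.0 + np.exp(-k * x))

# Toy 1-D depth map with n pixels (illustrative values).
n = 5
z = np.array([1.0, 1.2, 1.1, 0.9, 1.0])
nz = np.array([0.9, 0.8, 0.95, 0.7, 0.85])  # z-component of the normals

# Plain one-sided difference matrices (forward / backward), boundaries left zero.
D_fwd = np.zeros((n, n))
D_bwd = np.zeros((n, n))
for i in range(n - 1):
    D_fwd[i, i], D_fwd[i, i + 1] = -1.0, 1.0
for i in range(1, n):
    D_bwd[i, i - 1], D_bwd[i, i] = -1.0, 1.0

# Fold the n_z scaling into the matrices at construction time,
# as the reply describes for A1~A4: each row i is multiplied by nz[i].
A1 = nz[:, None] * D_fwd
A2 = nz[:, None] * D_bwd

k = 2.0
# Weight computed exactly in the form of the script's lines 290-291:
wu = sigmoid((A2 @ z) ** 2 - (A1 @ z) ** 2, k)

# Equivalent explicit form: scale the plain one-sided derivatives by n_z
# inside the sigmoid instead.
wu_explicit = sigmoid((nz * (D_bwd @ z)) ** 2 - (nz * (D_fwd @ z)) ** 2, k)
assert np.allclose(wu, wu_explicit)
```

Since (A1 @ z)[i] = nz[i] * (D_fwd @ z)[i] (and likewise for A2), the two formulations agree elementwise, which is why the weight-construction lines need no visible n_z.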