Closed brian220 closed 3 years ago
Hi @brian220. Nice observation. I can only offer an educated guess here. The deficiency probably comes from this heuristic initialization: the initial point cloud is chosen so that, when it collapses along z, it covers the whole image plane (x-y).
Then, theoretically, you should obtain the same result if you initialize from an isotropic shape such as a sphere. I actually tried this before and it gave poorer results, but if you need something robust to orientation, it is the way to go.
Code bonus:
import numpy as np

def sample_spherical(n_points):
    # Draw isotropic directions (randn rather than rand, so normalized
    # points are uniform on the sphere instead of biased toward the
    # corners of the cube).
    vec = np.random.randn(n_points, 3)
    vec /= np.linalg.norm(vec, axis=1, keepdims=True)
    # Scale to radius 0.3 and shift to the same center as the default
    # initialization.
    pc = vec * .3 + np.array([[6.462339e-04, 9.615256e-04, -7.909229e-01]])
    return pc.astype('float32')
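As a quick sanity check (a self-contained sketch; I use np.random.randn here so the directions are uniform on the sphere, which is an assumption about the intended behavior), you can verify that every sampled point lies on a sphere of radius 0.3 around the offset center:

```python
import numpy as np

def sample_spherical(n_points):
    # Isotropic directions, normalized onto the unit sphere.
    vec = np.random.randn(n_points, 3)
    vec /= np.linalg.norm(vec, axis=1, keepdims=True)
    center = np.array([[6.462339e-04, 9.615256e-04, -7.909229e-01]])
    return (vec * .3 + center).astype('float32')

pc = sample_spherical(2048)
center = np.array([6.462339e-04, 9.615256e-04, -7.909229e-01])
radii = np.linalg.norm(pc - center, axis=1)

print(pc.shape)                              # (2048, 3)
print(np.allclose(radii, .3, atol=1e-4))     # True: all points on the surface
```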
Hi @justanhduc,
Thanks for your reply; it gives me a deeper understanding of the network.
I will try different initializations and check the difference.
Thank you very much!
Hi, thanks for the nice implementation.
I have trained your model on my own images and point clouds, and I found that the reconstruction results differ across point cloud datasets. For example, a dataset in which the chair point clouds stand on the x-y plane gives better results (more detail) than one in which they stand on the y-z plane. It seems that the orientation of the ground-truth point clouds may influence the reconstruction quality. Should I adjust the code for different orientations of the ground-truth data? It would be very helpful if you could give me some suggestions.
Thank you very much!
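If the two datasets differ only by which plane the shapes stand on, one option is to rotate the ground truth into the orientation the initialization assumes, rather than changing the code. A minimal sketch (the rotation axis and sign here are assumptions about how the two conventions relate; adjust them for your data):

```python
import numpy as np

def rotate_yz_to_xy(pc):
    """Rotate a point cloud so shapes standing on the y-z plane
    stand on the x-y plane instead (a 90-degree rotation about y).
    The exact axis/sign depends on your dataset's convention."""
    R = np.array([[0., 0., -1.],
                  [0., 1.,  0.],
                  [1., 0.,  0.]])
    # Apply R to every point (rows of pc).
    return pc @ R.T

# Toy check: a point on the x-axis ends up on the z-axis.
p = np.array([[1., 0., 0.]])
print(rotate_yz_to_xy(p))  # [[0. 0. 1.]]
```

Applying this once to the y-z dataset before training would let you reuse the same heuristic initialization for both orientations.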