Closed SunshineDou closed 5 years ago
Given that your questions are quite basic, I strongly encourage you to go through the demo. For your second concern, the short answer is NO, since the two models output normals in different formats - one uses a 16-bit PNG and the other uses HDF5.
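To illustrate why the two normal formats are not interchangeable, here is a minimal Python sketch. The millimeter scale for 16-bit depth/normal PNGs and the float [-1, 1] component range for HDF5 normals are common conventions assumed here for illustration, not confirmed details of either repository:

```python
import math

# Assumed conventions (hypothetical, not taken from the repos):
# - a 16-bit PNG stores each sample as an integer in [0, 65535],
#   so real-valued data must be quantized with some scale factor;
# - an HDF5 array stores normal components directly as floats in [-1, 1].

def png16_to_normal_component(raw: int) -> float:
    """Decode a uint16 PNG sample back to a normal component in [-1, 1],
    assuming the encoder mapped [-1, 1] linearly onto [0, 65535]."""
    return raw / 65535.0 * 2.0 - 1.0

def renormalize(nx: float, ny: float, nz: float):
    """Renormalize a surface-normal vector (e.g. read from an HDF5
    float array) to unit length."""
    n = math.sqrt(nx * nx + ny * ny + nz * nz)
    return (nx / n, ny / n, nz / n)

# A raw PNG sample of 65535 decodes to the component value 1.0,
# whereas an HDF5 file would already contain 1.0 as a float.
print(png16_to_normal_component(65535))
print(renormalize(0.0, 0.0, 2.0))
```

Because of this quantization and encoding difference, normals produced by one pipeline generally need an explicit conversion step before they can be consumed by the other.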
@yindaz Thank you very much for your reply. I understand now; I was not familiar with this before and had not looked at demo_realsense.m carefully. I studied demo_realsense.m and the project today, and now I've got it. Thank you very much.
Hello, I want to run depth completion on my own depth images, but I am new to this field. As I understand it, I should first obtain the estimated surface-normal image, the boundary image, and the occlusion-weight image, and then use them to generate the completed depth image. How do I get the occlusion and surface-normal predictions? I ran main_test_bound_realsense.lua to generate the boundary with the following command: th main_test_bound_realsense.lua, but it failed. What is the problem? Should I add some parameters? I have also run https://github.com/yindaz/surface_normal, but I have a doubt: if I generate surface normals with it, can I use the generated surface-normal images directly for depth completion, given that it uses a different model? Thanks very much.