Shubhendu-Jena closed this issue 4 years ago
Hi, you can test this by setting all RGB values to (0,0,0); you will still get similar results. This means our method does not rely on RGB values.
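For example, a minimal sketch of zeroing the color channels before feeding the network (this assumes the loader returns an (N, 6) array with XYZ in columns 0-2 and RGB in columns 3-5; the exact layout in the repo may differ):

```python
import numpy as np

# Placeholder point cloud standing in for the loader's output.
points = np.random.rand(2048, 6).astype(np.float32)

# Zero out the RGB columns: the tensor keeps its expected 6-channel
# layout, but carries no appearance information.
points[:, 3:6] = 0.0
```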
Hi, thanks a lot for the reply. Just one more doubt: in the function sample_and_group, if the flag use_xyz is set to true, we have new_points = tf.concat([grouped_xyz, grouped_points], axis=-1) # (batch_size, npoint, nsample, 3+channel). To use the XYZ values only, instead of setting (R,G,B) = (0,0,0), I was instead doing new_points = grouped_xyz, which gives me significantly different results. I can't figure out why this makes such a large difference, since I am still grouping the X, Y, Z coordinates, just without the concatenated (R,G,B) = (0,0,0) values. Could you please help me with this?
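For concreteness, here is a minimal sketch of the shape difference I mean (the sizes are hypothetical; the real tensors come from sample_and_group in the repo):

```python
import tensorflow as tf

# Illustrative sizes only; channel = 3 stands in for the RGB features.
batch_size, npoint, nsample, channel = 8, 512, 32, 3

grouped_xyz = tf.zeros([batch_size, npoint, nsample, 3])
grouped_points = tf.zeros([batch_size, npoint, nsample, channel])

# use_xyz=True path in the repo: last dimension is 3 + channel.
new_points_concat = tf.concat([grouped_xyz, grouped_points], axis=-1)
print(new_points_concat.shape)  # (8, 512, 32, 6)

# XYZ-only variant from my change: last dimension is just 3.
new_points_xyz_only = grouped_xyz
print(new_points_xyz_only.shape)  # (8, 512, 32, 3)
```

One likely reason for the difference: the first shared MLP is a 1x1 convolution whose kernel depth equals this last dimension, so the two variants are architecturally different networks. With zeroed RGB the layer still has (and trains) weights for the color channels, which simply multiply zeros, whereas dropping the channels changes the weight shapes entirely, so pretrained checkpoints no longer load and training proceeds from a different parameterization.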
Dear Sir,
In the paper, it is mentioned that "Our work focuses on learning scene flow directly from point clouds, without any dependence on RGB images or assumptions on rigidity and camera motions". However, in the code for training on FlyingThings3D, you clearly use the RGB color information of the point clouds to construct feature maps. Isn't that the same as using RGB images? Please correct me if I'm wrong.
Best Regards,