Open pbias opened 6 years ago
Hi Pierre,
As you can see in the Python code, there is already a very short example. The method sparse_conv outputs the convolved image and the propagated binary mask. The idea is that you can then propagate the binary mask to the next convolution by stacking more sparse convolutions. The input image, however, needs to contain 0 wherever you don't have any information about the pixel value.
```python
image = tf.placeholder(tf.float32, shape=[None, 64, 64, 2], name="input_image")
features, b_mask = sparse_conv(image)
features, b_mask = sparse_conv(features, binary_mask=b_mask)  # second convolution
features, b_mask = sparse_conv(features, binary_mask=b_mask)  # third convolution
features, b_mask = sparse_conv(features, binary_mask=b_mask)  # and so on
```
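To make the mask propagation concrete, here is a plain NumPy sketch of what a single sparse convolution step does conceptually: average only over valid pixels, and mark an output pixel valid if any input pixel in its receptive field was valid. This is *not* the repository's TensorFlow implementation (which uses a learned kernel); it assumes a uniform, unweighted kernel purely for illustration.

```python
import numpy as np

def sparse_conv_np(x, mask, k=3):
    # Illustration of one sparse-conv step with a uniform k x k kernel:
    # each output pixel is the average of the *valid* input pixels in its
    # neighbourhood; the new mask is 1 wherever at least one valid input
    # pixel was found. (The real layer uses learned weights instead.)
    H, W = x.shape
    r = k // 2
    out = np.zeros_like(x, dtype=float)
    new_mask = np.zeros_like(mask, dtype=float)
    for i in range(H):
        for j in range(W):
            i0, i1 = max(0, i - r), min(H, i + r + 1)
            j0, j1 = max(0, j - r), min(W, j + r + 1)
            m = mask[i0:i1, j0:j1]
            n = m.sum()
            if n > 0:  # normalize by the number of valid pixels only
                out[i, j] = (x[i0:i1, j0:j1] * m).sum() / n
                new_mask[i, j] = 1.0
    return out, new_mask

# A 5x5 "depth map" with a single valid measurement in the centre:
x = np.zeros((5, 5)); x[2, 2] = 4.0
m = np.zeros((5, 5)); m[2, 2] = 1.0
out, nm = sparse_conv_np(x, m)   # validity spreads to the 3x3 neighbourhood
out2, nm2 = sparse_conv_np(out, binary_mask := nm)  # chaining, as above
```

Chaining the steps this way is exactly why the mask must be passed from one convolution to the next: each layer widens the region of valid pixels.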
Thank you @PeterTor ! The KITTI data is available on the homepage. There is also a devkit which will help you with the data format. If you have any questions about the dataset, please contact me directly.
Hi everyone,
Thank you for all the details. I think I understand how to use the sparse_conv code now. However, if I want to replicate the training of the network presented in the paper you mentioned ("Sparsity Invariant CNNs" (2017)), I'm not quite sure I'm doing it correctly. In particular, once the network is trained, I'm not comfortable with how to densify a depth map. Perhaps one of you would be willing to give me some hints? That would be wonderful!
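For what "densify" means mechanically, here is a naive, *untrained* NumPy baseline: repeatedly apply the mask-normalized averaging step until every pixel is valid. In the paper the trained network produces the dense depth map directly as its output; this sketch only illustrates how mask propagation fills in the missing pixels, and all names here are my own.

```python
import numpy as np

def densify(depth, mask, k=3, max_steps=50):
    # Naive densification baseline (not the paper's trained network):
    # repeatedly fill invalid pixels with the average of the valid pixels
    # in their k x k neighbourhood, so validity spreads outward until the
    # whole map is dense.
    r = k // 2
    d, m = depth.astype(float), mask.astype(float)
    H, W = d.shape
    for _ in range(max_steps):
        if m.min() > 0:           # every pixel valid -> map is dense
            break
        nd, nm = d.copy(), m.copy()
        for i in range(H):
            for j in range(W):
                if m[i, j] == 0:  # only fill pixels that are still invalid
                    i0, i1 = max(0, i - r), min(H, i + r + 1)
                    j0, j1 = max(0, j - r), min(W, j + r + 1)
                    s = m[i0:i1, j0:j1].sum()
                    if s > 0:
                        nd[i, j] = (d[i0:i1, j0:j1] * m[i0:i1, j0:j1]).sum() / s
                        nm[i, j] = 1.0
        d, m = nd, nm
    return d, m
```

With the trained model you would instead feed the sparse depth and mask through the stacked sparse convolutions and take the last layer's output as the dense map; the baseline above is just the zero-knowledge version of that idea.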
Anyway, I thank you very much for your answers.
Pierre.
Good Afternoon,
I would be very interested in trying this code on the KITTI dataset that accompanies the paper this code is based on, but unfortunately I'm not quite sure how to use it.
Would it be possible for you to provide example code? Or at least some guidelines for accomplishing this?
Thank you in advance for your help,
Pierre