Closed: longmalongma closed this issue 4 years ago.
@longmalongma Hi, did you make sure that your network can run normally? You can first use a simple, specific tensor as input to double-check that your code runs normally. See here
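For reference, a minimal sanity-check sketch of this kind of test in TF1 graph mode; the module path is taken from the traceback below, and the shapes and get_model return signature are assumptions rather than the repo's exact configuration:

import numpy as np
import tensorflow as tf  # TF 1.x
from models import pointasnl_sem_seg_res2 as MODEL  # import path assumed from the traceback below

BATCH_SIZE, NUM_POINT, NUM_CHANNEL = 4, 1024, 3  # illustrative values only

with tf.Graph().as_default():
    # Feed a fixed random tensor through the network to confirm the graph
    # builds and a forward pass runs without shape errors.
    pc = tf.constant(np.random.rand(BATCH_SIZE, NUM_POINT, NUM_CHANNEL).astype(np.float32))
    pred, end_points = MODEL.get_model(pc, tf.constant(False))  # return signature assumed
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        print(sess.run(pred).shape)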
Thanks for your reply. I am sure my network can run normally, because I have trained and tested it on S3DIS. Besides, I have used a simple, specific tensor as input to double-check my code, and it runs well. When I run python train_semantic_kitti_grid1.py, I meet this error:
InvalidArgumentError (see above for traceback): Expected size[2] in [0, 0], but got 6
[[node Slice_1 (defined at /media/longma/demo/PointASNL-master/models/pointasnl_sem_seg_res2.py:35)]]
[[node gradients/layer1/sedge_conv1/Conv2D_grad/tuple/control_dependency_1 (defined at train_semantic_kitti_grid1.py:162)]]
2020-06-15 10:47:19.144096: W tensorflow/core/kernels/data/generator_dataset_op.cc:79] Error occurred when finalizing GeneratorDataset iterator: Failed precondition: Python interpreter state is not initialized. The process may be terminated. [[{{node PyFunc}}]]
I think there is something wrong with how this tf.slice call is applied to the SemanticKITTI dataset. My code is:
def get_model(point_cloud, is_training, bn_decay=None):
    """ConvNet baseline, input is BxNx9 gray image"""
    batch_size = point_cloud.get_shape()[0].value
    num_point = point_cloud.get_shape()[1].value
    end_points = {}

    point_cloud_0 = tf.slice(point_cloud, [0, 0, 0], [-1, -1, 3])
    l0_net = tf.slice(point_cloud, [0, 0, 3], [-1, -1, 6])
Can you help me again? By the way, I do not use the input dimension feature_channel. Looking forward to your reply.
@longmalongma Hi, when training on SemanticKITTI, we just use 3-dimensional point coordinates as input, which is somewhat different from S3DIS (9-dim). You can try using the input directly like this rather than using tf.slice(). Good luck!
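A minimal sketch of that suggestion (split_point_cloud is a hypothetical helper, not a function from the repo; the feature_channel convention is an assumption based on the 9-dim S3DIS input mentioned above):

import tensorflow as tf  # TF 1.x

def split_point_cloud(point_cloud, feature_channel=0):
    """Split a BxNx(3 + feature_channel) tensor into coordinates and optional features.

    For SemanticKITTI, feature_channel is 0, so the xyz coordinates are used
    directly and no feature slice is taken; this avoids the
    "Expected size[2] in [0, 0], but got 6" error from slicing a 3-channel input.
    For a 9-dim S3DIS-style input, feature_channel would be 6.
    """
    xyz = point_cloud[:, :, :3]
    features = point_cloud[:, :, 3:3 + feature_channel] if feature_channel > 0 else None
    return xyz, features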
Thank you very much for your help, I have made it work. Your batch size is 4; why don't you set the batch size to 12 or bigger?
@longmalongma Hi, we choose batch size 4 because of GPU limitations; you can definitely choose a larger batch size if you have a lighter model. However, I suggest using more points per sample rather than a larger batch size, which improves performance more :)
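As a rough illustration of that trade-off: GPU memory and compute scale mainly with the total number of points processed per step, so the two hypothetical configurations below cost roughly the same even though their batch sizes differ (numbers are illustrative only, not settings from the repo):

# Illustrative only: compare configurations by total points per training step.
configs = [
    {"batch_size": 12, "num_point": 4096},   # larger batch, fewer points per sample
    {"batch_size": 4,  "num_point": 12288},  # smaller batch, more points per sample
]
for c in configs:
    print(c, "->", c["batch_size"] * c["num_point"], "points per step")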
Now it's time to close this issue. If you have any new questions, please feel free to discuss.
Thank you very much. I can set the batch size to 12 using one RTX 2080. I see that your numbers of sampled points per layer are 1280, 320, 80, 40. Do you mean I should increase the number of sampled points in my network? By the way, do you think the same network should use different numbers of sampled points on S3DIS and SemanticKITTI? The number of input points is 10240 for SemanticKITTI and 4096 for S3DIS.
Can you help me again? I run into this error after replacing your network with mine according to your instructions:
File "train_semantic_kitti_grid1.py", line 144, in train bn_decay=bn_decay) File "/home/longma/anaconda2/envs/py36tf/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 442, in iter "Tensor objects are only iterable when eager execution is " TypeError: Tensor objects are only iterable when eager execution is enabled. To iterate over this tensor use tf.map_fn.