loicland / superpoint_graph

Large-scale Point Cloud Semantic Segmentation with Superpoint Graphs
MIT License
758 stars 214 forks

custom_data without label has a problem in partition.py in libply.prune function #242

Closed ruomingzhai closed 3 years ago

ruomingzhai commented 3 years ago

I added an Area_7 to the S3DIS dataset for my own data, and I also added a few lines to distinguish the data without labels from the data with labels, like the following (in partition.py):

            if args.voxel_width > 0:
                if not folder == "Area_7":
                    xyz, rgb, labels, dump = libply_c.prune(xyz.astype('f4'), args.voxel_width, rgb.astype('uint8'), labels.astype('uint8'), np.zeros(1, dtype='uint8'), n_labels, 0) 
                else :
                    xyz, rgb, labels = libply_c.prune(xyz, args.voxel_width, rgb, np.array(1,dtype='u1'), 0)

But it raised an error:

    Boost.Python.ArgumentError: Python argument types in partition.ply_c.libply_c.prune(numpy.ndarray, float, numpy.ndarray, numpy.ndarray, int) did not match C++ signature: prune(boost::python::numpy::ndarray, float, boost::python::numpy::ndarray, boost::python::numpy::ndarray, boost::python::numpy::ndarray, int, int)

It seems to me that I have to pass 7 arguments, while from your suggestion I can only pass 5 arguments when no label information is present. Looking forward to getting some clues from you. Thank you.
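For context, `prune` performs voxel-grid pruning: points falling into the same voxel of side `voxel_width` are merged into one. A minimal pure-numpy sketch of that idea (an illustration only, not the compiled `libply_c` implementation; the function name is made up):

```python
import numpy as np

def voxel_prune(xyz, voxel_width):
    """Keep one averaged point per voxel of side voxel_width.
    Illustrative stand-in for what the compiled libply_c.prune computes."""
    # assign each point to an integer voxel coordinate
    voxels = np.floor(xyz / voxel_width).astype(np.int64)
    # group points by voxel and average the points inside each group
    _, inverse = np.unique(voxels, axis=0, return_inverse=True)
    n_voxels = inverse.max() + 1
    counts = np.bincount(inverse)
    pruned = np.zeros((n_voxels, 3), dtype=xyz.dtype)
    for dim in range(3):
        pruned[:, dim] = np.bincount(inverse, weights=xyz[:, dim]) / counts
    return pruned

pts = np.array([[0.1, 0.1, 0.1],
                [0.2, 0.2, 0.2],   # same 0.5-wide voxel as the first point
                [1.1, 0.0, 0.0]], dtype='f4')
print(voxel_prune(pts, 0.5))  # two voxels -> two points
```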

loicland commented 3 years ago

Sorry, just saw this. Did you figure it out? You need to replace the labels by np.zeros(1, dtype='uint8'). See partition/provider.py/read_semantic3d_format for an example of dealing with data with and without labels.
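Following that pointer, a minimal sketch of how the label placeholder can be built for the 7-argument signature reported in the error. The array contents and `has_labels` flag are illustrative, and the actual `libply_c.prune` call is left commented since it needs the compiled module:

```python
import numpy as np

# illustrative stand-in arrays for a point cloud without annotations
xyz = np.random.rand(1000, 3).astype('f4')
rgb = (np.random.rand(1000, 3) * 255).astype('uint8')

n_labels = 13          # number of classes in S3DIS
has_labels = False     # the unlabeled area carries no ground truth

if has_labels:
    labels_arg = labels.astype('uint8')      # real per-point labels
else:
    # placeholder, as in partition/provider.py/read_semantic3d_format
    labels_arg = np.zeros(1, dtype='uint8')

# matching the C++ signature from the error message:
#   prune(ndarray, float, ndarray, ndarray, ndarray, int, int)
# xyz, rgb, labels, dump = libply_c.prune(
#     xyz, args.voxel_width, rgb, labels_arg,
#     np.zeros(1, dtype='uint8'), n_labels, 0)
```

This way both the labeled and unlabeled branches call `prune` with the same seven arguments; only the labels array differs.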

ruomingzhai commented 3 years ago

> Sorry, just saw this. Did you figure it out? You need to replace the labels by np.zeros(1, dtype='uint8'). See partition/provider.py/read_semantic3d_format for an example of dealing with data with and without labels.

Thanks!!! Let me raise another topic, about a training problem: your paper says the total number of superpoints in each batch is subsampled to 512 (via --args.hardcutoff). I am not sure whether this happens before or after the embedding (i.e., in the graph convolution), because when I print the size of the embedding output with batch size 2, it is Tensor.size(1174, 32), a little larger than 512*2 = 1024. Can you explain and point out to me where exactly the subsampling code is? It has bothered me for a long time! Thanks!
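For reference, capping the number of superpoints per graph at a hard cutoff can be sketched as random subsampling. This is only an illustration of the idea the question is about, not the repository's loader code; where (and whether) the repository applies it is exactly what is being asked:

```python
import numpy as np

def subsample_superpoints(n_sp, hardcutoff=512, rng=None):
    """Return indices of at most `hardcutoff` superpoints, chosen at random.
    Graphs already below the cutoff are kept whole."""
    rng = np.random.default_rng(rng)
    if n_sp <= hardcutoff:
        return np.arange(n_sp)
    return rng.choice(n_sp, size=hardcutoff, replace=False)

print(len(subsample_superpoints(1174)))  # 512
print(len(subsample_superpoints(300)))   # 300: below the cutoff, untouched
```

Note that if the cutoff is applied per graph before batching, a batch of 2 can still hold up to 1024 superpoints, so an embedding output of size 1174 would suggest the cutoff was not applied at that point (e.g., disabled at test time or applied elsewhere).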