Closed ruomingzhai closed 3 years ago
Sorry, just saw this. Did you figure it out? You need to replace the labels with np.zeros(1, dtype='uint8').
See partition/provider.py/read_semantic3d_format
for an example of dealing with data with and without labels.
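A minimal sketch of that pattern, assuming a hypothetical loader helper (the function name and arguments here are illustrative, not the repository's actual API): when a cloud carries no ground truth, a one-element dummy uint8 array stands in for the per-point label array.

```python
import numpy as np

def load_cloud(xyz, labels=None):
    """Hypothetical helper mirroring the suggestion above: when a cloud
    has no ground-truth labels, pass a one-element dummy uint8 array so
    downstream code still receives an ndarray of the expected dtype."""
    if labels is None:
        # dummy placeholder for unlabeled data
        labels = np.zeros(1, dtype='uint8')
    else:
        labels = labels.astype('uint8')
    return xyz.astype('f4'), labels
```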
Thanks!!! Let me raise another question about training: your paper says the total number of superpoints in each batch is subsampled to 512 (controlled by --args.hardcutoff). I am not sure whether this happens before or after the embedding (i.e. in the graph convolution), because when I print the size of the embedding output with batch size 2, it is Tensor.size(1174, 32), a bit larger than 512*2 = 1024. Can you explain and point me to where exactly the subsampling code is? It has bothered me for a long time. Thanks!
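For reference, the mechanics of a hard cutoff on superpoints can be sketched as below. This is a hypothetical illustration, not the repository's actual subsampling code; the function name and the per-cloud application are assumptions.

```python
import numpy as np

def subsample_superpoints(num_superpoints, hard_cutoff=512, rng=None):
    """Illustrative hard cutoff: if a cloud has more than `hard_cutoff`
    superpoints, keep a random subset of exactly `hard_cutoff` of them;
    otherwise keep them all."""
    rng = rng or np.random.default_rng(0)
    idx = np.arange(num_superpoints)
    if num_superpoints > hard_cutoff:
        # sample without replacement so no superpoint is duplicated
        idx = rng.choice(idx, size=hard_cutoff, replace=False)
    return np.sort(idx)
```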
I added an Area_7 to the S3DIS dataset for my own data, and I also added some code to distinguish data without labels from data with labels, like the following (in partition.py):
But it fails with: Boost.Python.ArgumentError: Python argument types in partition.ply_c.libply_c.prune(numpy.ndarray, float, numpy.ndarray, numpy.ndarray, int) did not match C++ signature: prune(boost::python::numpy::ndarray, float, boost::python::numpy::ndarray, boost::python::numpy::ndarray, boost::python::numpy::ndarray, int, int)
It seems to me that I have to pass 7 arguments, while from your suggestion I can only pass 5 arguments when no label information is present. Looking forward to getting some clues from you. Thank you.
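Going by the C++ signature in the error message (prune(ndarray, float, ndarray, ndarray, ndarray, int, int)), the call needs two ndarrays and two ints in the label/object positions even when the data is unlabeled. A hedged sketch of how the 7-tuple of arguments could be assembled, reusing the np.zeros(1, dtype='uint8') dummy-array trick from earlier in this thread (the helper name make_prune_args and its defaults are my own, not from the repository):

```python
import numpy as np

def make_prune_args(xyz, rgb, voxel_width, labels=None, objects=None,
                    n_labels=0, n_objects=0):
    """Build an argument tuple matching the 7-parameter C++ signature
    reported in the error: (ndarray, float, ndarray, ndarray, ndarray,
    int, int). Unlabeled data gets one-element dummy uint8 arrays."""
    if labels is None:
        labels = np.zeros(1, dtype='uint8')
    if objects is None:
        objects = np.zeros(1, dtype='uint8')
    return (xyz.astype('f4'), float(voxel_width),
            rgb.astype('uint8'), labels.astype('uint8'),
            objects.astype('uint8'), int(n_labels), int(n_objects))
```

Usage would then look like `libply_c.prune(*make_prune_args(xyz, rgb, 0.03))`, though the exact dtypes and argument order expected by your build of libply_c should be checked against provider.py.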