qiqihaer / RandLA-Net-pytorch

RandLA-Net implementation in PyTorch
MIT License

How to utilize the information of intensity? #1

Closed · vernon97 closed 4 years ago

vernon97 commented 4 years ago

Hi!

Thank you for your nice implementation of RandLA-Net in PyTorch.

I found that the function load_pc_kitti only uses the x, y, and z of each velodyne scan and ignores the intensity. In my view, intensity plays an important role in semantic segmentation on KITTI, so I plan to modify this based on your code. I noticed that in the data-preparation stage a KD-tree is built on x, y, and z, and I am not sure what to do next to add the intensity to it. Could you please give me some advice? I would appreciate it very much!

Thank you again for your kindness!

qiqihaer commented 4 years ago
  1. Modify the code in load_pc_kitti to keep the intensity information.
  2. Modify line 44 of data_prepare_semantickitti.py to pass the intensity as the feature input of grid_sub_sampling. You can then get the sub-sampled intensity and save it (see the sketch after this list).
  3. Load the intensity in SemanticKITTI and modify the input layer of the network.
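
A minimal sketch of steps 1 and 2, assuming the DataProcessing helpers keep the signatures of the original RandLA-Net helper_tool.py (load_pc_kitti reading the raw .bin as an N×4 float32 array, and grid_sub_sampling accepting optional features and labels arguments); the function name below is illustrative, not the repository's code:

    import numpy as np

    def load_pc_kitti_with_intensity(pc_path):
        # Illustrative variant of load_pc_kitti that keeps the 4th channel.
        scan = np.fromfile(pc_path, dtype=np.float32).reshape((-1, 4))
        points = scan[:, 0:3]      # x, y, z: still used to build the KD-tree
        intensity = scan[:, 3:4]   # reflectance, kept as a per-point feature
        return points, intensity

    # In data_prepare_semantickitti.py, pass the intensity as the `features`
    # argument of grid_sub_sampling so it is pooled consistently with xyz,
    # then save it next to the sub-sampled points and labels
    # (assumed return order: points, features, labels):
    #
    #   sub_points, sub_intensity, sub_labels = DP.grid_sub_sampling(
    #       points, features=intensity, labels=labels, grid_size=grid_size)
    #   np.save(join(output_dir, 'intensity'), sub_intensity)

For step 3, the first layer of the network would also need its input feature dimension increased by one to accept the extra intensity channel.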
vernon97 commented 4 years ago

Got it, I will go modify it. Thanks for your reply! Haha

vernon97 commented 4 years ago

Hi! Sorry to bother you again. In semantic_kitti_dataset.py:

    def spatially_regular_gen(self, item):
        # Generator loop

        if self.mode != 'test':
            cloud_ind = item
            pc_path = self.data_list[cloud_ind]
            pc, tree, labels = self.get_data(pc_path)
            # crop a small point cloud
            pick_idx = np.random.choice(len(pc), 1)
            selected_pc, selected_labels, selected_idx = self.crop_pc(pc, labels, tree, pick_idx)
        else:
            cloud_ind = int(np.argmin(self.min_possibility))
            pick_idx = np.argmin(self.possibility[cloud_ind])
            pc_path = self.data_list[cloud_ind]
            pc, tree, labels = self.get_data(pc_path)
            selected_pc, selected_labels, selected_idx = self.crop_pc(pc, labels, tree, pick_idx)

            # update the possibility of the selected pc
            dists = np.sum(np.square((selected_pc - pc[pick_idx]).astype(np.float32)), axis=1)
            delta = np.square(1 - dists / np.max(dists))
            self.possibility[cloud_ind][selected_idx] += delta
            self.min_possibility[cloud_ind] = np.min(self.possibility[cloud_ind])

In cloud_ind = int(np.argmin(self.min_possibility)), self.min_possibility is an empty list at initialization, so taking the argmin directly raises an error. I have read the code but am still not sure how to handle this. What should I do?

qiqihaer commented 4 years ago

I forgot the initialization of possibility and min_possibility in test mode. I have fixed this problem. Please re-clone the repository.
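
For reference, a minimal sketch of such an initialization, following the test-time sampling scheme of the original RandLA-Net (the repository's exact code may differ): every point of every test cloud gets a tiny random possibility, so np.argmin(self.min_possibility) is well defined on the first call.

    # Illustrative test-mode initialization inside the dataset's __init__
    if self.mode == 'test':
        self.possibility = []
        self.min_possibility = []
        for pc_path in self.data_list:
            pc, tree, labels = self.get_data(pc_path)  # same helper used by spatially_regular_gen
            self.possibility += [np.random.rand(len(pc)) * 1e-3]
            self.min_possibility += [float(np.min(self.possibility[-1]))]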

vernon97 commented 4 years ago

OK~ Thank you for your reply~

qiqihaer commented 4 years ago

> OK~ Thank you for your reply~

Are you at Zhejiang University? Which college are you in?

vernon97 commented 4 years ago

I am a first-year master's student in the College of Control Science and Engineering. Are you at Zhejiang University too?! Let's add each other on WeChat (laughs)

vernon97 commented 4 years ago

wechat:vernon97

qiqihaer commented 4 years ago

> wechat:vernon97

I am in the College of Control Science and Engineering too, what a coincidence! But the WeChat ID you left does not seem right; searching for it turns up "xxxx automated traffic-generation technique".

vernon97 commented 4 years ago

Hahaha, I actually typed it wrong. It is this one: vernon970802

vernon97 commented 4 years ago

What a coincidence! Haha

JerryIndus commented 3 years ago

> I forgot the initialization of possibility and min_possibility in test mode. I have fixed this problem. Please re-clone the repository.

Hello, I am reproducing your method on my own dataset and have two questions I would like to ask you about:

  1. I wrote my own main function based on main_SemanticKITTI.py and found that the batch size produced by the dataloader is always 1 rather than the 20 given by FLAGS.batch_size. My dataloader is also modified from semantic_kitti_dataset.py; the main changes are adding the intensity information and path information, with the data prepared in your format. So I would like to ask whether you ever ran into the batch size differing from the expected one.
  2. I see that semantic_kitti_dataset.py does not use the proj.pkl file. During validation and testing, how do you obtain an output of the same size as the original input? (See the sketch below.)
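
For context on question 2, a minimal sketch of how the original RandLA-Net SemanticKITTI pipeline maps sub-sampled predictions back to the full-size scan, assuming a *_proj.pkl file that stores, for each raw point, the index of its nearest sub-sampled point (computed with a KD-tree query at data-preparation time); the path and array shapes below are purely illustrative:

    import pickle
    import numpy as np

    # Illustrative predictions on the sub-sampled cloud: [num_sub_points, num_classes]
    probs = np.random.rand(1000, 20)

    # proj.pkl (hypothetical path) holds, for each raw point, the index of its
    # nearest sub-sampled point, e.g. produced at data-preparation time with
    #   proj_inds = np.squeeze(search_tree.query(raw_points, return_distance=False))
    with open('sequences/08/proj/000000_proj.pkl', 'rb') as f:
        proj_inds = pickle.load(f)[0]

    # Indexing with proj_inds restores a prediction for every point of the raw scan.
    full_preds = np.argmax(probs[proj_inds], axis=1)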
caoyifeng001 commented 3 years ago

Does adding the intensity information have a large impact on accuracy?