HuguesTHOMAS / KPConv-PyTorch

Kernel Point Convolution implemented in PyTorch
MIT License

How to choose the radius size when I need to look for neighbours of each point #5

Closed QLuanWilliamed closed 4 years ago

QLuanWilliamed commented 4 years ago

Hi, I'm replicating your code, but with a different dataset. I want to know how to set the search radius used to find each point's neighbours. In your paper you only say: "where σ is the influence distance of the kernel points, and will be chosen according to the input density", whereas in your code (config.py) you set default values for the radius and density parameters. Is there any reference for choosing the search radius?

HuguesTHOMAS commented 4 years ago

Hi @Luan-Zhaoliang,

the parameters you are interested in are defined here: https://github.com/HuguesTHOMAS/KPConv-PyTorch/blob/ccb820bbb7a18e4642c317bc8a80d26164f7f3c1/train_S3DIS.py#L102-L112

I hope this helps.
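As a rough illustration of how these parameters interact (a sketch based on my reading of the paper and the S3DIS config, not the repository's exact internals; the numeric values below are placeholders, not the project defaults):

```python
# Hypothetical sketch: the radius parameters in KPConv configs are
# dimensionless multiples of the subsampling grid size, so the actual
# search radius in metres follows from first_subsampling_dl.

first_subsampling_dl = 0.04  # grid size of the first subsampling layer (m)
conv_radius = 2.5            # convolution radius, in units of the grid size
KP_extent = 1.2              # kernel point influence distance (sigma), same units

# Effective neighbour-search radius at the first layer:
neighborhood_radius = first_subsampling_dl * conv_radius  # 0.1 m

# Influence distance sigma of each kernel point:
sigma = first_subsampling_dl * KP_extent  # 0.048 m

print(neighborhood_radius, sigma)
```

Under this reading, tuning `first_subsampling_dl` to your dataset's density rescales the search radius and σ together, so the dimensionless parameters can stay fixed.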

QLuanWilliamed commented 4 years ago

Hi Thomas,

Thanks a lot for your reply, it is very helpful.

regards, Zhaoliang



jedolb commented 4 years ago

Hi, thank you for sharing this great work! I have a question related to this issue: if I have a denser dataset, in which direction should I change these parameters? Should I increase or decrease their values?

HuguesTHOMAS commented 4 years ago

Hi @jedolb,

It depends on what kind of objects you are trying to detect. A denser dataset allows you to reduce first_subsampling_dl, and therefore capture shapes with finer details. To avoid having your memory explode, you also want to reduce the in_radius parameter accordingly (it was not listed above; it controls the size of the input sphere). In that case your network will be better on small objects but could lose performance on very big objects.

If your dataset is denser but still has big objects, then maybe this extra density is not useful to you and you can keep a higher first_subsampling_dl and in_radius.

In any case, the other parameters should remain the same, as a change in dataset density does not affect them.
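The rule of thumb above can be sketched as a simple rescaling (a hypothetical helper, not part of the repository): shrink first_subsampling_dl and in_radius by the same factor so the number of input points, and hence GPU memory, stays roughly constant.

```python
# Sketch: moving to a denser dataset with factor < 1 shrinks both the
# subsampling grid and the input sphere together, keeping the point
# count per input sphere roughly unchanged.

def rescale_for_density(first_subsampling_dl: float,
                        in_radius: float,
                        factor: float) -> tuple[float, float]:
    """Return (new_dl, new_in_radius) scaled by the same factor."""
    return first_subsampling_dl * factor, in_radius * factor

dl, r = rescale_for_density(0.04, 1.5, 0.5)
print(dl, r)  # 0.02 0.75
```

The dimensionless parameters (conv_radius, KP_extent, etc.) are left untouched, matching the advice that a density change should not affect them.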

jedolb commented 4 years ago

Thank you very much for your quick answer and your explanations !

Actually, I'm trying to train a KPConv network on accumulated lidar data. I accumulated the sequences of the SemanticKitti dataset and cut out sections of 20 m², which I feed to the network. One section looks like this: (image: acc4_11)

I did a first training with the parameters of train_SemanticKitti.py. The results are not bad: the validation accuracy is 0.8. Here is a prediction: (image: acc4_11_pred)

Some classes, like cars or buildings, are detected well, but the network can't tell the difference between road, sidewalk, and terrain. I guess I can increase performance. I have big objects, but I may have enough detail to reduce first_subsampling_dl and in_radius. There's no harm in trying :)

HuguesTHOMAS commented 4 years ago

Exactly, for this kind of application where you are not sure which values are the best, you can just try a bunch of them.

I would advise starting from the parameters of the NPM3D dataset, which is a similar situation, and then experimenting to fine-tune the values. In any case, road, sidewalk, and terrain are very challenging classes when you don't have colors, as they have nearly the same geometric shape. You could try using the lidar intensity value, but I don't think it will help much.
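The "try a bunch of values" approach could be organized as a small sweep over candidate pairs (a hypothetical sketch; `train_and_validate` is a placeholder for launching a real KPConv training run and reading back a validation score):

```python
# Hypothetical parameter sweep over (first_subsampling_dl, in_radius)
# pairs, keeping their ratio roughly constant as suggested above.

candidates = [(0.06, 4.0), (0.04, 3.0), (0.03, 2.0)]

def train_and_validate(dl: float, radius: float) -> float:
    """Placeholder: would run a full training and return validation mIoU."""
    return 0.0  # replace with an actual training/validation run

# Pick the pair with the best validation score.
best = max(candidates, key=lambda pair: train_and_validate(*pair))
print(best)
```

In practice each candidate is a full training run, so a handful of well-spaced values is usually all that is affordable.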

Good luck with your problem.

jedolb commented 4 years ago

I will first try with the parameters of NPM3D, then.

Thanks again for your help! It is very helpful.