jxqhhh / PytorchPointCNN

Apache License 2.0

Classification accuracy issue on ModelNet40 #3

Closed Ma-Weijian closed 2 years ago

Ma-Weijian commented 2 years ago

Hi, there.

Wonderful PyTorch implementation of PointCNN. I ran the ModelNet40 training code and got a best test accuracy of about 91.69%, which is similar to the accuracy of the first version of the official TensorFlow code.

I wonder what test accuracy you got when training ModelNet40 classification. It seems that the improvement from 91.7% to the currently reported 92.2% or 92.5% comes from some tuning tricks in the official TensorFlow code. I wonder whether those tricks are applied here. If not, could you please give me some pointers on where to implement them? As I'm a newbie in this area and not familiar with TensorFlow, this would be of great help.

A million thanks.

jxqhhh commented 2 years ago

Hello.

Sorry, I have moved on to other research directions, so I am afraid I can no longer maintain my PyTorch implementation.

I also noticed previously that the accuracy was a little lower than the official implementation's. It may be because the detailed structure of my implementation differs slightly from the official one. I am not sure whether this difference really exists, since several years have passed since I worked on the code. Maybe you could verify it by checking whether my implementation has exactly the same number of parameters as the official one (or just by checking the code of each module manually).
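
In case it helps, a minimal sketch of such a parameter-count check (the model class name and constructor arguments below are placeholders, not the actual names in this repo):

```python
import torch


def count_trainable_params(model: torch.nn.Module) -> int:
    # Total number of elements over all trainable parameters.
    return sum(p.numel() for p in model.parameters() if p.requires_grad)


# Hypothetical usage -- substitute the actual model class/config used in this repo:
# model = PointCNNCls(num_classes=40)
# print(count_trainable_params(model))
#
# For the official TensorFlow (1.x) model, the analogous count would be roughly:
# sum(int(np.prod(v.get_shape().as_list())) for v in tf.trainable_variables())
```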

Ma-Weijian commented 2 years ago

The authors of the official code mentioned that the improvement is mainly achieved by tuning the MLP implementation of the X-transform. This change is believed to have been made before March 2019.

@rruixxu 91.7% in an earlier version is on the un-aligned setting. The improvement is mainly due to changes in the MLP implementation (normalization, fully connected -> col/row connected, etc.) of the X-transformation.
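
To make that concrete, here is a rough PyTorch sketch of the two X-transform MLP styles being contrasted. The layer sizes, normalization choices, and exact wiring are my own assumptions for illustration, not taken from either codebase:

```python
import torch
import torch.nn as nn


class XTransformMLP(nn.Module):
    """Sketch of the small network that predicts a per-point K x K transform
    from the K neighbor offsets. depthwise=False roughly corresponds to the
    earlier fully connected variant; depthwise=True to the later
    "col/row connected" variant with normalization."""

    def __init__(self, k: int, depthwise: bool = True):
        super().__init__()
        self.k = k

        def block(conv: nn.Module) -> nn.Module:
            layers = [conv, nn.BatchNorm2d(k * k), nn.ELU()] if depthwise else [conv, nn.ELU()]
            return nn.Sequential(*layers)

        groups = k if depthwise else 1
        self.net = nn.Sequential(
            # Lift (B, 3, N, K) relative coordinates to K*K channels per representative point.
            block(nn.Conv2d(3, k * k, kernel_size=(1, k))),
            # Plain 1x1 convs (fully connected) vs. grouped 1x1 convs ("row-wise" connections).
            block(nn.Conv2d(k * k, k * k, kernel_size=1, groups=groups)),
            nn.Conv2d(k * k, k * k, kernel_size=1, groups=groups),
        )

    def forward(self, rel_xyz: torch.Tensor) -> torch.Tensor:
        # rel_xyz: (B, 3, N, K) neighbor coordinates relative to each representative point.
        b, _, n, _ = rel_xyz.shape
        x = self.net(rel_xyz)                  # (B, K*K, N, 1)
        return x.squeeze(-1).transpose(1, 2).reshape(b, n, self.k, self.k)
```

With k=8 and a (2, 3, 128, 8) input this yields a (2, 128, 8, 8) tensor of per-point transforms; the point of the sketch is just to show where the "fully connected -> col/row connected" change (groups=k) and the extra normalization would enter.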

Do you happen to remember whether your implementation of the X-transformation is the same as the original one?

Thanks a lot!

Ma-Weijian commented 2 years ago

Well, it seems that the config param 'with_global' should also be set to True, matching the original TF implementation. After this modification, the classification result (92.3%) is on par with the numbers reported in the original paper.
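
For anyone hitting the same issue, here is a rough sketch of what I understand 'with_global' to do: extra features computed from the last layer's representative-point coordinates are concatenated to the final X-Conv features before the classifier. The layer sizes and wiring below are assumptions for illustration, not the exact official ones:

```python
import torch
import torch.nn as nn


class GlobalPositionBranch(nn.Module):
    """Sketch: extra features derived from the representative points' coordinates,
    concatenated to the final X-Conv features when with_global is enabled."""

    def __init__(self, feat_channels: int):
        super().__init__()
        hidden = feat_channels // 4            # assumed size, for illustration only
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ELU(),
            nn.Linear(hidden, hidden), nn.ELU(),
        )

    def forward(self, feats: torch.Tensor, pts: torch.Tensor) -> torch.Tensor:
        # feats: (B, N, C) features of the last X-Conv layer
        # pts:   (B, N, 3) coordinates of its representative points
        global_feats = self.mlp(pts)           # (B, N, C // 4)
        return torch.cat([global_feats, feats], dim=-1)
```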