Ivan-Tang-3D / Point-PEFT

(AAAI2024) Point-PEFT: Parameter-Efficient Fine-Tuning for 3D Pre-trained Models

Difference between code and paper #1

Closed. 123456789asdfjkl closed this issue 7 months ago.

123456789asdfjkl commented 8 months ago

Hello! Thank you very much for your excellent work! According to your paper, the Point-prior Prompt is used in the first L stages, and no prompt is used in the remaining (N-L) stages. However, in your code implementation Point_Mask_Rev_FT_scan.py, the prompt is always present, and the Point-prior Prompt is added in the later stages. This seems to differ from what the paper describes. May I ask whether my understanding of the code is correct?

Ivan-Tang-3D commented 8 months ago


L differs across models and downstream tasks. For Point-M2AE on ScanObjectNN, L is 12. Please refer to Sections 4.1.1 and 4.1.2 of the paper.

123456789asdfjkl commented 8 months ago

I understand that L changes across datasets and tasks. What I am asking is that in the code the prompt is always present, and the Point-prior Prompt is inserted in the last stage, which seems inconsistent with Figure 2 of the paper. Figure 2 shows the Point-prior Prompt inserted in the first L layers, with no prompt in the later layers.

[screenshot of Figure 2 from the paper]

Ivan-Tang-3D commented 8 months ago


I'm sorry, the paper is not clear on this point. L refers to the prompt tokens: in Point-M2AE the Point-prior Prompt is applied only in the last stage, so L is set to 12, which places the Point-prior Prompt in that last stage. This gives faster training and fewer parameters.
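
To make this concrete, here is a minimal sketch (not the repository's actual code) of the behavior described in this thread: generic learnable prompt tokens are prepended at every stage, while the Point-prior Prompt (built from point features) replaces them only at the last stage. All names here (PromptedEncoder, num_prompt_tokens, point_prior_prompt) are hypothetical illustrations, not identifiers from Point_Mask_Rev_FT_scan.py:

```python
import torch
import torch.nn as nn

class PromptedEncoder(nn.Module):
    # Hypothetical sketch of per-stage prompting as discussed above.
    def __init__(self, num_stages=3, dim=384, num_prompt_tokens=12):
        super().__init__()
        self.stages = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model=dim, nhead=6, batch_first=True)
            for _ in range(num_stages)
        )
        # Generic learnable prompt tokens, one set per stage.
        self.prompts = nn.ParameterList(
            nn.Parameter(torch.zeros(1, num_prompt_tokens, dim))
            for _ in range(num_stages)
        )

    def forward(self, x, point_prior_prompt):
        # x: (B, N, dim) patch tokens; point_prior_prompt: (B, L, dim)
        for i, stage in enumerate(self.stages):
            if i == len(self.stages) - 1:
                # Last stage only: use the Point-prior Prompt instead of
                # the generic learnable tokens.
                prompt = point_prior_prompt
            else:
                prompt = self.prompts[i].expand(x.size(0), -1, -1)
            n = x.size(1)
            x = stage(torch.cat([prompt, x], dim=1))
            x = x[:, -n:]  # drop prompt tokens before the next stage
        return x
```

Under this reading, a prompt is indeed always present (matching the code), and the paper's "first L" phrasing describes where the prompt tokens are counted, with the Point-prior Prompt itself reserved for the final stage in Point-M2AE.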

123456789asdfjkl commented 8 months ago

I understand now, thank you for the explanation! @Ivan-Tang-3D

Ivan-Tang-3D commented 8 months ago


You're welcome.