Parameter-Efficient Fine-Tuning in Spectral Domain for Point Cloud Learning🚀

[Dingkang Liang](https://dk-liang.github.io/)1\*, [Tianrui Feng](https://github.com/jerryfeng2003)1\*, [Xin Zhou](https://lmd0311.github.io/)1\*, Yumeng Zhang2, [Zhikang Zou](https://bigteacher-777.github.io/)2, and [Xiang Bai](https://scholar.google.com/citations?user=UeltiQ4AAAAJ&hl=en)1✉️

1 Huazhong University of Science and Technology, 2 Baidu Inc.

(\*) equal contribution, (✉️) corresponding author.

[![arXiv](https://img.shields.io/badge/Arxiv-2410.08114-b31b1b.svg?logo=arXiv)](https://arxiv.org/abs/2410.08114) [![Code License](https://img.shields.io/badge/Code%20License-Apache_2.0-green.svg)](https://github.com/tatsu-lab/stanford_alpaca/blob/main/LICENSE) [![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/parameter-efficient-fine-tuning-in-spectral/3d-point-cloud-classification-on-scanobjectnn)](https://paperswithcode.com/sota/3d-point-cloud-classification-on-scanobjectnn?p=parameter-efficient-fine-tuning-in-spectral) [![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/parameter-efficient-fine-tuning-in-spectral/3d-parameter-efficient-fine-tuning-for)](https://paperswithcode.com/sota/3d-parameter-efficient-fine-tuning-for?p=parameter-efficient-fine-tuning-in-spectral) [![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/parameter-efficient-fine-tuning-in-spectral/3d-parameter-efficient-fine-tuning-for-1)](https://paperswithcode.com/sota/3d-parameter-efficient-fine-tuning-for-1?p=parameter-efficient-fine-tuning-in-spectral) [![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/parameter-efficient-fine-tuning-in-spectral/3d-point-cloud-classification-on-modelnet40)](https://paperswithcode.com/sota/3d-point-cloud-classification-on-modelnet40?p=parameter-efficient-fine-tuning-in-spectral)

News

[2024-10-10] PointGST is released.

Abstract

Recently, leveraging pre-training techniques to enhance point cloud models has become a hot research topic. However, existing approaches typically require fully fine-tuning the pre-trained model to achieve satisfactory performance on downstream tasks, which is both storage-intensive and computationally demanding. To address this issue, we propose a novel Parameter-Efficient Fine-Tuning (PEFT) method for point cloud learning, called PointGST (Point cloud Graph Spectral Tuning). PointGST freezes the pre-trained model and introduces a lightweight, trainable Point Cloud Spectral Adapter (PCSA) that fine-tunes parameters in the spectral domain.

Extensive experiments on challenging point cloud datasets across various tasks demonstrate that PointGST not only outperforms its fully fine-tuned counterpart but also significantly reduces the number of trainable parameters, making it a promising solution for efficient point cloud learning. More importantly, it improves upon a solid baseline by +2.28%, +1.16%, and +2.78%, reaching 99.48%, 97.76%, and 96.18% on the ScanObjectNN OBJ_BG, OBJ_ONLY, and PB_T50_RS datasets, respectively. This establishes a new state-of-the-art while using only 0.67% of the trainable parameters.
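To make the idea concrete, below is a minimal, illustrative sketch of tuning in the graph spectral domain: build a graph over the point patches, move the frozen token features into the spectral domain via a graph Fourier transform (the eigenvectors of the graph Laplacian), apply a small trainable bottleneck there, and transform back. This is a sketch under assumptions (graph construction, adapter rank, and placement in the backbone are illustrative choices), not the official PCSA implementation; refer to the paper and code for the actual design.

# Illustrative sketch only, NOT the official PCSA implementation.
import torch
import torch.nn as nn

class SpectralAdapterSketch(nn.Module):
    def __init__(self, dim, rank=8):
        super().__init__()
        self.down = nn.Linear(dim, rank)  # lightweight trainable bottleneck
        self.up = nn.Linear(rank, dim)
        nn.init.zeros_(self.up.weight)    # residual branch starts as a no-op
        nn.init.zeros_(self.up.bias)

    def forward(self, tokens, centers):
        # tokens:  (B, N, C) features from the frozen backbone
        # centers: (B, N, 3) coordinates of the point patches
        dist = torch.cdist(centers, centers)          # pairwise distances
        adj = torch.exp(-dist)                        # dense affinity graph (assumed)
        lap = torch.diag_embed(adj.sum(-1)) - adj     # graph Laplacian
        _, basis = torch.linalg.eigh(lap)             # eigenvectors = graph Fourier basis
        spec = basis.transpose(-2, -1) @ tokens       # GFT: spatial -> spectral
        spec = self.up(torch.relu(self.down(spec)))   # tune in the spectral domain
        return tokens + basis @ spec                  # inverse GFT + residual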

Overview

Getting Started

Installation

We recommend using Anaconda for the installation process:

git clone https://github.com/jerryfeng2003/PointGST.git
cd PointGST/

Requirements

conda create -y -n pgst python=3.9
conda activate pgst
pip install torch==2.0.0 torchvision==0.15.1 torchaudio==2.0.1 --index-url https://download.pytorch.org/whl/cu118
pip install -r requirements.txt

# Chamfer Distance & emd
cd ./extensions/chamfer_dist
python setup.py install --user
cd ../emd
python setup.py install --user

# PointNet++
pip install "git+https://github.com/erikwijmans/Pointnet2_PyTorch.git#egg=pointnet2_ops&subdirectory=pointnet2_ops_lib"

# GPU kNN
pip install --upgrade https://github.com/unlimblue/KNN_CUDA/releases/download/0.2/KNN_CUDA-0.2-py3-none-any.whl
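After installation, an optional sanity check confirms that the compiled CUDA extensions import correctly (module names follow the upstream packages; adjust if your build differs):

# optional sanity check for the compiled CUDA extensions
python -c "import torch; assert torch.cuda.is_available(), 'CUDA not visible'"
python -c "import pointnet2_ops"       # PointNet++ ops
python -c "from knn_cuda import KNN"   # GPU kNN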

Datasets

See DATASET.md for details.

Main Results

| Baseline | Trainable Parameters | Dataset | Config | Acc. (%) | Download |
| --- | --- | --- | --- | --- | --- |
| Point-MAE (ECCV 22) | 0.6M | ModelNet40 | modelnet | 93.5 | ckpt |
| | | OBJ_BG | scan_objbg | 91.74 | ckpt |
| | | OBJ_ONLY | scan_objonly | 90.19 | ckpt |
| | | PB_T50_RS | scan_hardest | 85.29 | ckpt |
| ACT (ICLR 23) | 0.6M | ModelNet40 | modelnet | 93.4 | ckpt |
| | | OBJ_BG | scan_objbg | 93.46 | ckpt |
| | | OBJ_ONLY | scan_objonly | 92.60 | ckpt |
| | | PB_T50_RS | scan_hardest | 88.27 | ckpt |
| ReCon (ICML 23) | 0.6M | ModelNet40 | modelnet | 93.6 | ckpt |
| | | OBJ_BG | scan_objbg | 94.49 | ckpt |
| | | OBJ_ONLY | scan_objonly | 92.94 | ckpt |
| | | PB_T50_RS | scan_hardest | 89.49 | ckpt |
| PointGPT-L (NeurIPS 23) | 2.4M | ModelNet40 | modelnet | 94.8 | ckpt |
| | | OBJ_BG | scan_objbg | 98.97 | ckpt |
| | | OBJ_ONLY | scan_objonly | 97.59 | ckpt |
| | | PB_T50_RS | scan_hardest | 94.83 | ckpt |
| PointGPT-L (voting) (NeurIPS 23) | 2.4M | ModelNet40 | modelnet | 95.3 | log |
| | | OBJ_BG | scan_objbg | 99.48 | log |
| | | OBJ_ONLY | scan_objonly | 97.76 | log |
| | | PB_T50_RS | scan_hardest | 96.18 | log |

Evaluation with the released checkpoints uses the following format:

CUDA_VISIBLE_DEVICES=<GPU> python main.py --test --config <path/to/cfg> --exp_name <path/to/output> --ckpts <path/to/ckpt>

# further enable voting mechanism
CUDA_VISIBLE_DEVICES=<GPU> python main.py --test --vote --config <path/to/cfg> --exp_name <path/to/output> --ckpts <path/to/ckpt>

All experiments are conducted on a single NVIDIA RTX 3090 GPU.
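For example, a concrete invocation might look like the following (the config and checkpoint paths here are placeholders for illustration; substitute the files you downloaded from the table above):

# hypothetical paths for illustration
CUDA_VISIBLE_DEVICES=0 python main.py --test --config cfgs/scan_objbg.yaml --exp_name eval_objbg --ckpts checkpoints/scan_objbg.pth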

t-SNE visualization

# t-SNE on ScanObjectNN
CUDA_VISIBLE_DEVICES=<GPU> python main.py --config <path/to/cfg> --ckpts <path/to/ckpt> --tsne --exp_name <path/to/output>
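Conceptually, this reduces the extracted features to 2-D with t-SNE and scatters them by class. A minimal standalone sketch of that step is shown below; the feature and label files are hypothetical dumps, and the repository's own plotting code may differ:

# standalone t-SNE sketch; "features.npy"/"labels.npy" are hypothetical dumps
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

feats = np.load("features.npy")   # (num_samples, feat_dim)
labels = np.load("labels.npy")    # (num_samples,)
emb = TSNE(n_components=2, perplexity=30, init="pca").fit_transform(feats)
plt.scatter(emb[:, 0], emb[:, 1], c=labels, s=4, cmap="tab20")
plt.savefig("tsne.png", dpi=300)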

To Do

Acknowledgement

This project is based on Point-BERT (paper, code), Point-MAE (paper, code), ACT (paper, code), ReCon (paper, code), PointGPT (paper, code), IDPT (paper, code), and DAPT (paper, code). Thanks for their wonderful work.

Citation

If you find this repository useful in your research, please consider giving a star ⭐ and a citation.

@article{liang2024pointgst,
  title={Parameter-Efficient Fine-Tuning in Spectral Domain for Point Cloud Learning},
  author={Liang, Dingkang and Feng, Tianrui and Zhou, Xin and Zhang, Yumeng and Zou, Zhikang and Bai, Xiang},
  journal={arXiv preprint arXiv:2410.08114},
  year={2024}
}