dreamquark-ai / tabnet

PyTorch implementation of TabNet paper : https://arxiv.org/pdf/1908.07442.pdf
https://dreamquark-ai.github.io/tabnet/
MIT License

Lightweight fine-tuning or few-shot learning for limited labeled data #536

Open Septimus2024 opened 9 months ago

Septimus2024 commented 9 months ago

Feature request

After semi-supervised pretraining, can we do lightweight fine-tuning or few-shot learning instead of full supervised classification?

What is the expected behavior? Instead of fine-tuning on a decent amount of labeled data, is it possible to do some lightweight fine-tuning (e.g., on fewer than 100 labeled examples) or few-shot learning instead of standard classification?

What is the motivation or use case for adding/changing the behavior? We only have limited labeled data available to fine-tune the model.

How should this be implemented in your opinion? For few-shot learning, perhaps change the loss function.

Are you willing to work on this yourself? Yes.
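One possible direction for the "change the loss function" idea: rather than training a full classification head on fewer than 100 labels, use the pretrained encoder as a frozen feature extractor and classify by nearest class prototype in embedding space (a prototypical-network-style approach, which needs no gradient updates at all). This is only a sketch of the idea, not TabNet's API; the 2-D arrays below stand in for embeddings that would, in practice, come from the pretrained TabNet encoder applied to the few labeled rows.

```python
import numpy as np

def fit_prototypes(embeddings, labels):
    """Compute one prototype (mean embedding) per class from the few labeled rows."""
    classes = np.unique(labels)
    protos = np.stack([embeddings[labels == c].mean(axis=0) for c in classes])
    return classes, protos

def predict_nearest_prototype(embeddings, classes, protos):
    """Assign each row to the class whose prototype is closest in Euclidean distance."""
    # dists[i, j] = distance from query row i to prototype j
    dists = np.linalg.norm(embeddings[:, None, :] - protos[None, :, :], axis=-1)
    return classes[dists.argmin(axis=1)]

# Toy demo: 4 labeled "support" rows with 2-D stand-in embeddings.
X_support = np.array([[0.0, 0.0], [0.1, -0.1], [5.0, 5.0], [4.9, 5.1]])
y_support = np.array([0, 0, 1, 1])
classes, protos = fit_prototypes(X_support, y_support)

# Unlabeled "query" rows are classified by nearest prototype.
X_query = np.array([[0.2, 0.1], [5.2, 4.8]])
preds = predict_nearest_prototype(X_query, classes, protos)
```

Because the prototypes are just class means, this scales to arbitrarily small labeled sets; a learned distance or a lightly fine-tuned projection head could be layered on top if more labels become available.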

Optimox commented 8 months ago

Feel free to open a PR with a concrete proposition.