ldkong1205 / LaserMix

[CVPR 2023 Highlight] LaserMix for Semi-Supervised LiDAR Semantic Segmentation
https://ldkong.com/LaserMix
Apache License 2.0

Data Split & Training Epoch #3

Closed FrontierBreaker closed 1 year ago

ldkong1205 commented 1 year ago

Hi @FrontierBreaker, thanks for asking!

For your question: we randomly sample x% of the scans from the whole dataset and assume they are labeled, while the remaining scans are treated as unlabeled. The data splits (token names) for the nuScenes scans have been included under script/split/nuscenes/. As you might have noticed, we have released the initial code of this codebase. More code will be available around mid-December. Please let me know if you need the generation script for the random sampling. Thanks~
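
To illustrate the idea, here is a rough Python sketch of such a random split (the function name and output file names are illustrative only, not the actual generation script in this repo):

```python
import random

def make_split(all_tokens, labeled_ratio=0.1, seed=0):
    """Randomly mark `labeled_ratio` of the scan tokens as labeled."""
    rng = random.Random(seed)
    tokens = list(all_tokens)
    rng.shuffle(tokens)
    num_labeled = int(len(tokens) * labeled_ratio)
    labeled = sorted(tokens[:num_labeled])    # scans assumed to be labeled
    unlabeled = sorted(tokens[num_labeled:])  # remaining scans, treated as unlabeled
    return labeled, unlabeled

# Example usage (hypothetical file names, mirroring script/split/nuscenes/):
# labeled, unlabeled = make_split(all_scan_tokens, labeled_ratio=0.1)
# open("nuscenes_labeled_10.txt", "w").write("\n".join(labeled))
# open("nuscenes_unlabeled_10.txt", "w").write("\n".join(unlabeled))
```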

FrontierBreaker commented 1 year ago

Yeah, I've just noticed the split for the dataset. Thanks for your timely reply! : )

FrontierBreaker commented 1 year ago

By the way, I have another question: for the experiments under the low-annotation training setting, what is the typical training protocol? Specifically, when you sample 1/N of the data, do you train the model N times longer (in epochs) to keep the same number of iterations, or do you keep the same number of epochs? I am not familiar with the common practice in this domain. Hope for your answer~

ldkong1205 commented 1 year ago

By the way, I have another question: for the experiments under the low-annotation training setting, what is the typical training protocol? Specifically, when you sample 1/N of the data, do you train the model N times longer (in epochs) to keep the same number of iterations, or do you keep the same number of epochs? I am not familiar with the common practice in this domain. Hope for your answer~

Hi @FrontierBreaker, the question you raised is a very good one.

The common practice for setting the number of training epochs is largely empirical. In this work, we follow the configuration of CPS, a semi-supervised image segmentation method, which sets a different number of epochs for different splits.

In our task, suppose that a fully-supervised method needs $k$ epochs for training. We first expand the labeled token list so that its length matches that of the unlabeled token list. For example, the 10% labeled token list is repeated 9 times to match the 90% unlabeled list. The number of training epochs is then set based on this expanded list. The numbers of epochs for the 1%, 10%, 20%, and 50% splits are $0.5k$, $k$, $1.5k$, and $2k$, respectively.
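
To make the two rules concrete, here is a small sketch (illustrative only, not the repository code) of the list expansion and the per-split epoch multipliers:

```python
def expand_labeled_list(labeled, unlabeled):
    # e.g. a 10% labeled list is repeated ~9 times to match the 90% unlabeled list
    repeats = max(1, round(len(unlabeled) / len(labeled)))
    return labeled * repeats

# Empirical epoch multipliers for the 1%, 10%, 20%, and 50% splits
EPOCH_MULTIPLIER = {0.01: 0.5, 0.10: 1.0, 0.20: 1.5, 0.50: 2.0}

def num_epochs(split_ratio, k):
    # k = number of epochs a fully-supervised run would use
    return int(EPOCH_MULTIPLIER[split_ratio] * k)
```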

Please note that this kind of choice is empirical. Since we are using the OneCycle learning rate scheduler, the best models tend to appear near the end of training. However, the optimal number of epochs is still unknown. Our configuration keeps the training cost at a level similar to the fully-supervised scenario, and we empirically find that these epoch settings yield fairly good results.
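
For reference, a minimal PyTorch example of the OneCycle schedule mentioned above (the model, max_lr, and step counts here are placeholders, not our actual training settings):

```python
import torch

num_epochs, iters_per_epoch = 50, 100               # placeholder budget
model = torch.nn.Linear(16, 4)                       # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer, max_lr=0.01, epochs=num_epochs, steps_per_epoch=iters_per_epoch)

for _ in range(num_epochs * iters_per_epoch):
    optimizer.step()     # one training iteration (loss/backward omitted)
    scheduler.step()     # LR rises, then anneals toward the end of training
```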

Hope the above answers your question~

FrontierBreaker commented 1 year ago

Thank you for your very detailed answer! My question has been fully resolved!