Hi, thanks again for your excellent work. I have a question about the pre-trained model for S3DIS: since S3DIS adopts a different voxel size than ScanNet, does it share the same pre-trained weights used for ScanNet fine-tuning? If not, it would be really helpful if you could share the pre-training details for S3DIS.

Hi, it shares the same pre-trained model. We always use a 2 cm voxel size during pre-training, regardless of the downstream voxel size.

That's a thought-provoking conclusion. I previously believed that pre-training weights could not be shared across downstream tasks with different voxel sizes. Being able to share them makes the pre-training stage significantly more valuable. Thanks for your insightful response.

Closed: Gofinge closed this 1 year ago
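The reason one checkpoint can transfer across voxel sizes is that the voxel size only enters at the voxelization step: the sparse-convolution backbone sees integer grid coordinates, so its kernel weights are agnostic to the metric size of a voxel. A minimal sketch of that quantization step in plain NumPy (the `voxelize` helper is illustrative, not taken from the repository's actual code):

```python
import numpy as np

def voxelize(points, voxel_size):
    """Quantize a point cloud of shape (N, 3) onto a grid of the given voxel size."""
    # Continuous coordinates -> integer voxel indices.
    coords = np.floor(points / voxel_size).astype(np.int64)
    # Keep one representative point per occupied voxel.
    _, keep = np.unique(coords, axis=0, return_index=True)
    keep.sort()
    return coords[keep], points[keep]

rng = np.random.default_rng(0)
points = rng.random((1000, 3)) * 5.0    # a synthetic 5 m x 5 m x 5 m cloud

coords_2cm, _ = voxelize(points, 0.02)  # pre-training grid (2 cm)
coords_5cm, _ = voxelize(points, 0.05)  # a coarser downstream grid (5 cm)

# The backbone consumes integer coordinates in both cases; its sparse-conv
# kernels index neighbor offsets on the grid (e.g. +/-1), not metric
# distances, which is why one checkpoint can serve several voxel sizes.
```

In other words, changing the downstream voxel size rescales what one grid step means in metres, but the network's weights only ever see the grid.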