faris-k / fastsiam-wafers

Self-Supervised Representation Learning of Wafer Maps with FastSiam
MIT License

Use Albumentations instead of Torchvision Transforms #4

Closed faris-k closed 1 year ago

faris-k commented 1 year ago

Minor enhancement, but since speed is a concern with self-supervised pretraining, use Albumentations instead of torchvision and lightly's transforms. Throughput is reportedly much higher: https://github.com/albumentations-team/albumentations#benchmarking-results

faris-k commented 1 year ago

Albumentations probably won't work with lightly's collate functions, so switching from torchvision to Albumentations would likely require reworking all of the custom collate functions here. Instead, it would be better to re-implement whatever we need from Albumentations as torchvision-style transforms. One that comes to mind is the OneOf transform, which would be super useful for combining DPWTransform and DieNoise (since both probably shouldn't be applied at once, or only together with a very low probability).
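A minimal sketch of what a torchvision-style OneOf could look like. This is not Albumentations' actual implementation (which also weights the choice by each transform's own probability); it's just a callable that picks one transform uniformly at random, with an overall probability `p` of applying anything at all. `DPWTransform` and `DieNoise` below are stand-ins for the transforms in this repo; their real constructors may differ.

```python
import random


class OneOf:
    """Apply exactly one randomly chosen transform from `transforms`.

    With probability `p`, pick one transform uniformly at random and
    apply it; otherwise return the input unchanged. Unlike Albumentations'
    OneOf, this sketch does not weight the choice by per-transform
    probabilities.
    """

    def __init__(self, transforms, p=1.0):
        self.transforms = list(transforms)
        self.p = p

    def __call__(self, x):
        if self.transforms and random.random() < self.p:
            transform = random.choice(self.transforms)
            return transform(x)
        return x
```

Usage would then look something like `OneOf([DPWTransform(), DieNoise()], p=0.2)` inside a regular `torchvision.transforms.Compose`, so at most one of the two wafer-map augmentations fires per sample, and neither fires 80% of the time.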