filonenkoa opened 1 year ago
It seems like the original authors have abandoned this codebase. I'm working on improving this code in my fork https://github.com/filonenkoa/fas-patchnet. Feel free to request features there. The code is not backwards compatible with the current repo. The main goal of the fork is to make this code work on multiple GPUs and with multiple datasets at once.
Improvements to the original repository
- [x] More augmentations (torchvision ➞ albumentations)
- [x] TurboJPEG support for faster image decoding
- [x] DDP support
- [x] Multiple datasets training
- [x] Utility to convert datasets
- [x] Compute FAS-related metrics (ACER, etc.)
- [x] Incorporate loss into a model (the whole inference can be exported to a single ONNX file; see the sketch after this list)
- [x] Telegram reports
- [x] Compute metrics for each val dataset separately
- [x] Split validation across multiple GPUs
- [x] Balanced sampler suitable for DDP
- [x] Conversion to ONNX
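
Regarding the loss-in-model and ONNX items above, here is an illustrative sketch, not the fork's actual export code: the `FASWithHead` wrapper, the plain ResNet-18 backbone, and the 224x224 input size are all assumptions. It shows how bundling the classification head (normally part of the loss) with the backbone lets a single `torch.onnx.export` call produce one file that maps an image straight to class probabilities.

```python
import torch
import torch.nn.functional as F
from torch import nn
import torchvision


class FASWithHead(nn.Module):
    """Hypothetical wrapper: backbone + classification head in one exportable module."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.backbone = torchvision.models.resnet18(weights=None)
        self.backbone.fc = nn.Identity()       # keep the raw 512-d descriptor
        self.fc = nn.Linear(512, num_classes)  # head that would otherwise live in the loss

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        descriptor = self.backbone(x)          # (N, 512)
        return F.softmax(self.fc(descriptor), dim=-1)


model = FASWithHead().eval()
dummy = torch.randn(1, 3, 224, 224)            # assumed input resolution
torch.onnx.export(
    model, dummy, "fas_patchnet.onnx",
    input_names=["image"], output_names=["probs"],
    dynamic_axes={"image": {0: "batch"}, "probs": {0: "batch"}},
)
```

Keeping the softmax inside the exported graph means downstream runtimes (ONNX Runtime, TensorRT, etc.) return class probabilities directly, with no separate post-processing step.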
I used this fork to train and set the backbone to resnet18, but got this error:

```
return F.linear(input, self.weight, self.bias)
RuntimeError: mat1 and mat2 shapes cannot be multiplied (32x512 and 128x2)
```

Then I set `descriptor_size` to 512 and still got an error:

```
return F.linear(input, self.weight, self.bias)
RuntimeError: mat1 and mat2 shapes cannot be multiplied (16384x1 and 512x2)
```
I fixed it by changing `descriptor` to `descriptor.squeeze()` in the `F.softmax` branch of `predict`:
```python
def predict(self, descriptor: Tensor) -> Tensor:
    if not self.__use_softmax:
        return self.patch_loss.amsm_loss.fc(descriptor)
    else:
        return F.softmax(
            self.patch_loss.amsm_loss.s * self.patch_loss.amsm_loss.fc(descriptor.squeeze()),
            dim=-1,
        )
```
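
For context, a minimal sketch of why the squeeze helps (assuming batch size 32 and `descriptor_size = 512`, matching the second error): a ResNet-18 backbone ends with global average pooling, so the descriptor arrives as `(N, 512, 1, 1)`, while the `nn.Linear(512, 2)` head multiplies over the last dimension and sees `16384x1` vs `512x2`.

```python
import torch
import torch.nn.functional as F
from torch import nn

backbone_out = torch.randn(32, 512, 1, 1)  # assumed descriptor shape after ResNet-18 avg pool
fc = nn.Linear(512, 2)                     # assumed head: descriptor_size=512, 2 classes

# fc(backbone_out) fails: F.linear acts on the last dim (size 1), giving
# "mat1 and mat2 shapes cannot be multiplied (16384x1 and 512x2)"
descriptor = backbone_out.squeeze()        # -> (32, 512)
probs = F.softmax(fc(descriptor), dim=-1)  # -> (32, 2)
print(probs.shape)                         # torch.Size([32, 2])
```

Note that `backbone_out.flatten(1)` would be a slightly safer alternative, since `.squeeze()` also drops the batch dimension when N == 1.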