ankur-chr opened this issue 2 years ago (status: Open)

Hi,
Are there any guidelines for training with a small dataset? I have a use case where trainA has around 850 images and trainB has only around 50 images.
Will this be considered a reasonable training scenario, given that trainB has so few images? How can we handle such a use case? Will 100 epochs be enough?

---

Training with such a small dataset will be very challenging. In general, you can try a higher cycle-consistency loss weight, and also the identity loss, to constrain the model more. You can also try adding more data augmentation, such as rotation or a random affine transform. This is not present in the current codebase, but it is not hard to add in base_dataset.py.