cs-mshah opened this issue 1 year ago
Thanks for the feedback. Performance improvements are on our roadmap and should land in the next major update.
The key bottleneck in these slower augmentations is the noise generation process, so we will use this issue to track approaches that speed up noise generation while retaining sufficient random variation in the distortions.
All of these augmentations should benefit once the noise generation process is improved:
We recently released a performance improvement via #270, which uses Numba to optimize loops. However, we found there remains a lot of opportunity to improve the noise generation processes that most heavily impact augmentation performance.
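To illustrate the kind of change #270 made, here is a minimal sketch of a per-pixel loop accelerated with Numba's `@njit`. The function and parameter names are illustrative, not Augraphy's actual code, and the fallback decorator lets the snippet run even where Numba is not installed (it just stays in plain Python):

```python
import numpy as np

try:
    from numba import njit  # JIT-compiles the loop body to machine code
except ImportError:
    def njit(func):  # fallback: run as ordinary (slow) Python
        return func

@njit
def threshold_noise(image, noise, cutoff):
    """Darken every pixel whose noise value exceeds `cutoff`.

    A per-pixel double loop like this is very slow in pure Python but
    compiles to tight machine code under Numba's nopython mode.
    """
    h, w = image.shape
    out = np.empty_like(image)
    for y in range(h):
        for x in range(w):
            out[y, x] = 0 if noise[y, x] > cutoff else image[y, x]
    return out
```

The loop structure is kept deliberately explicit: Numba optimizes plain Python loops over NumPy arrays, so there is no need to contort the code into vectorized form to get the speedup.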
See the greater-than-100% performance improvements from recent Augraphy updates: https://github.com/sparkfish/augraphy/issues/270#issuecomment-1502517272
Augmentations that rely on Perlin noise generation, including Letterpress and others, are particularly slow.
It would be great if the slower augmentations could be made more efficient or could leverage the GPU; the ones at the bottom of the list are currently too slow to use practically for training.
I tried to train a model using Letterpress and found that a single epoch took about 12x longer than training without the augmentation. I timed most augmentations on a batch of 7 images; here are the results:
Here is the code for timing:
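The original timing script is not reproduced above; as a rough sketch, a harness like the following, using only the standard library, would produce comparable numbers (the `time_augmentation` helper and the placeholder callables are hypothetical, not taken from the issue):

```python
import time

def time_augmentation(augment, images, repeats=3):
    """Return the best wall-clock time (in seconds) to augment a batch.

    `augment` is any callable taking a single image; taking the best of
    `repeats` runs reduces the effect of scheduling jitter.
    """
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        for img in images:
            augment(img)
        best = min(best, time.perf_counter() - start)
    return best

# Usage with a placeholder augmentation (a stand-in for e.g. Letterpress):
identity = lambda img: img
images = [[0] * 100 for _ in range(7)]  # 7 dummy "images", as in the test above
elapsed = time_augmentation(identity, images)
```

In practice each Augraphy augmentation instance would be timed this way in turn, and the results sorted to find the slowest ones.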