cdeepakroy opened this issue 7 years ago
That is currently not easily possible. You would have to manually draw samples from the parameters inside the augmenters, e.g. something like `affine.rotate.draw_samples((number_of_images,))`. Then later on you could reinstantiate the augmenters using fixed (deterministic) values for those parameters, e.g. `Affine(rotate=37)`. But that would be quite some work, and for some augmenters it is unclear what exactly the parameters are. E.g. for gaussian noise, do you save the sampled sigma value (one float value) or the whole sampled gaussian noise map (same size as the input image)? (The latter is also generated per image on the spot, so it is not easily accessible.)
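To make the "draw samples first, reinstantiate deterministically later" idea concrete, here is a minimal numpy-only sketch of the principle (it does not use imgaug's actual API; the rotation helper is invented for illustration):

```python
import numpy as np

# Hypothetical sketch (plain numpy, not imgaug): draw all rotation angles
# up front, store them, then build a fixed transform per image later.
rng = np.random.default_rng(42)
number_of_images = 4

# Step 1: sample the parameters once, analogous to
# affine.rotate.draw_samples((number_of_images,)).
rotations = rng.uniform(-45, 45, size=(number_of_images,))

# Step 2: later, construct a deterministic transform per stored value,
# analogous to Affine(rotate=<stored value>).
def make_fixed_rotation(angle_deg):
    theta = np.deg2rad(angle_deg)
    # 2x2 rotation matrix to apply to (x, y) coordinates
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

matrices = [make_fixed_rotation(a) for a in rotations]
```

Storing `rotations` is cheap (one float per image), which is the easy case; the gaussian-noise example above is the hard case, because the sampled parameter is an entire per-image array.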
What's your use case for wanting this? It seems to me that you would not gain anything over just using stochastic parameters directly, as saving and loading lots of values that were sampled from a probability distribution is effectively the same as sampling from it directly.
+1 here. For me, the use case would be applying the same noise to a set of images.
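That particular use case can be sketched without any parameter-saving machinery at all: sample one noise map and broadcast it over the batch. A hedged, numpy-only illustration (not imgaug's API):

```python
import numpy as np

# Plain-numpy sketch: draw a single gaussian noise map and add that
# *same* map to every image in the batch.
rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(3, 16, 16)).astype(np.float64)

sigma = 10.0
noise_map = rng.normal(0.0, sigma, size=images.shape[1:])  # one shared map

# Broadcasting applies the identical noise map to all 3 images.
noisy = images + noise_map
```

Every image in `noisy` differs from its original by exactly the same `noise_map`, which is the "same noise on a set of images" behaviour described above.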
I just landed at this package and am amazed by the functionality it provides. Will be using it a lot on my computer vision + machine learning projects.
I was just trying to use it to augment the images on one of my projects and I have a suggestion for an additional feature.
The `imgaug.augmenters.Sequential` class currently has an `augment_images` function that takes a bunch of images and augments each of them using randomly drawn augmentation parameters for each of its child augmenters. Instead of returning the augmented images, which takes more memory, is it possible to get only the randomly chosen augmentation parameters for each image?
I would like to just store these parameters along with the path to the image, and whenever I load the image, I would like to call an `augment_image` function with the corresponding augmentation parameters to get the augmented image.
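The proposed workflow can be sketched as follows. All names here are invented for illustration, not part of imgaug; storing a per-record seed alongside the scalar parameters is one way around the "do you save sigma or the whole noise map?" problem, since the map can be regenerated from the seed:

```python
import json
import numpy as np

# Hypothetical sketch of the proposed workflow: store only lightweight
# parameters (plus a seed) next to each image path, then regenerate the
# exact same augmentation when the image is loaded.
master = np.random.default_rng(1)

records = [
    {"path": f"images/img_{i}.png",   # made-up paths for illustration
     "rotate_deg": float(master.uniform(-30, 30)),
     "noise_sigma": 5.0,
     "seed": int(master.integers(0, 2**31))}
    for i in range(3)
]
blob = json.dumps(records)  # tiny compared to storing augmented pixels

def apply_record(image, rec):
    # Re-seeding makes the full noise map reproducible from a few numbers.
    rng = np.random.default_rng(rec["seed"])
    return image + rng.normal(0.0, rec["noise_sigma"], size=image.shape)

img = np.zeros((8, 8))          # stand-in for the loaded image
rec = json.loads(blob)[0]
out1 = apply_record(img, rec)
out2 = apply_record(img, rec)   # identical result on every re-application
```

The rotation parameter is stored but not applied in this sketch; the point is that the serialized record is a few numbers per image rather than an augmented copy of the image itself.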