Closed BakerBunker closed 1 year ago
Hi @BakerBunker
It is already possible to obtain the selected/applied transform parameters, like this:
```python
from audiomentations import Compose, AddGaussianNoise, TimeStretch, PitchShift, Shift
import numpy as np

augment = Compose([
    AddGaussianNoise(min_amplitude=0.001, max_amplitude=0.015, p=0.5),
    TimeStretch(min_rate=0.8, max_rate=1.25, p=0.5),
    PitchShift(min_semitones=-4, max_semitones=4, p=0.5),
    Shift(min_fraction=-0.5, max_fraction=0.5, p=0.5),
])

# Generate 2 seconds of dummy audio for the sake of example
samples = np.random.uniform(low=-0.2, high=0.2, size=(32000,)).astype(np.float32)

# Augment/transform/perturb the audio data
augmented_samples = augment(samples=samples, sample_rate=16000)

for transform in augment.transforms:
    print(f"{transform.__class__.__name__}: {transform.parameters}")

# AddGaussianNoise: {'should_apply': True, 'amplitude': 0.0027702725003923272}
# TimeStretch: {'should_apply': True, 'rate': 1.158377360016495}
# PitchShift: {'should_apply': False}
# Shift: {'should_apply': False}
```
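Since each transform exposes a `parameters` dict with a `should_apply` flag, it is easy to filter down to only the transforms that actually fired, e.g. for logging. A minimal self-contained sketch (using hypothetical plain dicts shaped like `transform.parameters` above, so it runs without audiomentations installed):

```python
# Hypothetical parameter dicts, shaped like the transform.parameters output above
params = {
    "AddGaussianNoise": {"should_apply": True, "amplitude": 0.00277},
    "TimeStretch": {"should_apply": True, "rate": 1.158},
    "PitchShift": {"should_apply": False},
    "Shift": {"should_apply": False},
}

# Keep only the transforms that actually fired, dropping the flag itself
applied = {
    name: {k: v for k, v in p.items() if k != "should_apply"}
    for name, p in params.items()
    if p["should_apply"]
}
print(applied)
# {'AddGaussianNoise': {'amplitude': 0.00277}, 'TimeStretch': {'rate': 1.158}}
```

In the real case you would build `params` from `{t.__class__.__name__: t.parameters for t in augment.transforms}` after calling `augment(...)`.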
I have added a separate issue to document this on the audiomentations documentation website.