Closed ThomasArtProcessors closed 3 years ago
Hi @ThomasArtProcessors, yes, you're right! I'll start working on it and keep you posted. If you have any more suggestions for this feature, you can also leave them here.
Thanks for the quick reply. I'm doing my own drawing of the values. And I feel like it would be good to have an option to normalise the values as if you have one very loud sample it will shrink the whole waveform.
Hi @ThomasArtProcessors, now you can use `compressAmplitudes(<samples_per_second>)`. This method will merge the Amplituda data according to the parameter.
New version: implementation 'com.github.lincollincol:Amplituda:2.0.2'
For example:
* Input audio file duration: 200 seconds
* Raw Amplituda result: 8000 amplitudes
* So, 8000 [amplitudes] / 200 [seconds] = 40 [amplitudes per second]

Call `compressAmplitudes(1)` (1 means the preferred number of amplitudes per second). Now the amplitudes will be compressed to 1 amp per second: 200 seconds = 200 amplitudes.
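To make the arithmetic above concrete, here is a minimal pure-Kotlin sketch of what a compression like this conceptually does. This is only an illustration of the idea, not Amplituda's actual implementation, and the function name is mine:

```kotlin
// Illustrative sketch only, not Amplituda's real implementation:
// merge raw amplitudes down to `samplesPerSecond` values per second.
fun compressAmplitudesSketch(
    amplitudes: List<Int>,
    durationSec: Int,
    samplesPerSecond: Int
): List<Int> {
    val targetCount = durationSec * samplesPerSecond        // e.g. 200 * 1 = 200
    val chunkSize = (amplitudes.size / targetCount).coerceAtLeast(1)
    return amplitudes
        .chunked(chunkSize) { it.average().toInt() }        // average each chunk
        .take(targetCount)                                  // drop any remainder chunk
}
```

With 8000 amplitudes and a 200-second file, `compressAmplitudesSketch(amps, 200, 1)` averages chunks of 40 samples and returns 200 values, matching the example above.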
You can also increase the parameter value. Here is an example screenshot with a custom waveform and different `samplesPerSecond` values:
Doc:
https://github.com/lincollincol/Amplituda#-compress-output-data
Hi again @lincollincol! I was wondering if it is possible to get samples averaged over longer periods of time. In my current designs I need the waveform to match the screen size, no matter how long the audio is. That means the same number of points/samples regardless of audio length, so longer audio files would get one sample averaged over, say, 5 seconds or more. Imagine a 10-minute audio waveform that has the same number of points as a 5-second clip. We would need to be able to say "AVERAGE values to have 1 sample every xx seconds". Ideally this value would be a float, like 1 sample every 10.3 seconds. Currently we can only set it to a minimum of 1 sample per second. Do you think this would be possible?
@ThomasArtProcessors Hello! Amplituda provides only the extracted audio data, plus the compress (custom number of samples) and cache features.
Here are some instructions which will help you draw a flexible waveform:

1. **Calculations**
* `Canvas size` - desired canvas size (`width` and `paddings` in px). Most likely the desired width and padding will be specified by the user via function parameters or view attrs.
* `Spike width` - `single spike size` = `desired spike width` + `desired spike padding`.
* `Spikes per canvas width` - the number of spikes that the canvas can accommodate: `spikes` = `canvas width` / `single spike size`.
* `Samples` (or `amplitudes`) - a list of average samples, one for each spike. Divide the original amplitudes list into chunks and then average each chunk; you then get an average sample for each spike of your waveform.
2. **Code**

This is approximately how your code should look (Kotlin "pseudocode"):
```kotlin
val amplitudesList: List<Int> = Amplituda(context).process(...).amplitudesAsList()

val desiredSpikeWidth: Float = 4.px
val desiredSpikePadding: Float = 2.px

val canvas: Canvas = /* init canvas */

val singleSpikeWidth: Float = desiredSpikeWidth + desiredSpikePadding
val spikesPerCanvas: Int = (canvas.width / singleSpikeWidth).toInt()

// Average the raw amplitudes down to one value per spike
val amplitudePerSpikeList: List<Float> = amplitudesList
    .chunked(amplitudesList.count() / spikesPerCanvas)
    .map { it.average().toFloat() }

amplitudePerSpikeList.forEachIndexed { spikeIndex, spikeHeight ->
    drawRoundRect(
        brush = waveformBrush,
        topLeft = Offset(
            x = spikeIndex * singleSpikeWidth,
            y = canvas.height / 2F - spikeHeight / 2F // Center spikes vertically
        ),
        size = Size(
            width = singleSpikeWidth,
            height = spikeHeight
        )
    )
}
```
3. **Libraries**
* `Compose`. I have recently created [AudioWaveform](https://github.com/lincollincol/compose-audiowaveform) library for Jetpack Compose which is compatible with Amplituda. I used the instructions described above to draw the waveform. So, you can check the full code [here](https://github.com/lincollincol/compose-audiowaveform/blob/master/audiowaveform/src/main/java/com/linc/audiowaveform/AudioWaveform.kt).
* `XML`. If you're looking for an android `View` implementation, you can take a look at [WaveformSeekBar](https://github.com/massoudss/waveformSeekBar), which is also compatible with Amplituda.
<p align="center">
<img src="https://user-images.githubusercontent.com/32796762/194043951-862f6624-f667-4502-941b-0d1d3a8af73a.gif" width="25%"/>
</p>
@ThomasArtProcessors If you have any questions, I'll be glad to answer :) We can continue our conversation here (#51)
Hi! Sorry, I think I explained it badly.
I'm already drawing the waveform and that part works fine.
The issue is that I can't average values at less than one sample per second in the line:
`Compress.withParams(Compress.AVERAGE, 1)`
I was thinking of using the combination of
```kotlin
.chunked(amplitudesList.count() / spikesPerCanvas)
.map { it.average() }
```
but I was wondering if the library could provide something like:
`Compress.withParams(Compress.AVERAGE, 1_sample_every_10.3_seconds)`
But all good, I can use the chunk + map combination with 1 sample per second for now.
Thanks.
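As an aside, until the library supports fractional intervals, fractional windows can be averaged by hand if the audio duration is known. A sketch under that assumption (the function name and parameters are mine, not Amplituda API):

```kotlin
// Illustrative workaround, not Amplituda API: average raw amplitudes into
// one sample every `windowSec` seconds, where `windowSec` may be fractional.
fun averageEverySeconds(
    amplitudes: List<Int>,
    durationSec: Double,   // total audio duration in seconds
    windowSec: Double      // e.g. 10.3 -> one sample every 10.3 seconds
): List<Double> {
    val samplesPerSecond = amplitudes.size / durationSec
    // Convert the fractional time window into a whole number of raw samples
    val window = (samplesPerSecond * windowSec).toInt().coerceAtLeast(1)
    return amplitudes.chunked(window) { it.average() }
}
```

For example, 100 samples over 10 seconds with `windowSec = 2.5` gives a window of 25 raw samples and 4 output values.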
Though the issue comes in when you have something like this:
@ThomasArtProcessors I have created a custom `chunkedToSize()` recursive function, which filters out the remainder items by index.
Test cases:
* Case 1: remainder > 0
  * 100 / 80 = 1.25 (rounded to int = 1, the chunk size)
  * 100 % 80 = 20 (remainder)
  * 100 / 20 = 5 (remainder item index)
* Case 2: remainder > 0 AND samples.size / remainder is a float value
  * 100 / 60 = 1.6
  * 100 % 60 = 40 (remainder)
  * 100 / 40 = 2.5
  * Run `chunkedToSize()` again until the new samples list has the required size
So, if we can't average these N items, we should remove them by index.
```kotlin
import kotlin.math.ceil
import kotlin.math.roundToInt

internal fun <T> Iterable<T>.chunkedToSize(size: Int, transform: (List<T>) -> T): List<T> {
    val chunkSize = count() / size
    val remainder = count() % size
    val remainderIndex = ceil(count().safeDiv(remainder)).roundToInt()
    // Filter out remainder items by index, then chunk and transform the rest
    val chunkedIteration = filterIndexed { index, _ ->
        remainderIndex == 0 || index % remainderIndex != 0
    }.chunked(chunkSize, transform)
    // Repeat until the result has the required size
    return when (chunkedIteration.count()) {
        size -> chunkedIteration
        else -> chunkedIteration.chunkedToSize(size, transform)
    }
}

internal fun Int.safeDiv(value: Int): Float {
    return if (value == 0) 0F else this / value.toFloat()
}
```
```kotlin
val samples = List(100) { it }
val spikes = 80
samples.map(Int::toFloat).chunkedToSize(spikes) { it.average().toFloat() }
```
Thanks!
I ended up doing this, though I don't know which one is more efficient. `totalWaveformBars` is how many samples you want.
I first run a chunked + map { average } pass, then take the number of extra samples that remain, double that to get an index, and average those in groups of two from the end of the list.
I guess the difference is that I arbitrarily decide to average the end samples, while your solution deletes samples evenly throughout.
```kotlin
import kotlin.math.roundToInt

private fun normaliseAmplitudes(originalSamples: List<Int>): List<Int> {
    // We can't have more samples than we can fit on the width,
    // so the list needs to be reduced if that's the case
    return if (originalSamples.count() > totalWaveformBars) {
        val chunkSize: Int = originalSamples.count() / totalWaveformBars
        val averagedSamples = if (chunkSize == 1) {
            originalSamples
        } else { // group and average samples if there are enough to do so
            originalSamples.chunked(chunkSize)
                .map { it.average().roundToInt() }
        }
        // Make sure we end up with exactly the right number of samples
        if (averagedSamples.size > totalWaveformBars) {
            val finalSamples = mutableListOf<Int>()
            val sizeDiff = averagedSamples.size - totalWaveformBars
            // Add the first few items untouched
            for (i in 0 until averagedSamples.size - (sizeDiff * 2)) {
                finalSamples.add(averagedSamples[i])
            }
            // Add the rest averaged in pairs to reach the right amount
            finalSamples.addAll(
                averagedSamples
                    .subList(averagedSamples.size - (sizeDiff * 2), averagedSamples.size)
                    .chunked(2)
                    .map { it.average().toInt() }
            )
            finalSamples
        } else {
            averagedSamples
        }
    } else {
        originalSamples
    }
}
```
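The tail-averaging step described above can also be isolated into a small standalone helper, which makes the behaviour easier to check. This is just a sketch with hypothetical names, mirroring the logic in the function above:

```kotlin
// Standalone sketch of the tail-averaging trick used above: given `diff`
// surplus samples, pair-average the last 2*diff items so the list
// shrinks by exactly `diff`.
fun shrinkTail(samples: List<Int>, target: Int): List<Int> {
    val diff = samples.size - target
    if (diff <= 0) return samples
    val head = samples.dropLast(diff * 2)                 // untouched prefix
    val tail = samples.takeLast(diff * 2)
        .chunked(2) { it.average().toInt() }              // average in pairs
    return head + tail
}
```

Like the original, this assumes the surplus is at most half the list; for example `shrinkTail(listOf(1, 2, 3, 4, 5, 6), 4)` keeps `[1, 2]` and averages `[3, 4]` and `[5, 6]` into two values.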
Hi there. Would it be possible to have the sampling rate as a parameter we can set? We might not need that many values. We can downsample the result ourselves, but that's doing extra work twice: the extra sampling calculation on the audio, then the averaging of slices of the result. Thanks.