Open ZYMoridae opened 3 years ago
I want to cut out a particular few seconds of the waveform data and display it.
OK, so could it work in the same way as Array.slice()? This code would return a new WaveformData instance containing the waveform from startIndex to endIndex, leaving the original waveformData unchanged:

const segment = waveformData.slice(startIndex, endIndex);
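For comparison, this is exactly how Array.prototype.slice behaves: it copies the half-open range [start, end) into a new array and leaves the original untouched:

```javascript
const samples = [0, 3, -2, 5, 1, -4];

// slice(start, end) copies elements [start, end) into a new array
const segment = samples.slice(1, 4);

console.log(segment);        // [3, -2, 5]
console.log(samples.length); // 6 -- original unchanged
```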
Yes, that's the feature I'm hoping could be added.
I can't promise when I'll be able to look at this, but if you (or anyone else) wants to work on it and send a pull request, I can give some pointers.
I will give it a try. May I ask something about the piece of code below?
var INT8_MIN = -128; // 8-bit sample range used for clamping below
var INT8_MAX = 127;

var scale = evt.data.scale;
var amplitude_scale = evt.data.amplitude_scale;
var split_channels = evt.data.split_channels;
var audio_buffer = evt.data.audio_buffer;
var channels = audio_buffer.channels;

// One output channel if the input channels are mixed down, otherwise one per input channel
var output_channels = split_channels ? channels.length : 1;
var version = output_channels === 1 ? 1 : 2;
var header_size = version === 1 ? 20 : 24;

// Each data point stores a min and a max value (1 byte each) per output channel
var data_length = calculateWaveformDataLength(audio_buffer.length, scale);
var total_size = header_size + data_length * 2 * output_channels;
var data_object = new DataView(new ArrayBuffer(total_size));

var scale_counter = 0;
var buffer_length = audio_buffer.length;
var offset = header_size;
var channel, i;

var min_value = new Array(output_channels);
var max_value = new Array(output_channels);

for (channel = 0; channel < output_channels; channel++) {
  min_value[channel] = Infinity;
  max_value[channel] = -Infinity;
}

// Header (all fields little-endian)
data_object.setInt32(0, version, true);                 // Version
data_object.setUint32(4, 1, true);                      // Is 8 bit?
data_object.setInt32(8, audio_buffer.sampleRate, true); // Sample rate
data_object.setInt32(12, scale, true);                  // Scale
data_object.setInt32(16, data_length, true);            // Length

if (version === 2) {
  data_object.setInt32(20, output_channels, true);      // Channel count
}

for (i = 0; i < buffer_length; i++) {
  var sample = 0;

  if (output_channels === 1) {
    // Mix all input channels down to a single channel
    for (channel = 0; channel < channels.length; ++channel) {
      sample += channels[channel][i];
    }

    sample = Math.floor(INT8_MAX * sample * amplitude_scale / channels.length);

    if (sample < min_value[0]) {
      min_value[0] = sample;

      if (min_value[0] < INT8_MIN) {
        min_value[0] = INT8_MIN;
      }
    }

    if (sample > max_value[0]) {
      max_value[0] = sample;

      if (max_value[0] > INT8_MAX) {
        max_value[0] = INT8_MAX;
      }
    }
  }
  else {
    for (channel = 0; channel < output_channels; ++channel) {
      sample = Math.floor(INT8_MAX * channels[channel][i] * amplitude_scale);

      if (sample < min_value[channel]) {
        min_value[channel] = sample;

        if (min_value[channel] < INT8_MIN) {
          min_value[channel] = INT8_MIN;
        }
      }

      if (sample > max_value[channel]) {
        max_value[channel] = sample;

        if (max_value[channel] > INT8_MAX) {
          max_value[channel] = INT8_MAX;
        }
      }
    }
  }

  // Every `scale` input samples, emit one min/max pair per channel
  if (++scale_counter === scale) {
    for (channel = 0; channel < output_channels; channel++) {
      data_object.setInt8(offset++, min_value[channel]);
      data_object.setInt8(offset++, max_value[channel]);

      min_value[channel] = Infinity;
      max_value[channel] = -Infinity;
    }

    scale_counter = 0;
  }
}

// Flush any remaining partial block
if (scale_counter > 0) {
  for (channel = 0; channel < output_channels; channel++) {
    data_object.setInt8(offset++, min_value[channel]);
    data_object.setInt8(offset++, max_value[channel]);
  }
}

this.postMessage(data_object);
this.removeEventListener("message", listener);
this.close();
The data format here is based on the 'wav' format, am I right?
Yes, it was inspired by the wav format. There's documentation for it here.
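To see how the header layout works, the fields can be read back with a DataView at the same offsets the worker writes them. A minimal sketch in plain Node.js (not part of the library; the field values below are made up for the round trip):

```javascript
// Parse the fixed-size header written by the worker code above.
// Offsets mirror the setInt32/setUint32 calls: version, 8-bit flag,
// sample rate, scale, length, and (version 2 only) channel count.
function parseHeader(arrayBuffer) {
  const view = new DataView(arrayBuffer);
  const version = view.getInt32(0, true);

  return {
    version: version,
    is8Bit: view.getUint32(4, true) === 1,
    sampleRate: view.getInt32(8, true),
    scale: view.getInt32(12, true),
    length: view.getInt32(16, true),
    channels: version === 2 ? view.getInt32(20, true) : 1
  };
}

// Build a fake version-2 header to demonstrate the round trip
const buf = new ArrayBuffer(24);
const out = new DataView(buf);
out.setInt32(0, 2, true);      // version
out.setUint32(4, 1, true);     // 8-bit data
out.setInt32(8, 44100, true);  // sample rate
out.setInt32(12, 512, true);   // scale (input samples per min/max pair)
out.setInt32(16, 1000, true);  // number of min/max pairs
out.setInt32(20, 2, true);     // output channels

const header = parseHeader(buf);
console.log(header.sampleRate, header.channels); // 44100 2
```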
I know this is really old, but it would be a great feature. Just like concat, sometimes you want to "trim" the duration to make the data match the duration of the player. I like the idea of slice, but would rather just have start/end time markers.
So instead of slice(), which would return a new object, a trim() method that removes data from the existing object?
Slice or trim would work. slice is the more programming term, trim the more media-centric one. They both seem to have the same effect: set a start and end, returning the middle. I prefer to work in time values when adjusting the waveform data.
The distinction I was thinking of was slice(), which leaves the original waveform unchanged, vs trim() (or some other name), which would modify the original waveform object in-place.
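The copy-vs-mutate distinction can be illustrated with plain arrays (not the WaveformData API): slice() returns a copy, while splice() removes elements from the array itself, which is what a trim()-style method would do to the waveform object:

```javascript
// slice-style: returns a copy, the original is unchanged
const data = [10, 20, 30, 40, 50];
const copy = data.slice(1, 4);
console.log(copy, data.length); // [20, 30, 40] 5

// trim-style: splice() mutates the array in place
const trimmed = [10, 20, 30, 40, 50];
trimmed.splice(4);    // drop everything from index 4 onwards
trimmed.splice(0, 1); // drop everything before index 1
console.log(trimmed); // [20, 30, 40]
```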
Ahh, I can see the benefit of both in different use cases. But if I was interested in slicing out a bunch of pieces of an original and then concatenating them back into a single waveform, I think leaving the original waveform unchanged would be best.
I've added a slice() method, now available in v4.4.0. Here's the API documentation.
We have a .concat() method for joining two waveforms, so it could make sense to add this. Can you describe your use case in a bit more detail, though?