sishyanet opened this issue 3 years ago
This is a duplicate of #146 but I'll point the other issue to here since your description is quite clear and comprehensive.
I have just implemented the Android side on the dev branch.
The Android implementation is now published in release 0.8.0.
Thanks, much appreciated! Would you be interested in an iOS implementation PR? I did some research and it looks like the easiest way to do this may be to send an AVAudioPCMBuffer with the appropriate number of zero-valued samples to an AVAudioPlayerNode for playback with scheduleBuffer(). If you don't have the bandwidth to try this out, I can take a crack at it.
Copying comment from #507
On a separate note, given your great work on the just_audio package, it would be awesome if it supported raw PCM playback by optionally setting bits-per-sample and sample-rate flags on incoming raw PCM. That would be useful for people who record raw PCM and then want to use your package for all of its out-of-the-box features. I don't think ExoPlayer supports raw PCM under the hood.
I wrote a one-page WAV-header creator when I used your package a while ago, to convert my PCM/raw file to WAV for playback in just_audio. Let me know if you want it.
Hi @sidetraxaudio this sounds really great for a number of things, for instance this issue. I don't know of any better way to leverage AVQueuePlayer to insert silence of a certain duration other than to dynamically generate a WAV. I had planned to use AVAssetResourceLoaderDelegate for this.
The other use case for this would be #280 (although this is technically already doable via StreamAudioSource).
Hello, may I ask if there is a better solution for this? I have encountered the same problem.
I think @sidetraxaudio has a solution, if he is able to share it. Generating a WAV file at runtime is a good approach, with all-zero samples if you just want silence.
Otherwise, what you can do is create your own silent mp3 file of the right duration and add that to your project as an asset. But it would still be a good first PR if someone would like to contribute some more dynamic code to generate this on the fly.
@sishyanet I'm sorry I had completely missed your comment, but yes that would actually be a good solution too. I guess it does mean that it might be harder to replicate on Linux and Windows since we'd need a native implementation for each platform.
So perhaps the most flexible approach is still to create a StreamAudioSource that outputs as much silence as is needed, dynamically. That just requires outputting audio in an encoded audio format, and WAV is probably the easiest one to do.
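For reference, here is a minimal sketch of what such an on-the-fly silent-WAV generator computes. The thread's code is Dart; Python is used here only for compactness, and the format constants (44.1 kHz, 16-bit, stereo) are illustrative assumptions, not anything the package mandates:

```python
import struct

def silent_wav(seconds, sample_rate=44100, channels=2, bits=16):
    """Build a complete WAV file of silence entirely in memory."""
    data_size = seconds * sample_rate * channels * (bits // 8)
    header = b"RIFF" + struct.pack("<I", 36 + data_size) + b"WAVE"
    header += b"fmt " + struct.pack(
        "<IHHIIHH",
        16,                                    # fmt chunk size
        1,                                     # audio format: PCM
        channels,
        sample_rate,
        sample_rate * channels * (bits // 8),  # byte rate
        channels * (bits // 8),                # block align
        bits,
    )
    header += b"data" + struct.pack("<I", data_size)
    return header + b"\x00" * data_size  # zero samples are silence

wav = silent_wav(1)  # 44-byte header + 176400 bytes of PCM
```

The only "dynamic" part is the duration; everything else in the 44-byte header is fixed once the format constants are chosen, which is why WAV is such an easy target for generating silence on the fly.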
Hi, will SilenceAudioSource be implemented for iOS, or will StreamAudioSource be the way forward? Thank you.
In terms of official support, my preference is probably to implement it natively at the iOS level, and I am always open to a pull request from anyone who would like to implement that.
Although in the meantime, StreamAudioSource permits anyone to take this into their own hands using the current API. If anyone does that, please also consider sharing the code for others.
I hope that SilenceAudioSource gets implemented for iOS too, as it is so easy to use. Also, I still haven't cracked how to implement this through StreamAudioSource. My use case is that the duration is dynamic and based on a user setting. Implementing that with StreamAudioSource and a timer is too brittle IMO.
> Implementing that with StreamAudioSource and a timer is too brittle IMO.
How so? Both ways of implementing this will work equally well in theory, and of course either way, any new code will be brittle at first until it has gone through testing and polishing.
It's brittle since the duration is decoupled from the StreamAudioSource. My use case also lets the user pause or play while the silence is playing, so I have to manage the timer on top of managing the player.
There is absolutely no behavioural difference between a native implementation of SilenceAudioSource and one based on StreamAudioSource. There is no decoupling, since you would create a subclass of StreamAudioSource and encapsulate everything within it, including a duration parameter. Pausing any audio source works the same way; there is nothing special about the pausing behaviour of any particular type of audio source. That is to say, pausing a native SilenceAudioSource would have exactly the same effect as pausing a subclass of StreamAudioSource that emits silence.
This thread is quite old, but still very relevant to me. I've been trying to wrap my head around how to avoid jitter when dealing with a long silent track, e.g. 30+ minutes. My use-case is quite different from the one described here, but nevertheless it requires having a way to set a silent audio source that loads immediately and takes a fixed amount of memory no matter the duration.
In case it helps, here's one way of implementing it, which works great for short silent tracks.
class SilenceAudioSourceIos extends StreamAudioSource {
late final Uint8List _header;
late final int _trackLength;
late final int _streamLength;
SilenceAudioSourceIos({required Duration duration, super.tag})
: _header = _createWavHeader(duration),
_trackLength = _calculateByteLength(duration) {
_streamLength = _trackLength + _header.length;
}
/// Creates a WAV file header.
static Uint8List _createWavHeader(Duration duration) {
int sampleRate = 44100;
int channels = 2;
int bitsPerSample = 16;
int subchunk2Size =
duration.inSeconds * sampleRate * channels * (bitsPerSample ~/ 8);
int chunkSize = 36 + subchunk2Size;
var header = Uint8List(44);
var writer = ByteData.sublistView(header);
// RIFF header
writer.setUint32(0, 0x46464952, Endian.little); // "RIFF"
writer.setUint32(4, chunkSize, Endian.little);
writer.setUint32(8, 0x45564157, Endian.little); // "WAVE"
// Subchunk1 (format)
writer.setUint32(12, 0x20746D66, Endian.little); // "fmt "
writer.setUint32(16, 16, Endian.little); // Subchunk1 size
writer.setUint16(20, 1, Endian.little); // PCM format
writer.setUint16(22, channels, Endian.little);
writer.setUint32(24, sampleRate, Endian.little);
writer.setUint32(
28, sampleRate * channels * (bitsPerSample ~/ 8), Endian.little);
writer.setUint16(32, channels * (bitsPerSample ~/ 8), Endian.little);
writer.setUint16(34, bitsPerSample, Endian.little);
// Subchunk2 (data)
writer.setUint32(36, 0x61746164, Endian.little); // "data"
writer.setUint32(40, subchunk2Size, Endian.little);
return header;
}
/// Calculates the byte-length of a silent track of [duration].
static int _calculateByteLength(Duration duration) {
int sampleRate = 44100;
int channels = 2;
int bitsPerSample = 16;
return duration.inSeconds * sampleRate * channels * (bitsPerSample ~/ 8);
}
@override
Future<StreamAudioResponse> request([int? start, int? end]) async {
  start ??= 0;
  end ??= _streamLength;
  // SparseList is a custom implementation of a List that avoids storing
  // all its data in memory.
  final bytes = SparseList<int>(end - start, 0);
  if (start < _header.length) {
    // Position 0 of the response corresponds to stream offset `start`, so
    // the overlapping part of the header is copied to the front of the
    // buffer (clamped to `end` in case the range ends inside the header).
    final headerEnd = end < _header.length ? end : _header.length;
    final chunk = _header.sublist(start, headerEnd);
    bytes.setRange(0, chunk.length, chunk);
  }
return StreamAudioResponse(
sourceLength: _streamLength,
contentLength: end - start,
offset: start,
stream: Stream.value(bytes),
contentType: 'audio/wav',
);
}
}
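The subtle part of request() is honouring byte-range requests when the range starts inside, or past, the WAV header: position 0 of the response buffer corresponds to stream offset start, so the surviving slice of the header must land at the front. A compact sketch of that range logic (in Python, for illustration; function and parameter names are my own, not part of the just_audio API):

```python
def slice_response(start, end, header, stream_length):
    """Bytes for a range request [start, end) over a stream consisting of
    `header` followed by zeros, `stream_length` bytes in total."""
    end = min(end, stream_length)
    body = bytearray(end - start)  # all zeros: the silent PCM payload
    if start < len(header):
        # Copy the part of the header that overlaps the requested range
        # to the FRONT of the buffer (offset 0 == stream offset `start`).
        chunk = header[start:min(end, len(header))]
        body[0:len(chunk)] = chunk
    return bytes(body)

slice_response(2, 10, b"ABCDEF", 100)  # b"CDEF" + four zero bytes
```

Copying the header to index start of the buffer instead (rather than index 0) would duplicate-shift it whenever the player issues a range request with a nonzero start, which is a plausible source of "unsupported type" errors.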
I fixed this by mimicking an HLS file. In case anyone would benefit from this I'll paste my code. However, bear in mind that I took a shortcut by storing the silent tracks in my assets. I only needed to save 10 files, for up to 10 seconds (e.g. 1s.ts, 2s.ts, ..., 10s.ts). These files are tiny.
High level flow:
- The proxy server handles .m3u8 and .ts requests.
- The desired duration is passed in a duration_sec query param.
- When an .m3u8 request comes in, the server generates the manifest based on duration_sec. It uses relative paths for each segment to ensure all other requests come back to the proxy server, and it appends the duration_sec query param to each segment (to avoid saving state).
- When a .ts request comes in, it returns one of the premade silent tracks stored in the app assets directory.

Proxy server
/// Proxy server that runs locally and listens on
/// [SilenceAudioProxyServer.address]:[SilenceAudioProxyServer.port] to handle
/// requests to stream silent tracks using HLS.
///
/// This server works together with [SilenceAudioSourceIosProxy].
class SilenceAudioProxyServer {
SilenceAudioProxyServer._internal();
factory SilenceAudioProxyServer() {
return _instance;
}
static final SilenceAudioProxyServer _instance =
SilenceAudioProxyServer._internal();
/// The address this proxy server will listen to.
static String address = InternetAddress.loopbackIPv4.address;
/// The port this proxy server will listen to.
static const int port = 478;
/// The duration of each segment in the m3u8 file.
static const int segmentDuration = 10;
HttpServer? _server;
bool _running = false;
/// Start the server if it is not already running.
Future<dynamic> ensureRunning() async {
if (_running) return;
return await start();
}
/// Starts the server.
Future<dynamic> start() async {
await stop();
_running = true;
_server = await HttpServer.bind(InternetAddress.loopbackIPv4, port);
_server!.listen((request) async {
if (request.method == 'GET') {
final file = request.uri.pathSegments.last;
final fileType = file.split('.').last;
final durationSec = int.tryParse(
request.uri.queryParameters['duration_sec'] ?? '',
);
if (durationSec == null) {
request.response.statusCode = 400;
request.response.write('Missing "duration_sec" query param');
request.response.close();
return;
}
switch (fileType) {
case 'm3u8':
_returnManifest(request, durationSec);
break;
case 'ts':
_returnSilentAudio(request, durationSec);
break;
default:
request.response.statusCode = 404;
request.response.write(
'$file was not found. Can only handle ".m3u8" and ".ts" file types',
);
request.response.close();
}
}
}, onDone: () {
_running = false;
}, onError: (Object e, StackTrace st) {
_running = false;
});
}
/// Stops the server
Future<dynamic> stop() async {
if (!_running) return;
_running = false;
return await _server?.close();
}
/// Returns an m3u8 manifest file of [durationSec].
void _returnManifest(HttpRequest request, int durationSec) {
int numSegments = (durationSec / segmentDuration).floor();
int lastSegmentDuration = durationSec % segmentDuration;
var manifest = '#EXTM3U\n';
manifest += '#EXT-X-VERSION:3\n';
manifest += '#EXT-X-TARGETDURATION:$segmentDuration\n';
manifest += '#EXT-X-MEDIA-SEQUENCE:0\n';
for (var i = 0; i < numSegments; i++) {
manifest += '#EXTINF:$segmentDuration,\n';
manifest += 'stream$i.ts?duration_sec=$segmentDuration\n';
}
if (lastSegmentDuration > 0) {
manifest += '#EXTINF:$lastSegmentDuration,\n';
manifest += 'stream$numSegments.ts?duration_sec=$lastSegmentDuration\n';
}
manifest += '#EXT-X-ENDLIST\n';
request.response
..headers.contentType = ContentType('audio', 'mpegurl')
..write(manifest)
..close();
}
/// Returns a silent track of up to 10 seconds.
Future<void> _returnSilentAudio(HttpRequest request, int durationSec) async {
final assetPath = 'assets/audio/${durationSec}s.ts';
try {
final data = await rootBundle.load(assetPath);
List<int> bytes = data.buffer.asUint8List();
request.response
..headers.contentType = ContentType('audio', 'mp3')
..add(bytes)
..close();
} catch (e) {
request.response
..statusCode = 404
..write('Audio segment not found')
..close();
}
}
}
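The manifest logic above is just integer division of the requested duration into fixed-size segments plus one remainder segment. A standalone sketch of the same computation (Python for illustration; the Dart version in the proxy server is authoritative):

```python
def silence_manifest(duration_sec, segment_duration=10):
    """HLS manifest splitting `duration_sec` of silence into fixed-size
    segments plus at most one shorter remainder segment."""
    full, last = divmod(duration_sec, segment_duration)
    lines = [
        "#EXTM3U",
        "#EXT-X-VERSION:3",
        f"#EXT-X-TARGETDURATION:{segment_duration}",
        "#EXT-X-MEDIA-SEQUENCE:0",
    ]
    for i in range(full):
        lines += [f"#EXTINF:{segment_duration},",
                  f"stream{i}.ts?duration_sec={segment_duration}"]
    if last:
        lines += [f"#EXTINF:{last},",
                  f"stream{full}.ts?duration_sec={last}"]
    lines.append("#EXT-X-ENDLIST")
    return "\n".join(lines) + "\n"
```

For example, 25 seconds becomes two 10-second segments plus one 5-second segment, which is why only ten small pre-made .ts assets (1s through 10s) are enough to cover any duration.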
Silence Audio Source for iOS
/// A locally-served HLS audio of a silent track of variable length.
///
/// Expects [SilenceAudioProxyServer] to be running.
class SilenceAudioSourceIosProxy extends HlsAudioSource {
SilenceAudioSourceIosProxy({
required Duration duration,
dynamic tag,
}) : super(
Uri.http(
'${SilenceAudioProxyServer.address}:${SilenceAudioProxyServer.port}',
'/manifest.m3u8',
{'duration_sec': duration.inSeconds.toString()},
),
duration: duration,
tag: tag,
) {
SilenceAudioProxyServer().ensureRunning();
}
}
If just_audio were to support this use-case in its already-running proxy server, this could have been as easy as just the SilenceAudioSourceIosProxy.
If you decide to use this solution, make sure that you close the proxy server when you dispose of your resources.
The SilenceAudioSourceIos snippet doesn't work for me; it says unsupported type. Can you share the dependency for SparseList, if possible?
Sure - but this is a very rough, half-baked implementation of a "sparse list" that I used while testing this solution. You may need to implement some of the methods I left untouched. Having said that, it should still work for this specific use-case. And again, remember that this solution is only good for short silent tracks. You can also tweak the sampleRate and channels to make it even faster, but only to a certain degree. For silent tracks of any length, I recommend my other solution, which requires a bit more lifting but is bulletproof.
With that, here's the half-baked SparseList:
import 'dart:math';
class SparseList<T> implements List<T> {
final int _length;
final Map<int, T> _values = {};
final T _defaultValue;
SparseList(this._length, this._defaultValue)
: first = _defaultValue,
last = _defaultValue;
@override
T operator [](int index) {
if (index >= length || index < 0) {
throw RangeError.index(index, this, 'index', null, length);
}
return _values[index] ?? _defaultValue;
}
@override
void operator []=(int index, T value) {
if (index >= length || index < 0) {
throw RangeError.index(index, this, 'index', null, length);
}
if (value == _defaultValue) {
_values.remove(index);
} else {
_values[index] = value;
}
}
@override
T first;
@override
T last;
@override
List<T> operator +(List<T> other) {
return SparseList(length + other.length, _defaultValue);
}
@override
void add(T value) {
throw UnsupportedError("This list is fixed in size.");
}
@override
void addAll(Iterable<T> iterable) {
throw UnsupportedError("This list is fixed in size.");
}
@override
bool any(bool Function(T element) test) {
return test(_defaultValue);
}
@override
Map<int, T> asMap() {
return <int, T>{}..addAll(_values);
}
@override
List<R> cast<R>() {
throw UnimplementedError();
}
@override
void clear() {
_values.clear();
}
@override
bool contains(Object? element) {
return element == _defaultValue || _values.values.contains(element);
}
@override
T elementAt(int index) {
return this[index];
}
@override
bool every(bool Function(T element) test) {
return _values.values.every(test) && test(_defaultValue);
}
@override
Iterable<T0> expand<T0>(Iterable<T0> Function(T element) toElements) {
throw UnsupportedError("This list is fixed in size.");
}
@override
T firstWhere(bool Function(T element) test, {T Function()? orElse}) {
return _values.values.firstWhere(test, orElse: () => _defaultValue);
}
@override
T0 fold<T0>(
T0 initialValue, T0 Function(T0 previousValue, T element) combine) {
throw UnsupportedError("This list is fixed in size.");
}
@override
Iterable<T> followedBy(Iterable<T> other) {
throw UnsupportedError("This list is fixed in size.");
}
@override
void forEach(void Function(T element) action) {
for (var i = 0; i < length; i++) {
action(this[i]);
}
}
@override
void insert(int index, T element) {
this[index] = element;
}
@override
void insertAll(int index, Iterable<T> iterable) {
if (index >= length || index < 0) {
throw RangeError.index(index, this, 'index', null, length);
}
var iterator = iterable.iterator;
// Advance the iterator before reading `current`, and stop once the
// iterable is exhausted.
for (var i = index; iterator.moveNext(); i++) {
  if (i >= length) {
    throw StateError('Too many elements in the iterable.');
  }
  this[i] = iterator.current;
}
}
@override
bool get isEmpty => false;
@override
bool get isNotEmpty => true;
@override
Iterator<T> get iterator => SparseListIterator(
_length,
_values,
_defaultValue,
);
@override
String join([String separator = ""]) =>
throw UnsupportedError('Not supported by sparse list');
@override
int indexOf(T element, [int start = 0]) {
if (_values.containsValue(element)) {
return _values.values.toList().indexOf(element, start);
} else if (element == _defaultValue) {
final keys = _values.keys.toList();
keys.sort();
return keys.last + 1;
}
return -1;
}
@override
int indexWhere(bool Function(T element) test, [int start = 0]) {
final index = _values.values.toList().indexWhere(test, start);
if (index == -1 && test(_defaultValue)) {
final keys = _values.keys.toList();
keys.sort();
return keys.last + 1;
}
return index;
}
@override
int lastIndexOf(T element, [int? start]) {
final index = _values.values.toList().lastIndexOf(element, start);
if (index == -1 && element == _defaultValue) {
final keys = _values.keys.toList();
keys.sort();
return keys.last + 1;
}
return index;
}
@override
int lastIndexWhere(bool Function(T element) test, [int? start]) {
final index = _values.values.toList().lastIndexWhere(test, start);
if (index == -1 && test(_defaultValue)) {
final keys = _values.keys.toList();
keys.sort();
return keys.last + 1;
}
return index;
}
@override
T lastWhere(bool Function(T element) test, {T Function()? orElse}) {
return _values.values.toList().lastWhere(test, orElse: () => _defaultValue);
}
@override
set length(int newLength) {
throw UnsupportedError('Not supported by sparse list');
}
@override
Iterable<T0> map<T0>(T0 Function(T e) toElement) {
// TODO: implement map
throw UnimplementedError();
}
@override
T reduce(T Function(T value, T element) combine) {
// TODO: implement reduce
throw UnimplementedError();
}
@override
bool remove(Object? value) {
// TODO: implement remove
throw UnimplementedError();
}
@override
T removeAt(int index) {
// TODO: implement removeAt
throw UnimplementedError();
}
@override
T removeLast() {
// TODO: implement removeLast
throw UnimplementedError();
}
@override
void removeRange(int start, int end) {
// TODO: implement removeRange
}
@override
void removeWhere(bool Function(T element) test) {
// TODO: implement removeWhere
}
@override
void replaceRange(int start, int end, Iterable<T> replacements) {
// TODO: implement replaceRange
}
@override
void retainWhere(bool Function(T element) test) {
// TODO: implement retainWhere
}
@override
// TODO: implement reversed
Iterable<T> get reversed => throw UnimplementedError();
@override
void setAll(int index, Iterable<T> iterable) {
// TODO: implement setAll
}
@override
void setRange(int start, int end, Iterable<T> iterable, [int skipCount = 0]) {
if (start < 0 || start > _length || end < start || end > _length) {
throw RangeError.range(start, end, _length);
}
var iterator = iterable.skip(skipCount).iterator;
for (var i = start; i < end; i++) {
if (!iterator.moveNext()) {
throw StateError('Not enough elements in the iterable.');
}
this[i] = iterator.current;
}
}
@override
void fillRange(int start, int end, [T? fillValue]) {
if (start < 0 || start > _length || end < start || end > _length) {
throw RangeError.range(start, end, _length);
}
for (var i = start; i < end; i++) {
this[i] = fillValue ?? _defaultValue;
}
}
@override
Iterable<T> getRange(int start, int end) {
throw UnsupportedError('Can\'t get range of sparse list');
}
@override
void shuffle([Random? random]) {
throw UnsupportedError('Not supported by sparse list');
}
@override
T get single => _defaultValue;
@override
T singleWhere(bool Function(T element) test, {T Function()? orElse}) {
// TODO: implement singleWhere
throw UnimplementedError();
}
@override
Iterable<T> skip(int count) {
final copy = SparseList<T>(_length - count, _defaultValue);
if (count < _values.length) {
copy.setRange(0, count, _values.entries.skip(count).map((e) => e.value));
}
return copy;
}
@override
Iterable<T> skipWhile(bool Function(T value) test) {
// TODO: implement skipWhile
throw UnimplementedError();
}
@override
void sort([int Function(T a, T b)? compare]) {
// TODO: implement sort
}
@override
List<T> sublist(int start, [int? end]) {
// TODO: implement sublist
throw UnimplementedError();
}
@override
Iterable<T> take(int count) {
final copy = SparseList<T>(count, _defaultValue);
copy.setRange(
0,
min(_values.length, count),
_values.entries.take(count).map((e) => e.value),
);
return copy;
}
@override
Iterable<T> takeWhile(bool Function(T value) test) {
// TODO: implement takeWhile
throw UnimplementedError();
}
@override
List<T> toList({bool growable = true}) {
// TODO: implement toList
throw UnimplementedError();
}
@override
Set<T> toSet() {
// TODO: implement toSet
throw UnimplementedError();
}
@override
Iterable<T> where(bool Function(T element) test) {
// TODO: implement where
throw UnimplementedError();
}
@override
Iterable<T0> whereType<T0>() {
// TODO: implement whereType
throw UnimplementedError();
}
@override
int get length => _length;
}
class SparseListIterator<T> implements Iterator<T> {
final int _length;
final Map<int, T> _values;
final T _defaultValue;
// Start before the first element: the Iterator contract requires
// `moveNext` to be called before `current` is read.
int position = -1;
SparseListIterator(this._length, this._values, this._defaultValue);
@override
get current {
if (_values.containsKey(position)) {
return _values[position]!;
} else {
return _defaultValue;
}
}
@override
bool moveNext() {
position++;
return position < _length;
}
}
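The core idea of the SparseList (independent of all the List-interface boilerplate) is simply: fixed logical length, a map of the few positions that differ from a default value, and memory proportional to the overrides rather than the length. A minimal sketch of just that idea (Python, hypothetical class name, for illustration only):

```python
class SparseBytes:
    """Fixed-length sequence that stores only positions differing from a
    default value; memory is O(number of overrides), not O(length)."""

    def __init__(self, length, default=0):
        self._length = length
        self._default = default
        self._overrides = {}  # position -> non-default value

    def __len__(self):
        return self._length

    def __getitem__(self, i):
        if not 0 <= i < self._length:
            raise IndexError(i)
        return self._overrides.get(i, self._default)

    def __setitem__(self, i, v):
        if not 0 <= i < self._length:
            raise IndexError(i)
        if v == self._default:
            self._overrides.pop(i, None)  # writing the default frees memory
        else:
            self._overrides[i] = v

s = SparseBytes(10_000_000)  # "10 MB" of zeros, near-zero actual memory
s[0] = 0x52                  # only explicit overrides are stored
```

For the silent-WAV use case, only the 44 header bytes are ever overridden, so even a multi-hour track stays tiny in memory; the cost moves to iteration, which is why this still struggles for very long tracks.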
My two cents on this issue. Not sure if it has drawbacks other than not being perfectly precise, but it seems to work for me. In debug mode, when I increased the precision, there was a crash, I suppose because the track finished while it was still starting. I didn't investigate further since I don't need less than 100 ms precision.
import 'package:just_audio/just_audio.dart';
class SilenceAudioSourceIOS {
static const Map<int, String> silenceFiles = {
/*1: 'assets/audio/silence/1-millisecond-of-silence.mp3',
2: 'assets/audio/silence/2-milliseconds-of-silence.mp3',
5: 'assets/audio/silence/5-milliseconds-of-silence.mp3',
10: 'assets/audio/silence/10-milliseconds-of-silence.mp3',
50: 'assets/audio/silence/50-milliseconds-of-silence.mp3',*/
100: 'assets/audio/silence/100-milliseconds-of-silence.mp3',
250: 'assets/audio/silence/250-milliseconds-of-silence.mp3',
500: 'assets/audio/silence/500-milliseconds-of-silence.mp3',
1000: 'assets/audio/silence/1-second-of-silence.mp3',
2000: 'assets/audio/silence/2-seconds-of-silence.mp3',
5000: 'assets/audio/silence/5-seconds-of-silence.mp3',
10000: 'assets/audio/silence/10-seconds-of-silence.mp3',
30000: 'assets/audio/silence/30-seconds-of-silence.mp3',
45000: 'assets/audio/silence/45-seconds-of-silence.mp3',
};
static List<AudioSource> getSilenceSources(int durationMs) {
List<AudioSource> sources = [];
int remainingDuration = durationMs;
for (int duration in silenceFiles.keys.toList().reversed) {
while (remainingDuration >= duration) {
sources.add(AudioSource.asset(silenceFiles[duration]!));
remainingDuration -= duration;
}
}
return sources;
}
}
And I use it like this
List<AudioSource> sources = [];
int i = 0;
int silenceDuration = 6500;
if(Platform.isAndroid) {
SilenceAudioSource waitSource = SilenceAudioSource(duration: Duration(milliseconds: silenceDuration));
sources.insert(i, waitSource);
i++;
} else {
List<AudioSource> silenceSources = SilenceAudioSourceIOS.getSilenceSources(silenceDuration);
sources.insertAll(i, silenceSources);
i += silenceSources.length;
}
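The decomposition in getSilenceSources is a greedy change-making pass over the available file lengths, largest first. A standalone sketch of that step (Python, for illustration; the durations mirror the asset map above):

```python
# Available silence-file lengths in milliseconds, largest first.
SILENCE_MS = [45000, 30000, 10000, 5000, 2000, 1000, 500, 250, 100]

def decompose(duration_ms):
    """Greedily break a duration into the available silence-file lengths.
    Any remainder smaller than the smallest file (100 ms) is dropped,
    which bounds the precision of this approach."""
    parts = []
    for d in SILENCE_MS:
        while duration_ms >= d:
            parts.append(d)
            duration_ms -= d
    return parts

decompose(6500)  # -> [5000, 1000, 500]
```

So a 6500 ms gap plays three consecutive assets; the worst-case error is just under 100 ms, which matches the author's observation that sub-100 ms precision isn't achievable this way.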
I have a tiny FLAC file containing a long silence that's clipped down to the desired duration:
class MySilenceAudioSource extends ClippingAudioSource {
MySilenceAudioSource({required Duration duration, super.tag})
: super(
// ffmpeg -f lavfi -i anullsrc=r=8000:cl=mono -t 3600 60min_silence.flac
child: AudioSource.asset("assets/60min_silence.flac"),
start: Duration.zero,
// need longer silence to support 60+ min
end: _minDuration(duration, const Duration(minutes: 60)),
);
static Duration _minDuration(Duration d1, Duration d2) {
return (d1 < d2) ? d1 : d2;
}
}
Is your feature request related to a problem? Please describe.
I need to insert pre-determined segments of silence into the audio stream. This is typically when using ConcatenatingAudioSource and I want gaps between the sources specified there. (Think of this as the opposite of gapless playback.)

Describe the solution you'd like
A new kind of AudioSource that lets me specify a silence duration. Since ExoPlayer supports SilenceMediaSource (https://exoplayer.dev/doc/reference/com/google/android/exoplayer2/source/SilenceMediaSource.html), this should be straightforward to wire up on Android. I don't know how we'd do this on iOS.

Describe alternatives you've considered
I considered two alternatives: