Open MuhammedSaygili opened 22 hours ago
Just a moment: I am going to look at your request.
Note: Playing from a Stream is really what you must do.
I am not familiar with your UDP server, but I think that if you receive your packets as Uint8List, there is no reason to convert them to integers and then convert those integers back to Uint8List for Flutter Sound.
There are two main sample formats for PCM audio: 16-bit integers (Int16) and 32-bit floats (Float32). Currently, Flutter Sound handles only Int16, but you are in luck if your server sends Int16.
Flutter Sound requires a Stream of Uint8List, simply as a way to handle binary data (two Uint8 bytes for each Int16 sample). I have the feeling that it is the same for your server.
You probably do not have to bother with big-endian versus little-endian. Nowadays almost everybody uses little-endian, so don't worry about that.
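To illustrate the point above, here is a small sketch (plain Dart, with a made-up four-byte packet) showing that a little-endian Uint8List can be viewed directly as Int16 samples, with no per-byte conversion loop:

```dart
import 'dart:typed_data';

void main() {
  // A hypothetical UDP packet: two little-endian Int16 samples.
  final packet = Uint8List.fromList([0x34, 0x12, 0xFF, 0x7F]);

  // View the raw bytes as 16-bit signed samples without copying.
  // asInt16List() uses the host byte order, which is little-endian
  // on all common Flutter targets.
  final samples = packet.buffer.asInt16List();
  print(samples); // [4660, 32767] on a little-endian host

  // Or, with explicit endianness:
  final bd = ByteData.sublistView(packet);
  print(bd.getInt16(0, Endian.little)); // 4660
}
```

In practice you would not even need this view for playback: the Uint8List from the socket can be fed to Flutter Sound as-is.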
First of all, thank you very much. To amplify the sound, I converted it to 16-bit integers and then scaled it back down, though I am fully aware that this is completely ridiculous. As you mentioned, my server sends 16-bit audio as two 8-bit bytes (LSB and MSB). However, I am unable to manage the streaming process correctly.
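As an aside: if the goal was only amplification, a gain can be applied in place on the 16-bit samples without any back-and-forth conversion. A sketch, assuming little-endian Int16 PCM (the `amplify` helper and the gain factor are illustrative, not part of any library):

```dart
import 'dart:typed_data';

/// Applies a gain to 16-bit PCM samples in place,
/// clamping to the Int16 range to avoid wrap-around distortion.
Uint8List amplify(Uint8List pcmBytes, double gain) {
  final samples = pcmBytes.buffer
      .asInt16List(pcmBytes.offsetInBytes, pcmBytes.lengthInBytes ~/ 2);
  for (var i = 0; i < samples.length; i++) {
    var v = (samples[i] * gain).round();
    if (v > 32767) v = 32767;
    if (v < -32768) v = -32768;
    samples[i] = v;
  }
  return pcmBytes;
}

void main() {
  // One sample, value 1000, little-endian (0x03E8).
  final pcm = Uint8List.fromList([0xE8, 0x03]);
  final out = amplify(pcm, 2.0);
  print(out.buffer.asInt16List()); // [2000]
}
```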
Thank you very much. I also tried the exact opposite: I wanted to send audio captured from the microphone in real time over UDP, but it didn't work with the example code either. I even added a runApp to the recordToStream code and set the necessary permissions through Info.plist and the Podfile, but I can't record; the recording doesn't start. I tried this with recordToStream, but there's an error: the toStream parameter is of type Food, while my StreamController is of type Uint8List, which causes a type error.
Future<void> record() async {
  assert(_mRecorderIsInited && _mPlayer!.isStopped);
  var sink = await createFile();
  var recordingDataController = StreamController<Uint8List>();
  _mRecordingDataSubscription =
      recordingDataController.stream.listen((buffer) {
    sink.add(buffer);
  });
  await _mRecorder!.startRecorder(
    toStream: recordingDataController.sink,
    codec: Codec.pcm16,
    numChannels: 2,
    sampleRate: 44100,
    bufferSize: 8192,
  );
  setState(() {});
}
I'm trying to solve it this way, but I'm not achieving any results:
Future<void> record() async {
  assert(_mRecorderIsInited && _mPlayer!.isStopped);
  var sink = await createFile();
  var recordingDataController = StreamController<Food>();
  _mRecordingDataSubscription =
      recordingDataController.stream.listen((food) {
    if (food is FoodData) {
      sink.add(food.data!);
      // If you want, you can process the data in real time here
    }
  });
  await _mRecorder!.startRecorder(
    toStream: recordingDataController.sink,
    codec: Codec.pcm16,
    numChannels: 2,
    sampleRate: sampleRate,
    bitRate: 1411200, // 44100 * 16 bits * 2 channels
  );
  setState(() {});
}
What exactly is your issue? A problem with CPU overload and latency?
I think it will be hard to solve, because at 10,000 Hz you receive a LOT of data. If you want to process it byte by byte, that is a lot of work.
You could do that inside the iOS/Android code itself, but it would be difficult to achieve. I don't think you should consider that solution.
> What exactly is your issue? A problem with CPU overload and latency?
> I think it will be hard to solve, because at 10,000 Hz you receive a LOT of data. If you want to process it byte by byte, that is a lot of work.
> You could do that inside the iOS/Android code itself, but it would be difficult to achieve. I don't think you should consider that solution.
My main problem isn't the delay itself; the delay is very short. The code I provided essentially operates without streaming, and I believe there's a delay because I stop and restart playback each time. I haven't been able to convert the packets received over UDP into sound using a stream, which is the solution I actually need.
OK. I am going to look more closely.
Also, the sound comes in as a single channel.
Your code is incorrect: it's impossible to open a Player for each packet. You definitely need to open a player from a Stream and just feed the stream once you have handled your packet.
Recently I updated Flutter Sound to handle stereo with streams. But you will not be able to have more than 2 channels on Android
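A minimal sketch of that pattern with the flutter_sound 9.x API (the player is opened once, then every packet is only fed to the sink; `onUdpPacket` is a hypothetical callback name, and the 10000 Hz mono settings match the audio described in this thread):

```dart
import 'dart:typed_data';
import 'package:flutter_sound/flutter_sound.dart';

final _player = FlutterSoundPlayer();

/// Open the player ONCE, from a stream, before any packet arrives.
Future<void> openStreamPlayer() async {
  await _player.openPlayer();
  await _player.startPlayerFromStream(
    codec: Codec.pcm16,
    numChannels: 1,
    sampleRate: 10000,
  );
}

/// Called for every UDP packet: just feed the bytes.
/// Never stop and reopen the player here.
void onUdpPacket(Uint8List data) {
  _player.foodSink!.add(FoodData(data));
}
```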
> Your code is incorrect: it's impossible to open a Player for each packet. You definitely need to open a player from a Stream and just feed the stream once you have handled your packet.
I tried this, but I couldn't get any sound from the device. I guess I didn't do it right, and I deleted the code. I'll try again.
> Recently I updated Flutter Sound to handle stereo with streams. But you will not be able to have more than 2 channels on Android.
This is not my problem at all
Is it possible for you to share resources that could help me? My main profession is electronics and embedded software, so I'm having a hard time.
You can look at the example "Play From Stream". It is very simple, just a few lines of code. If you have problems playing your stream, I will give you some help to debug. I am confident that you will succeed, because you are already able to get some sound when you play from a buffer.
There are two variants of the example.
You are concerned by the "without back pressure" case, because you probably do not control the UDP server: you don't have a protocol to tell your server when you are ready to play or when your buffers are almost empty.
It will be simpler for you. Internally, the Dart stream will buffer the data if the server sends it faster than you can play it. If the server doesn't send data fast enough, you will have some gaps. You can't do anything if your server is too slow, of course.
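For the UDP side of the "without back pressure" case, a minimal sketch in plain Dart using `dart:io`'s `RawDatagramSocket` (the port number is just an example, and `feed` stands in for whatever hands the bytes to the player):

```dart
import 'dart:io';
import 'dart:typed_data';

/// Binds a UDP socket and forwards each datagram's bytes to [feed].
/// The Dart stream machinery downstream does the buffering for us.
Future<RawDatagramSocket> listenUdp(void Function(Uint8List) feed) async {
  final socket = await RawDatagramSocket.bind(InternetAddress.anyIPv4, 12345);
  socket.listen((event) {
    if (event == RawSocketEvent.read) {
      final datagram = socket.receive();
      if (datagram != null) {
        // e.g. feed = (d) => _mPlayer.foodSink!.add(FoodData(d));
        feed(datagram.data);
      }
    }
  });
  return socket;
}
```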
If the server is slow or too fast, I can adjust for it with the sampling rate. Also, the server is actually electronic hardware and sends sound in real time by doing analog sampling. The sending rate is stable and always the same. It can also signal when it will start and when it will end.
If you adjust the sample rate, you will alter the pitch. I suggest that at first you do not worry about back-pressure; you can look at this problem later.
I'm asking to confirm: if I understood correctly, you recommend using "without back pressure" because "with back pressure" is suitable for continuous streams and would cause gaps for streams coming from UDP, right?
Back pressure is when you have a protocol to tell the server that your buffers are almost empty and you want more data. In that case, the server is completely driven by your app.
There are several ways to solve the problem of synchronization between the server and the app. If your audio sessions are short (a dozen seconds) and it is OK for your app, you can, for example, wait a short time (100 ms, say) before starting playback. The audio data will be buffered by your stream during those 100 ms, and you will be sure to have enough data without running short on your buffers.
You know your app better than me, of course.
Synchronization between two machines needs a protocol. Synchronization is necessary if your sessions are long and you don't want any latency. If your server is too fast, after some time you risk having more and more data waiting in your buffers, and the latency will grow. If your server is too slow, you risk running short on buffers and having nothing to play from time to time.
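The 100 ms pre-buffering idea above can be sketched like this (a sketch only: `onFirstPacket`, `onUdpPacket`, and the commented player call are assumed names, not library API):

```dart
import 'dart:async';
import 'dart:typed_data';

final _controller = StreamController<Uint8List>();
StreamSubscription<Uint8List>? _playbackSub;

/// Call this when the first UDP packet arrives: packets keep
/// accumulating in the stream, and draining starts only after
/// a short pre-buffering delay (100 ms, as suggested above).
void onFirstPacket() {
  Future.delayed(const Duration(milliseconds: 100), () {
    _playbackSub = _controller.stream.listen((chunk) {
      // Feed the player here,
      // e.g. _player.foodSink!.add(FoodData(chunk));
    });
  });
}

/// Call this for every packet, including the first one.
void onUdpPacket(Uint8List data) => _controller.add(data);
```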
It's actually a very sound approach, but synchronization with the server is somewhat challenging. However, I managed to resolve the issue using the "without back pressure" example. There's only a minor problem, which might be somewhat related to the audio and could also be caused by the server. Thank you very much.
Additionally, I'm experiencing problems with capturing audio from the microphone via a stream. There's an error in the recordToStream example that I mentioned above. How should I proceed regarding this?
Yes, I am going to see that. Probably something very simple to fix. Just a moment...
> I tried this with recordToStream, but there's an error: the toStream parameter is of type Food, but StreamController is of type Uint8List, which causes an error.
This is weird: your code is correct. The stream really is Uint8List in recent versions of Flutter Sound. In the past it was of type Food, but not anymore. I guess that you use a modern version of Flutter Sound, so I don't understand why you have a compilation problem.
By the way, the version I used is ^9.2.13.
Your version is too old. Why do you use such an old version?
Actually, I was using a newer version, but to ensure it worked more accurately, I based it on the pubspec file from the old example folder. I just realized now that the version is outdated; it had slipped my mind.
I updated my version and will be switching to microphone streaming shortly, but I noticed that there are no issues with the UDP server or the audio data itself. I hope that my problem becomes evident in this MP3 file. When I gather the same audio data from the terminal and convert it to a WAV file using Python, there are no problems with the audio.
I added UDP reception to the "play stream without back pressure" example, and the received UDP data is being added this way:
feedHim(datagram.data);
// We must not call stopPlayer() directly here:
// if (_mPlayer != null) {
//   await stopPlayer();
// }
_mPlayer.foodSink!.add(FoodEvent(() async {
  // await _mPlayer.stopPlayer();
  // FlutterSoundPlayer().logger.i("MARKER!");
  setState(() {});
}));
Everything OK now?
Hi @Larpoux, firstly, I am somewhat new and inexperienced with Flutter. I want to play audio data received over UDP, but I haven't succeeded yet. The incoming audio is 10 kHz and 16-bit. Initially, since streaming didn't come to mind, I tried playing it in parts as a buffer. Later, I attempted streaming but was unable to make it work. The audio packets arriving over UDP are coming as 200 200; because they are 8-bit, I believe they represent 400 400. In the code example below, I managed to convert the incoming packets to sound, but there are many interruptions and a slight delay. Could you help me with this?