Closed oeyaugust closed 8 months ago
I will try to reproduce the issue, though I'm not sure whether I have a stereo mic currently available.
Your chosen parameters are definitely valid. The channel assignment ostensibly works as well given your output. If there is an error on your part, I'd expect it to be in the data conversion. At first glance it looks valid, though I'll have to take a look at the specs again first.
Just to be sure: audioData is a raw list of samples, right?
I truly appreciate your prompt response. Yes, audioData is the list of raw samples. I look forward to hearing from you again; thank you.
ok, in that case the plugin just passes through the raw data from the native audio recorder. Which platform are you working on?
I work on Android, specifically a Samsung A235N.
on another note, you can try the toSampleStream transformer included with the latest version. This is my attempt to provide a generic transformer to spare you the pain of custom conversions.
Applying it to your stream gives you a dynamic stream which you can cast to a stream of pairs of nums (i.e., Stream<(num, num)>)
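For readers wondering about the underlying byte layout: the stereo branch of the transformer presumably deinterleaves each raw buffer into (left, right) pairs. Here is a minimal Python sketch of that conversion, assuming interleaved 16-bit little-endian signed PCM (the usual layout for Android's ENCODING_PCM_16BIT); the function name `deinterleave_pcm16` is mine, not part of the plugin:

```python
import struct

def deinterleave_pcm16(buf: bytes) -> list:
    """Split an interleaved stereo PCM16 buffer into (left, right) pairs.

    Assumed frame layout: 4 bytes per stereo frame,
    [L_lo, L_hi, R_lo, R_hi], little-endian signed 16-bit.
    """
    assert len(buf) % 4 == 0, "stereo PCM16 frames are 4 bytes each"
    # '<' = little-endian, 'h' = signed 16-bit integer
    samples = struct.unpack('<%dh' % (len(buf) // 2), buf)
    # Even indices are one channel, odd indices the other
    return list(zip(samples[0::2], samples[1::2]))

# Two frames: left samples 256 and -1, right samples 512 and 2
buf = struct.pack('<4h', 256, 512, -1, 2)
print(deinterleave_pcm16(buf))  # [(256, 512), (-1, 2)]
```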
Thank you for your advice. I am aware of this one:
```dart
static StreamTransformer<Uint8List, dynamic> get toSampleStream =>
    (__channelConfig == ChannelConfig.CHANNEL_IN_MONO)
        ? StreamTransformer.fromHandlers(handleData: _expandUint8ListMono)
        : StreamTransformer.fromHandlers(handleData: _expandUint8ListStereo);
```
I am assuming the sample is a tuple for stereo (left, right). So I tried to implement it as follows:
```dart
listener = stream!.transform(MicStream.toSampleStream).listen(
  (dynamic sample) {
    if (sample is! List) return;
    final double leftSample = sample[0].toDouble();
    final double rightSample = sample[1].toDouble();
    leftChannelSamples.add(leftSample);
    rightChannelSamples.add(rightSample);
  },
);
```
But it did not help. I think I need to retrace my code again; I got lost somewhere.
yes, a tuple. Which element is which mic depends on the system. Do you at least get the same values as before? If so, at least your algorithm is correct
Hello again. I hope you are happy to continue our discussion. I tried your suggestion using the toSampleStream transformer and found that one channel produces a sine wave with about 48 samples per cycle, which is consistent with a 1000 Hz tone at a 48000 Hz sample rate. The other channel, however, shows only small random values.
I am pretty sure that both microphones respond to input changes. When I blocked the top microphone hole, near the ear, I could see a signal drop on the corresponding input line. I observed a similar drop when I blocked the bottom microphone hole, near the mouth. But I will try my code on other Android smartphone models, just to be sure.
By the way, it would be very helpful if you could share a short code snippet to check a few samples from the left and right audio inputs. Simple text output to the console using debugPrint() is sufficient; no graphics or other fancy UI are necessary. Thank you in advance.
I'm not sure I got everything right.
When you block the top microphone, you see a drop in the previously sinusoidal wave, right? What exactly do you see when you block the lower one? Is there a visible change in the random values, or does it affect the sine wave as well?
For your last part, what does "check" mean? Do you want an example of how to work with the dynamic stream from the transformer? Or do you want a model to check whether the audio looks correct?
That aside, I should find some time next weekend to give it a look myself. Maybe I can also find the time to update the example app to display stereo input instead of just one curve.
Using the fl_chart package, I plot the left and right microphone inputs simultaneously. One plot shows a sine wave; the other is random-valued. When I block the top microphone, the sine wave plot shrinks, and when I block the bottom microphone, the random plot shrinks.
What I meant by "check" is to find out the values of a few samples from the left and right channels, for example:
```dart
Stream<Uint8List> microphoneStream = MicStream.microphone(
  audioSource: AudioSource.MIC,
  sampleRate: 48000,
  channelConfig: ChannelConfig.CHANNEL_IN_STEREO,
  audioFormat: AudioFormat.ENCODING_PCM_16BIT,
);

late StreamSubscription listener;
listener = microphoneStream.transform(MicStream.toSampleStream).listen(
  (dynamic sample) {
    debugPrint('Stream transform value: $sample');
  },
);
```
I think a short code snippet that prints a few data samples is already good. I can use it to tell whether the audio inputs are correct or wrong. Still, if you could update the example app to display stereo input, that would be great.
I look forward to hearing from you again. Thank you so much.
PS: I ran the example above. Using the stream transformer, the returned value is a tuple, i.e., the variable sample holds two numerical values. However, I am not sure whether the code in the example above is written correctly.
you should be able to do something like

```dart
.listen((dynamic sample)
  debugPrint('top: {sample.$1}, bottom: {sample.$2}')
);
```
I got a syntax error using

```dart
.listen((dynamic sample)
  debugPrint('top: {sample.$1}, bottom: {sample.$2}')
);
```
but, it went well with
.listen((dynamic sample) {
debugPrint('check it: $sample');
}
The console output looks like this (not exposed to the tonal sound):
```
I/flutter (24750): check it: (-5888, -257)
I/flutter (24750): check it: (-8705, -1537)
I/flutter (24750): check it: (-2049, 3327)
I/flutter (24750): check it: (256, -512)
I/flutter (24750): check it: (5887, -3584)
I/flutter (24750): check it: (2559, 4608)
I/flutter (24750): check it: (7424, 0)
I/flutter (24750): check it: (11776, -2816)
I/flutter (24750): check it: (8703, 1024)
I/flutter (24750): check it: (15616, -512)
```
I tried the code on another Android smartphone, a Samsung S21 model. The result was good: I can see two sine waves in the plots. So the issue is related to the hardware. I am not sure whether one microphone on the other model is flawed, or whether there is a register flag in the device that controls the signal routing for each microphone. In any case, the code is working. Thank you for your support.
Here are two segments of stream output:

```
I/flutter (25443): Right Channel Samples: [-209, -58, 83, 259, 406, 536, 660, 796, 942, 1043, 1112, 1192, 1256, 1299, 1321, 1298, 1283, 1276, 1229, 1125, 1036, 935, 851, 733, 579, 434, 289, 175, 45, -123, -269, -370, -467, -571, -682, -761, -790, -814, -824, -833, -831, -779, -718, -596, -548, -459, -311, -173, -62, 75, 215, 387, 526, 660, 777, 913, 1022, 1127, 1233, 1268, 1317, 1360, 1388, 1352]
I/flutter (25443): Left Channel Samples: [-592, -481, -373, -234, -107, 13, 139, 258, 352, 456, 554, 639, 679, 727, 779, 774, 782, 769, 731, 676, 600, 551, 431, 337, 227, 108, -14, -139, -289, -422, -537, -666, -779, -879, -972, -1054, -1123, -1164, -1214, -1219, -1232, -1232, -1187, -1121, -1101, -1017, -928, -836, -695, -586, -475, -348, -218, -88, 43, 121, 257, 371, 465, 545, 597, 621, 665, 680]
```
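As a sanity check on the "about 48 samples per cycle" observation earlier in the thread, the period of a sinusoidal channel can be estimated from its rising zero crossings. A rough Python sketch applied to the Right Channel segment above, assuming the 48000 Hz sample rate from the recording parameters:

```python
# Right-channel segment posted above (Samsung S21, 1 kHz tone)
right = [-209, -58, 83, 259, 406, 536, 660, 796, 942, 1043, 1112, 1192,
         1256, 1299, 1321, 1298, 1283, 1276, 1229, 1125, 1036, 935, 851,
         733, 579, 434, 289, 175, 45, -123, -269, -370, -467, -571, -682,
         -761, -790, -814, -824, -833, -831, -779, -718, -596, -548, -459,
         -311, -173, -62, 75, 215, 387, 526, 660, 777, 913, 1022, 1127,
         1233, 1268, 1317, 1360, 1388, 1352]

# Indices where the signal crosses zero going upward
crossings = [i for i in range(1, len(right)) if right[i - 1] < 0 <= right[i]]
period = crossings[1] - crossings[0]   # samples per cycle
freq = 48000 / period                  # assuming a 48 kHz sample rate
print(period, round(freq))             # 47 samples/cycle -> ~1021 Hz
```

One full cycle in roughly 47-48 samples at 48 kHz corresponds to a tone close to 1 kHz, which matches the test signal.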
I would like to use mic_stream to capture stereo audio input. I used the following parameters:
and the following data conversion:
and the following channel assignment:
I ran the code and exposed the microphones to a 1 kHz tonal sound. I got a correct signal from rightChannelSamples and a wrong signal from leftChannelSamples. Shown below is an example data set consisting of two typical segments:

```
Left: [12, 12, 11, 12, -9, 14, 11, -25, 7, 9, -14, 10, -5, -17, 24, -7, 4, 13, -15, 32, 7, -15, 18, 22, -7, 24, 18, 7, 32, 15, 11, 33]
Right: [2268, 2234, 2741, 2923, 2830, 3189, 3222, 3256, 3133, 3061, 3037, 2752, 2451, 2226, 1844, 1438, 1149, 725, 256, -76, -510, -925, -1305, -1486, -2008, -2295, -2300, -2613, -2849, -2717, -2854, -3052]
```
For your information, I checked that both microphones are working well. I also tried other data conversions, for example using Int8List and assigning leftChannelSamples as byte1*256 + byte2 and rightChannelSamples as byte3*256 + byte4, but the result persisted. Please help me understand the problem: whether the parameters are wrong, the data conversion is wrong, or the channel assignment is wrong. It would be very helpful if you could share a short working example. Thank you very much.
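A quick way to quantify the asymmetry between the two segments above is to compare per-channel RMS amplitude. A small Python sketch using the posted data:

```python
left = [12, 12, 11, 12, -9, 14, 11, -25, 7, 9, -14, 10, -5, -17, 24, -7,
        4, 13, -15, 32, 7, -15, 18, 22, -7, 24, 18, 7, 32, 15, 11, 33]
right = [2268, 2234, 2741, 2923, 2830, 3189, 3222, 3256, 3133, 3061, 3037,
         2752, 2451, 2226, 1844, 1438, 1149, 725, 256, -76, -510, -925,
         -1305, -1486, -2008, -2295, -2300, -2613, -2849, -2717, -2854, -3052]

def rms(samples):
    """Root-mean-square amplitude of a sample list."""
    return (sum(s * s for s in samples) / len(samples)) ** 0.5

print(rms(left), rms(right))
# The left channel is roughly two orders of magnitude quieter than the
# right, i.e. it carries essentially no tone rather than a distorted one.
```

That pattern (near-silence rather than garbage) would typically point at channel routing or hardware rather than at the byte conversion, since a conversion error tends to distort both channels alike.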