Closed: ggfan closed this issue 6 years ago.
Yes. I want to be able to change the delay. I tried that repo https://github.com/googlesamples/android-audio-high-performance but it doesn't compile.
Which one did you try: the Oboe one or the AAudio one?
Hello alexd555,
By "change delay" do you mean that you want to create an echo effect? Or do you want to decrease the audio latency?
Phil Burk
Yes. I want to increase or decrease the latency. I also want to create an echo effect.
The way that you decrease latency depends on the API that you are using. We recommend using Oboe. What are you using?
Creating an echo effect is not Android-specific. You can find lots of references online. In essence, you create an array of samples and do something like this:
output = delayLine[cursor];
delayLine[cursor] = input;
cursor++;
if (cursor >= SIZE_DELAY_LINE) cursor = 0;
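For illustration, a rough Java version of that loop, assuming 16-bit mono samples; the delay length and the 0.5 echo level are placeholder values, not from this thread:

// Simple delay-line echo: read the oldest sample, store the newest, advance the cursor.
public class EchoEffect {
    private final short[] delayLine;
    private int cursor = 0;

    public EchoEffect(int sampleRate, int delayMs) {
        // e.g. delayMs = 100 gives a 100 ms echo at the given sample rate
        delayLine = new short[sampleRate * delayMs / 1000];
    }

    public short process(short input) {
        short delayed = delayLine[cursor];
        delayLine[cursor] = input;
        cursor++;
        if (cursor >= delayLine.length) cursor = 0;
        // Mix the dry input with the delayed signal; 0.5 is an arbitrary echo level.
        return (short) (input + 0.5 * delayed);
    }
}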
Phil Burk
What is Oboe? If you mean https://github.com/googlesamples/android-audio-high-performance then I get an error: Error while executing process E:\Android\sdk\cmake\3.6.4111459\bin\cmake.exe with arguments
Oboe is a C++ wrapper for AAudio and OpenSL ES. It is recommended if you want to write audio code in native C++.
https://github.com/google/oboe
If you want to write audio code in Java then we recommend using the AudioTrack and AudioRecord Java classes.
I have Java code, but I get 150 ms latency on a Nexus 4 or 5 and I need below 100 ms. The problem with the Java code is that the minimum buffer size depends on the sample rate. If I change the buffer to 96, 128, 256, or 512 I get the same result:
private static final int SAMPLE_RATE = 8000;
private static final int BUF_SIZE = 128;
short[] audioBuffer = new short[BUF_SIZE];
AudioRecord record = new AudioRecord(MediaRecorder.AudioSource.MIC,
SAMPLE_RATE,
AudioFormat.CHANNEL_IN_MONO,
AudioFormat.ENCODING_PCM_16BIT,
BUF_SIZE);
record.startRecording();
long shortsRead = 0;
audioTrack = new AudioTrack(
AudioManager.STREAM_MUSIC,
SAMPLE_RATE,
AudioFormat.CHANNEL_OUT_MONO,
AudioFormat.ENCODING_PCM_16BIT,
BUF_SIZE,
AudioTrack.MODE_STREAM);
audioTrack.play();
while (mShouldContinue) {
int numberOfShort = record.read(audioBuffer, 0, audioBuffer.length);
if (numberOfShort > 0) {
audioTrack.write(audioBuffer, 0, numberOfShort);
}
}
audioTrack.stop();
audioTrack.release();
record.stop();
record.release();
Hello Alex,
To minimize latency, you want to use the device's native sample rate and burst size:
String text = audioManager.getProperty(AudioManager.PROPERTY_OUTPUT_FRAMES_PER_BUFFER);
int framesPerBurst = Integer.parseInt(text);
text = audioManager.getProperty(AudioManager.PROPERTY_OUTPUT_SAMPLE_RATE);
int sampleRate = Integer.parseInt(text);
The framesPerBurst is the size you should use for your audioBuffer length. The buffer that you allocate for the AudioRecord and AudioTrack should be bigger, and a multiple of this burst size.
And choose LOW_LATENCY mode. https://developer.android.com/reference/android/media/AudioTrack.html#PERFORMANCE_MODE_LOW_LATENCY
You should also fully drain the AudioRecord for the first second when doing the echo. Otherwise you end up with extra data left in the AudioRecord, which adds latency. After that the record and playback buffers will be in sync and you can just use the loop you have now.
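A sketch of one way to do that draining, reusing the variable names from the loop above (sampleRate is assumed to hold the stream's sample rate):

// For roughly the first second, read from the AudioRecord but discard the data
// instead of writing it to the AudioTrack. This empties any backlog in the
// record buffer so the echo loop starts with fresh input.
long framesDiscarded = 0;
while (framesDiscarded < sampleRate) {   // about one second of frames
    int n = record.read(audioBuffer, 0, audioBuffer.length);
    if (n > 0) framesDiscarded += n;
}
// After this, run the normal read/write loop as before.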
You can also tune the output buffer latency if you are on N or later.
https://developer.android.com/reference/android/media/AudioTrack.html#setBufferSizeInFrames(int)
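Putting those pieces together, a rough sketch (API 26+; `context` is assumed to be a valid Context, and the buffer sizes are only examples):

AudioManager am = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
int framesPerBurst = Integer.parseInt(
        am.getProperty(AudioManager.PROPERTY_OUTPUT_FRAMES_PER_BUFFER));
int sampleRate = Integer.parseInt(
        am.getProperty(AudioManager.PROPERTY_OUTPUT_SAMPLE_RATE));

AudioTrack track = new AudioTrack.Builder()
        .setAudioFormat(new AudioFormat.Builder()
                .setSampleRate(sampleRate)
                .setEncoding(AudioFormat.ENCODING_PCM_16BIT)
                .setChannelMask(AudioFormat.CHANNEL_OUT_MONO)
                .build())
        // Allocate a few bursts of 16-bit mono (2 bytes per frame).
        .setBufferSizeInBytes(framesPerBurst * 2 * 4)
        .setPerformanceMode(AudioTrack.PERFORMANCE_MODE_LOW_LATENCY)
        .setTransferMode(AudioTrack.MODE_STREAM)
        .build();

// On N and later the effective buffer can be trimmed further at run time.
track.setBufferSizeInFrames(framesPerBurst * 2);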
Phil Burk
Thank you. Could you give a more specific example? I want to have control over latency between 50-150 ms. Also, this code does not compile: audioManager.getProperty(AudioManager.PROPERTY_OUTPUT_FRAMES_PER_BUFFER); How can I create audioManager?
Oops. Do this first:
AudioManager audioManager = (AudioManager) getSystemService(Context.AUDIO_SERVICE);
Could you give a more specific example? I want to have control over latency between 50-150 ms.
You don't have direct control over latency. It will vary from device to device. Your best bet is to reduce latency to the minimum and then add a delay effect if you want an echo.
You can estimate the actual latency by calling getTimestamp() and comparing the time that frames are presented to the time that you write them.
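A rough sketch of that estimate, assuming `audioTrack` is playing and `framesWritten` is a counter of the frames passed to write() so far:

AudioTimestamp ts = new AudioTimestamp();
if (audioTrack.getTimestamp(ts)) {
    // Frames written by the app but not yet presented by the device.
    long framesPending = framesWritten - ts.framePosition;
    double latencyMs = 1000.0 * framesPending / audioTrack.getSampleRate();
    Log.i("Latency", "estimated output latency: " + latencyMs + " ms");
}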
We can give more examples during the week when folks are back to work.
Phil Burk
Could you explain the meaning of framesPerBurst? Do you mean that a buffer size of 240 samples can be better than 96 samples? How can I run "AudioManager audioManager = (AudioManager) getSystemService(Context.AUDIO_SERVICE)" from a class (not MainActivity)? Or pass this to another class from MainActivity? What do you mean by "You should also fully drain the AudioRecord for the first second when doing the echo"? Can you give an example?
Hi, could you answer my questions? Thanks, Alex
Google is on Holiday Dec 25 & 26, and Dec 29 and Jan 1.
@alexd555 Phil is explaining how to get the device's native sample rate and buffer size. You could also look at this code: https://github.com/googlesamples/android-ndk/blob/master/audio-echo/app/src/main/java/com/google/sample/echo/MainActivity.java#L138 In recent OS versions there is an audio fast path when your application uses those "native" parameters: there is no resampling inside the audio framework or audio driver, and no buffer-management overhead when passing audio samples to the lower layers if your audio buffer size is the same as framesPerBurst (DMA transfers happen in bursts).
For Oboe, your link is correct; the sample is:
https://github.com/googlesamples/android-audio-high-performance/tree/master/oboe
The youtube stream is here.
The Oboe homepage is on GitHub (source and docs); I believe the Oboe sample automatically downloads and builds the Oboe lib.
@alexd555 To build the Oboe sample, you need to make a small modification in CMakeLists.txt:
1) clone Oboe repo to somewhere on your system first
2) tell CMakeLists.txt where your oboe directory is with:
set (OBOE_DIR to-your-local-oboe-repo-directory)
Then it should build. We should probably add this to README.md and add scripts to auto-clone the Oboe repo if it is not in the expected directory. I will sync up with Don Turner. Thanks for trying it out!
Hi Alex,
I strongly suggest reading this guide: https://developer.android.com/ndk/guides/audio/audio-latency.html
Also check out the hello-oboe sample: https://github.com/googlesamples/android-audio-high-performance/tree/master/oboe
Lastly, if you want the lowest possible latency you'll need to write your code in C++. Java latency will be higher on all devices by a minimum of ~20ms.
Specific answers below:
Could you explain the meaning of framesPerBurst?
framesPerBurst is the number of audio frames which are read by the audio device during its read operation. This read operation happens every few milliseconds.
For example, if the framesPerBurst was 192 frames and the audio device's sample rate was 48000 samples per second the audio device would perform a read operation every 4ms because 192/48000 = 0.004 seconds.
The same term applies to the input side as well, except that there the audio device supplies frames of audio data to the app (e.g. from the phone's built-in microphone).
framesPerBurst typically represents the lowest possible latency for that audio device, since it cannot read or write in any smaller burst sizes. Common values for framesPerBurst are: 96, 192 and 240.
Do you mean that a buffer size of 240 samples can be better than 96 samples?
Each audio device has a fixed framesPerBurst size. You should supply data (typically an array of floats or 16-bit integers) of this specific size. https://developer.android.com/ndk/guides/audio/audio-latency.html#output-latency
How can I run "AudioManager audioManager = (AudioManager) getSystemService(Context.AUDIO_SERVICE)" from a class (not MainActivity)? Or pass this to another class from MainActivity?
getSystemService is a method of Context. MainActivity is a subclass of Context. If you wish to call that method from another Java class you will need to pass this to your Java class (e.g. in the class constructor).
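For example, a minimal sketch (the class name AudioConfig is just illustrative):

public class AudioConfig {
    private final AudioManager audioManager;

    // Pass any Context in (e.g. your Activity) and keep the AudioManager.
    public AudioConfig(Context context) {
        audioManager = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
    }

    public int getFramesPerBurst() {
        return Integer.parseInt(
                audioManager.getProperty(AudioManager.PROPERTY_OUTPUT_FRAMES_PER_BUFFER));
    }
}

// From MainActivity: int burst = new AudioConfig(this).getFramesPerBurst();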
What do you mean by "You should also fully drain the AudioRecord for the first second when doing the echo"? Can you give an example?
When you start reading data from an input audio device (e.g. a microphone) its data buffer may be full. This means you'll be reading the oldest data first, which results in sub-optimal latency. To read the newest data you should clear the audio device's data buffer; this can be done by performing a few successive reads (the data can be discarded), which drains the buffer and lets you access the latest data from the input audio device.
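Something along these lines (a sketch; it assumes API 23+ for the non-blocking read mode, a `record` AudioRecord that is already recording, and a framesPerBurst value obtained as above):

short[] scratch = new short[framesPerBurst];
int n;
// Keep reading until nothing is left buffered; discard what we read.
do {
    n = record.read(scratch, 0, scratch.length, AudioRecord.READ_NON_BLOCKING);
} while (n > 0);
// The next blocking read will return the freshest available input.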
Can you give an example with audio effects like an HPF and the WSOLA algorithm? For example, I need to save the last samples before convolution.
I created an HPF in MATLAB and added this code to https://github.com/googlesamples/android-ndk/tree/master/audio-echo/app/src/main/cpp:
double filter[]={-0.000271644477087926,0.00231682081656234,-0.00697762679239381,0.0113431078907557,-0.00922785019978973,-0.000953863100870534,0.00985593896141045,-0.00555013199491495,-0.00869288595337110,0.0130688967200457,0.00298808043964922,-0.0197522326251256,0.00894497147708418,0.0214370330423526,-0.0266004895962514,-0.0129511674987413,0.0473297694426478,-0.0123916710107896,-0.0669754812188437,0.0708843498739455,0.0810696191193916,-0.305755100930970,0.413804336148547,-0.305755100930970,0.0810696191193916,0.0708843498739455,-0.0669754812188437,-0.0123916710107896,0.0473297694426478,-0.0129511674987413,-0.0266004895962514,0.0214370330423526,0.00894497147708418,-0.0197522326251256,0.00298808043964922,0.0130688967200457,-0.00869288595337110,-0.00555013199491495,0.00985593896141045,-0.000953863100870534,-0.00922785019978973,0.0113431078907557,-0.00697762679239381,0.00231682081656234,-0.000271644477087926};
double *convolution(uint8_t *x, int len_signal, double *filter, int len_filter) {
    int length = len_signal + len_filter;
    double *y = new double[length];
    for (int n = 0; n < length; n++) {
        y[n] = 0;
        for (int k = 0; k < len_filter; k++) {
            if (n - k >= 0 && n - k < len_signal)
                y[n] += x[n - k] * filter[k];
        }
    }
    return y;
}

void ece420ProcessFrame(sample_buf *dataBuf) {
    dataBuf->buf_ = (uint8_t *) convolution(dataBuf->buf_, dataBuf->size_, filter, 45);
}
Can you help me, please ?
Thanks, Alex
Hi, could you help me with implementing an HPF in real time? I want to add it to the NDK audio-echo example. Thanks, Alex
This site is for Android-specific audio programming. A high-pass filter, WSOLA, or delay techniques are not Android-specific. I suggest doing a Google search. Start with "RBJ filter cookbook". Good luck!
Great doc Phil. For reference, here it is: http://www.musicdsp.org/files/Audio-EQ-Cookbook.txt
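To make that concrete, a rough Java sketch of the cookbook's second-order high-pass filter (direct form I); the 48000 Hz rate, 200 Hz cutoff and Q of 0.707 in the usage line are only example values:

public class HighPassFilter {
    private final double b0, b1, b2, a1, a2;
    private double x1, x2, y1, y2;   // previous inputs and outputs

    public HighPassFilter(double sampleRate, double cutoffHz, double q) {
        double w0 = 2.0 * Math.PI * cutoffHz / sampleRate;
        double alpha = Math.sin(w0) / (2.0 * q);
        double cosw0 = Math.cos(w0);
        double a0 = 1 + alpha;
        // RBJ cookbook high-pass coefficients, normalized by a0.
        b0 = ((1 + cosw0) / 2) / a0;
        b1 = (-(1 + cosw0)) / a0;
        b2 = ((1 + cosw0) / 2) / a0;
        a1 = (-2 * cosw0) / a0;
        a2 = (1 - alpha) / a0;
    }

    public short process(short in) {
        double x0 = in;
        double y0 = b0 * x0 + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2;
        x2 = x1; x1 = x0;
        y2 = y1; y1 = y0;
        return (short) Math.max(Short.MIN_VALUE, Math.min(Short.MAX_VALUE, y0));
    }
}

// Example: short filtered = new HighPassFilter(48000, 200, 0.707).process(sample);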
audioManager = (AudioManager) this.getSystemService(Context.AUDIO_SERVICE);
audioManager.setMode(AudioManager.MODE_IN_COMMUNICATION);
audioManager.setSpeakerphoneOn(true);
requestAudioPermissions();
(new Thread()
{
@Override
public void run()
{
recordAndPlay();
}
}).start();
startButton = findViewById(R.id.start_button);
startButton.setOnClickListener(new View.OnClickListener()
{
@Override
public void onClick(View v)
{
if (!isRecording)
{
initRecordAndTrack();
startRecordAndPlay();
}
}
});
stopButton = findViewById(R.id.stop_button);
stopButton.setOnClickListener(new View.OnClickListener()
{
@Override
public void onClick(View v)
{
stopRecordAndPlay();
}
});
}
public void initRecordAndTrack()
{
// int min=9000;
min = AudioRecord.getMinBufferSize(16000, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
audioRecord = new AudioRecord(MediaRecorder.AudioSource.VOICE_COMMUNICATION, 16000, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, min);
if (AcousticEchoCanceler.isAvailable())
{
AcousticEchoCanceler echoCancler = AcousticEchoCanceler.create(audioRecord.getAudioSessionId());
echoCancler.setEnabled(true);
}
int maxJitter = AudioTrack.getMinBufferSize(16000, AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);
audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, 16000, AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT, maxJitter,
AudioTrack.MODE_STREAM);
}
private void recordAndPlay()
{
final short[] lin = new short[1024];
final int[] num = {0};
audioManager.setMode(AudioManager.MODE_IN_COMMUNICATION);
int bufferSize = 0;
while (true)
{
if (isRecording)
{
bufferSize++;
Log.i("recording::","continue...");
num[0] = audioRecord.read(lin, 0, 1024);
audioTrack.write(lin, 0, num[0]);
}
}
}
private void startRecordAndPlay()
{
audioRecord.startRecording();
audioTrack.play();
isRecording = true;
}
private void stopRecordAndPlay()
{
audioRecord.stop();
audioTrack.pause();
audioTrack.release();
isRecording = false;
}
I have tried this. I want to add a custom delay, like 2 seconds, 3 seconds, or more. How can I achieve this?
This is to move a question from the Android NDK samples: https://github.com/googlesamples/android-ndk/issues/476
I want to change the delay, for example by changing the sample rate or the buffer size. How can I do it?