earlephilhower / ESP8266Audio

Arduino library to play MOD, WAV, FLAC, MIDI, RTTTL, MP3, and AAC files on I2S DACs or with a software emulated delta-sigma DAC on the ESP8266 and ESP32
GNU General Public License v3.0

How to play runtime generated raw audio stream? #405

Open valioiv opened 3 years ago

valioiv commented 3 years ago

Hello,

I want to use the AudioOutputI2S class to play, on a codec chip, wavelets generated at runtime, or even PCM16 audio packets received over UDP. I know I should probably use the AudioGeneratorWAV class (not fully sure, because it expects a header, which I don't have) together with the AudioFileSourceBuffer class, but I don't understand how exactly to feed them with whatever I receive over UDP...
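(As an aside on the missing header: a raw PCM stream can be made to look like a WAV file by prepending a canonical 44-byte PCM header describing the known format. This is a common workaround, not something ESP8266Audio provides, and the function name here is hypothetical.)

```cpp
#include <array>
#include <cstdint>
#include <cstring>

// Build a canonical 44-byte PCM WAV header for a known sample format so a
// raw PCM stream can be fed to a header-expecting decoder.
// (Common workaround; not part of ESP8266Audio. Name is hypothetical.)
static std::array<uint8_t, 44> makeWavHeader(uint32_t sampleRate,
                                             uint16_t channels,
                                             uint16_t bitsPerSample,
                                             uint32_t dataLen) {
    std::array<uint8_t, 44> h{};
    auto put16 = [&](size_t off, uint16_t v) {
        h[off] = v & 0xFF; h[off + 1] = v >> 8;
    };
    auto put32 = [&](size_t off, uint32_t v) {
        for (int i = 0; i < 4; i++) h[off + i] = (v >> (8 * i)) & 0xFF;
    };
    std::memcpy(&h[0], "RIFF", 4);
    put32(4, 36 + dataLen);            // RIFF chunk size
    std::memcpy(&h[8], "WAVE", 4);
    std::memcpy(&h[12], "fmt ", 4);
    put32(16, 16);                     // fmt subchunk size (PCM)
    put16(20, 1);                      // audio format 1 = PCM
    put16(22, channels);
    put32(24, sampleRate);
    put32(28, sampleRate * channels * bitsPerSample / 8); // byte rate
    put16(32, channels * bitsPerSample / 8);              // block align
    put16(34, bitsPerSample);
    std::memcpy(&h[36], "data", 4);
    put32(40, dataLen);                // data subchunk size
    return h;
}
```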

Can someone give me some hints? Any code examples are highly appreciated. I've taken a look at this, but I don't understand how to do the same with UDP instead of an HTTP stream:

https://digitalcommons.calpoly.edu/cgi/viewcontent.cgi?article=1338&context=cpesp

#include <Arduino.h>
#include "AudioGeneratorMP3.h"
#include "AudioOutputI2S.h"
#include "AudioFileSourceHTTPStream.h"
#include "AudioFileSourceBuffer.h"
#include <WiFi.h>
// Define our WiFi SSID and Password
const char* ssid = "xxxxxxx";
const char* password = "xxxxxxx";
// Declare our MP3 audio generator, MP3 file, the HTTP buffer, and the I2S audio sink
AudioGeneratorMP3 *mp3;
AudioFileSourceHTTPStream *file;
AudioFileSourceBuffer *buff;
AudioOutputI2S *out;
void setup()
{
    Serial.begin(115200);
    delay(1000);
    Serial.print("Connecting to ");
    Serial.println(ssid);
    // Attempt to connect to WiFi
    WiFi.begin(ssid, password);
    // Print “..........” until connected
    while (WiFi.status() != WL_CONNECTED) {
        delay(500);
        Serial.print(".");
    }
    // Once connected, initialize our Audio stream (the below IP address is my Raspberry Pi with IceCast)
    file = new AudioFileSourceHTTPStream("http://192.168.86.243:8000/test.mp3");
    buff = new AudioFileSourceBuffer(file, 2048);
    out = new AudioOutputI2S();
    mp3 = new AudioGeneratorMP3();
    // Pass the buffer, not the HTTP stream, to the MP3 generator to enable buffering
    mp3->begin(buff, out);
}
void loop()
{
    // While the stream is still running, handle it and print MP3 debug info
    if (mp3->isRunning()) {
        Serial.printf("MP3 Running\n");
        if (!mp3->loop()) mp3->stop();
    } else {
        // If the stream is over, wait until it starts again
        Serial.printf("MP3 done\n");
        delay(1000);
    }
}
earlephilhower commented 3 years ago

If you just want to play a stream of bytes, there's not much reason to use this library. Just use the native I2S class or the i2s_xxx SDK routines to write data that you read from a UDP Server/Client.

If you need the NoDAC version of the output, just use the NoDAC object instead of the Arduino I2S one, calling ConsumeSamples instead of write().

Check the AudioOutput.h header for more info. Good luck!

valioiv commented 3 years ago

If you just want to play a stream of bytes, there's not much reason to use this library. Just use the native I2S class or the i2s_xxx SDK routines to write data that you read from a UDP Server/Client.

Actually, I don't have the option of using anything else for audio, or anything at a different level of abstraction like the I2S driver directly, because the codebase I'm currently extending is already tightly coupled to the ESP8266Audio lib. Besides that, I like the idea of having different types of streams, files, formats, etc. managed in a common way, which is another argument in favor of the ESP8266Audio lib.

If you need the NoDAC version of the output, just use the NoDAC object instead of the Arduino I2S, using the ConsumeSamples call instead of write()

If NoDAC means generating PWM on an ESP32 pin, then no, I don't need NoDAC. I have a separate audio codec on the I2S lines and I want to use the AudioOutputI2S class itself. It actually works pretty well when playing WAV from flash using the AudioGeneratorWAV class + the AudioFileSourcePROGMEM class. I also use the ESP8266SAM class for speech synthesis, which uses the same AudioOutputI2S object. So I just want to extend the functionality to realtime-generated/network-sourced data using the same ESP8266Audio-based "framework" that I currently have.

Please, @earlephilhower, as the designer of the ESP8266Audio lib, suggest the easiest approach in the context of the ESP8266Audio lib!

earlephilhower commented 3 years ago

Just use AudioOutputI2S and its config calls to set the bitrate, etc. Then use ConsumeSamples in your UDP receive function and that should do it (assuming no buffering issues).
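(A sketch of that receive path, for illustration: the assumed packet layout is interleaved little-endian 16-bit stereo PCM, and the decode helper plus the UDP names in the trailing comment are hypothetical; only ConsumeSamples() is the library call.)

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Assumed packet layout: interleaved little-endian 16-bit stereo PCM,
// i.e. L0 R0 L1 R1 ... as received in one UDP datagram.
// Decode the raw bytes into the int16_t frame array that
// AudioOutputI2S::ConsumeSamples(int16_t *samples, uint16_t count) expects.
static std::vector<int16_t> decodePcm16(const uint8_t *pkt, size_t len) {
    std::vector<int16_t> frames;
    frames.reserve(len / 2);
    for (size_t i = 0; i + 1 < len; i += 2) {
        // Little-endian: low byte first
        frames.push_back(static_cast<int16_t>(pkt[i] | (pkt[i + 1] << 8)));
    }
    return frames;
}

// In the sketch's UDP receive handler (names hypothetical):
//   int len = udp.read(pktBuf, sizeof(pktBuf));
//   auto frames = decodePcm16(pktBuf, (size_t)len);
//   out->ConsumeSamples(frames.data(), frames.size() / 2); // count = stereo frames
```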

valioiv commented 3 years ago

Just use AudioOutputI2S and its config calls to set the bitrate, etc. Then use ConsumeSamples in your UDP receive function and that should do it (assuming no buffering issues).

Okay, ConsumeSamples() in AudioOutputI2S might work, but not exactly in my case. I'm using it, but the audio sounds "choppy". I guess that's because the default ConsumeSamples implementation lives in the base class AudioOutput and just invokes the single-sample method ConsumeSample in a loop, which in my opinion consumes too much CPU. I think ConsumeSamples should be overridden in the specific classes (like AudioOutputI2S) to use platform-specific DMA mechanisms for acceleration, like i2s_write() on the ESP32.
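(The pattern being described, modeled with a simplified stand-in class rather than the library source: the base-class fallback pays one virtual call per stereo frame, which is where a subclass override that hands the whole block to a DMA-backed call like i2s_write() could win.)

```cpp
#include <cstdint>

// Simplified stand-in for ESP8266Audio's AudioOutput base class (not the
// library source verbatim). The default ConsumeSamples() loops the
// per-frame virtual ConsumeSample(), so every stereo frame pays a virtual
// call plus whatever per-sample work the subclass does; a subclass like
// AudioOutputI2S could override ConsumeSamples() to push the whole block
// through a DMA-backed platform call instead.
struct MockOutput {
    int framesWritten = 0;

    // Per-frame path: sample[0] = left, sample[1] = right.
    virtual bool ConsumeSample(int16_t sample[2]) {
        (void)sample;
        framesWritten++;
        return true; // false would mean the output buffer is full
    }

    // Base-class fallback: one virtual call per stereo frame.
    virtual uint16_t ConsumeSamples(int16_t *samples, uint16_t count) {
        for (uint16_t i = 0; i < count; i++) {
            if (!ConsumeSample(samples)) return i; // partial write
            samples += 2; // advance one stereo frame
        }
        return count;
    }

    virtual ~MockOutput() = default;
};
```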

@earlephilhower , what do you think about that?