whitphx / streamlit-webrtc

Real-time video and audio streams over the network, with Streamlit.
https://discuss.streamlit.io/t/new-component-streamlit-webrtc-a-new-way-to-deal-with-real-time-media-streams/8669
MIT License

Programmable audio source #1358

Open whitphx opened 1 year ago

whitphx commented 1 year ago

I created the video source (#1349), but didn't create the audio version, because I'm not sure anyone needs it and I don't know how to generate the audio data.

If someone wants it, please leave a comment on this issue, and also teach me how to generate audio data to feed to the PyAV library.
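For anyone looking into this: one way to generate PCM data that PyAV can consume is to synthesize it with NumPy and copy the raw bytes into a frame plane with `frame.planes[0].update(...)`. A sketch under that assumption (the helper name is mine, not part of any library):

```python
import numpy as np


def sine_pcm_s16(
    freq: float = 440.0, sample_rate: int = 8000, samples: int = 160, start: int = 0
) -> bytes:
    """Return `samples` of a sine tone as raw little-endian s16 mono bytes.

    The result can be copied into a PyAV frame created with
    av.AudioFrame(format="s16", layout="mono", samples=samples)
    via frame.planes[0].update(...).
    """
    t = (start + np.arange(samples)) / sample_rate
    pcm = (0.3 * np.sin(2 * np.pi * freq * t) * 32767.0).astype("<i2")
    return pcm.tobytes()
```

Passing `start` as the running sample offset keeps the waveform continuous across consecutive 20 ms frames.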

wenshutang commented 12 months ago

Hello, thanks for the great work. I am interested in the audio source. My use case is that I would like to stream audio to the client without the player component. Do you have any pointers on how that could be done?

wenshutang commented 11 months ago

I have a rough version of AudioStreamTrack, a variation on your implementation and aiortc's:

import asyncio
import fractions
import logging
import time
from typing import Callable, Optional

import av
from aiortc.mediastreams import AUDIO_PTIME, MediaStreamError, MediaStreamTrack
from av import AudioFrame

logger = logging.getLogger(__name__)

AudioSourceCallback = Callable[[], av.AudioFrame]


class AudioStreamTrack(MediaStreamTrack):
    """
    A dummy audio track which reads silence.
    """

    kind = "audio"

    def __init__(self, callback: AudioSourceCallback) -> None:
        super().__init__()
        self._callback = callback
        self._start: Optional[float] = None
        self._timestamp: Optional[int] = None

    async def recv(self) -> av.frame.Frame:
        if self.readyState != "live":
            raise MediaStreamError

        sample_rate = 8000
        samples = int(AUDIO_PTIME * sample_rate)  # 20 ms worth of samples

        if self._timestamp is None:
            self._start = time.time()
            self._timestamp = 0
        else:
            self._timestamp += samples
            # Pace frame delivery against the wall clock.
            wait = self._start + (self._timestamp / sample_rate) - time.time()
            if wait > 0:
                await asyncio.sleep(wait)

        # NOTE: as written, this always emits silence and never calls
        # self._call_callback().
        frame = AudioFrame(format="s16", layout="mono", samples=samples)
        for p in frame.planes:
            p.update(bytes(p.buffer_size))
        frame.pts = self._timestamp
        frame.sample_rate = sample_rate
        frame.time_base = fractions.Fraction(1, sample_rate)

        return frame

    def _call_callback(self) -> av.AudioFrame:
        try:
            frame = self._callback()
        except Exception as exc:
            logger.error(
                "AudioStreamTrack: audio frame callback raised an exception: %s",
                exc,
            )
            raise
        return frame

It seems to be working, though it's hard to tell, since the output is supposed to be silent.

However, I'd like to send audio for speech-to-text and receive an audio track back: fully duplex audio, with each track having its own handler. Here is an example webrtc_streamer call.

webrtc_streamer(
    key="player",
    mode=WebRtcMode.SENDRECV,
    audio_frame_callback=process_audio,
    source_audio_track=AudioStreamTrack(audio_source_callback),
    media_stream_constraints={"video": False, "audio": True},
    on_change=on_change,
    async_processing=True
)

Only the source audio track is played; I am not able to process the received audio frames in process_audio. What's the distinction between source_audio_track and audio_frame_callback?

WaterKnight1998 commented 2 months ago

@whitphx I need this feature to play audio that is returned as a stream from an HTTP endpoint. I am converting the requests.post streaming response into an AudioStreamTrack, and then I am trying to play it as follows:

webrtc_ctx = webrtc_streamer(
    key="audio",
    mode=WebRtcMode.SENDONLY,
    source_audio_track=StreamingAudioStreamTrack(text),
    media_stream_constraints={"video": False, "audio": True},
    async_processing=True,
)