ashald opened 4 years ago
Just glancing at the output, the Apple TV doesn't seem to recognize the stream and start playing it. It is really hard to debug as no feedback/error message is provided back, so I can't really tell why. But my guess is that RTSP is not supported via this method. It is a different subset of AirPlay, as documented here:
https://nto.github.io/AirPlay.html#audio
I guess that MP4 video is basically the safest bet. Can you transcode the audio and stream it via an MP4 container instead? Maybe there's a dummy video generator that produces a black image or so?
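Something along these lines might do it (untested; input.mp3 and output.mp4 are just placeholders), using ffmpeg's lavfi color source as the dummy black video:
$ ffmpeg -f lavfi -i color=c=black:s=1280x720:r=25 -i input.mp3 -c:v libx264 -tune stillimage -c:a aac -shortest output.mp4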
Thanks for the super quick reply! Tried it with MP4 using both MP3 and AAC codecs and neither worked. I guess I'll have to abandon this idea for now. Thanks for the help!
No problem! I know that it is quite picky when it comes to how media is served. When I was prototyping #95 ages ago, serving a demo clip (Big Buck Bunny, IIRC) with the aiohttp web server didn't work, but the exact same clip worked when served from an Apache web server. So I guess support for something was missing, but I don't know what. Maybe that can be a lead somewhere?
Been reading up a bit and I don't think RTSP is supported this way. MP4 (even MP3) and live http streaming should work fine. But for RTSP I do believe the RTSP part of AirPlay must be used:
https://emanuelecozzi.net/docs/airplay2/rtsp
It looks like Apple has made some changes to it (why wouldn't they...), so most likely the audio must be relayed via the player, i.e. pyatv in this case. That would be a lot of work, which means it's something I won't support for now.
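As a sanity check of the HTTP path, serving a file with any web server and pointing play_url at it should be enough (untested; addresses and file name are placeholders):
$ python3 -m http.server 8000
$ atvremote -s <apple-tv-ip> play_url=http://<server-ip>:8000/sample.mp3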
Can't really drop this, since the format limitation is irritating. One thing I was thinking about is doing something similar to how I implemented support for playing local files, i.e. spinning up a local web server within pyatv and serving the file from there. In this case I would use ffmpeg to transcode whatever input is given into something that the Apple TV accepts (and point the Apple TV to play from the ffmpeg server). This will of course introduce additional CPU load, additional latency, complexity and an endless amount of parameters that people would want to fiddle with. But it could be a step forward towards supporting many more formats, and I would probably make it opt-in. There are some Python bindings for ffmpeg; not sure if they support what I need, but it could be investigated.
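Done manually, the idea would look roughly like this (untested; input and addresses are placeholders), with ffmpeg itself acting as the web server:
$ ffmpeg -re -i <any-input> -c:v libx264 -c:a aac -movflags frag_keyframe+empty_moov -f mp4 -listen 1 http://0.0.0.0:8080/stream.mp4
$ atvremote -s <apple-tv-ip> play_url=http://<server-ip>:8080/stream.mp4
Whether the Apple TV accepts a fragmented MP4 served like this is exactly the kind of thing that would need testing.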
How would you feel about such a solution?
That sounds great to me - I mean, given all the circumstances!
For now, I found a workaround using forked-daapd to stream the audio, but the connection drops sometimes and I had to put crutches around it to restart the service and reconnect each time I detect it's down. As I mentioned originally, in a nutshell, the use case here is to stream audio, similarly to how one can stream video with pyatv. From your explanations, my understanding is that one can assemble a solution based on ffmpeg and realtime transcoding, but it would be amazing if pyatv supported this out of the box.
I am certainly not an ffmpeg expert, but I do know that you can do a lot of crazy voodoo magic with it. So it should certainly be possible. This page has a lot of examples:
https://trac.ffmpeg.org/wiki/StreamingGuide
In the end I would need something that works with both audio and video. Since ffmpeg has to be installed separately, some nice handling of when it's missing needs to be in place.
My goal would be to make streaming anything that ffmpeg can transcode as simple as streaming what is natively supported, so it should be transparent to the end user. If you have some time over and feel like experimenting, it would help a lot to have a working example, i.e. an ffmpeg command that does this and works for streaming to the Apple TV by manually passing in the URL of the ffmpeg server.
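As a starting point, an HLS variant is probably the most likely candidate, since HLS is Apple's own format for HTTP live streaming (untested; file names and addresses are placeholders):
$ ffmpeg -re -i <any-input> -c:v libx264 -c:a aac -f hls -hls_time 4 -hls_list_size 0 stream.m3u8
$ python3 -m http.server 8000
$ atvremote -s <apple-tv-ip> play_url=http://<server-ip>:8000/stream.m3u8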
@ashald Just wanted to check if this is still relevant for you? Since it's possible to stream audio via RAOP/AirTunes now, maybe I can make some modifications to allow a stream from stdin. In that case, you should be able to take in your stream via cvlc, ffmpeg or anything else and just pipe it to atvremote and have it play.
@ashald So, I prototyped this a bit yesterday and I think I have something that can play audio from a buffer, e.g. stdin. So you can pass audio like this (assuming the audio doesn't require seeking, see documentation in PR):
$ ffmpeg -i sample.wav -f mp3 - | atvremote -s 10.0.10.194 --debug stream_file=-
It's of course possible to write a small script that does this, without having to call ffmpeg manually. But this is the easiest and most general way. If you have some time to try it out, that would be great!
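A rough, untested sketch of such a script, assuming the stream_file API from the PR accepts a binary file-like object:
```python
#!/usr/bin/env python3
"""Untested sketch: pipe ffmpeg-transcoded audio into pyatv's stream_file."""
import asyncio
import subprocess
import sys

import pyatv


async def main(address: str, source: str) -> None:
    # Transcode whatever the input is to MP3 and write it to a pipe.
    ffmpeg = subprocess.Popen(
        ["ffmpeg", "-i", source, "-f", "mp3", "-"],
        stdout=subprocess.PIPE,
    )

    loop = asyncio.get_running_loop()
    confs = await pyatv.scan(loop, hosts=[address])
    if not confs:
        print("No device found at", address, file=sys.stderr)
        return

    atv = await pyatv.connect(confs[0], loop)
    try:
        # Feed the transcoded audio straight from ffmpeg's stdout.
        await atv.stream.stream_file(ffmpeg.stdout)
    finally:
        atv.close()
        ffmpeg.terminate()


if __name__ == "__main__":
    asyncio.run(main(sys.argv[1], sys.argv[2]))
```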
I had to make a minor fix to get the timing right (it was a bug), but now I'm able to stream and transcode audio from an RTSP source with ffmpeg and pass it to atvremote like this (verified with AirPort Express and HomePod mini):
$ ffmpeg -i rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov -f mp3 - | atvremote --debug -s 10.0.10.194 stream_file=-
And it plays nicely! It is however worth noting that you get a buffering delay from ffmpeg and additionally about 1.5 s of delay from AirPlay, so it might take some time before there's any output.
I noticed that if the source blocks, e.g. due to buffering, then there will be artifacts in the output. That's logical, since we don't send any packets when we are expected to. So I need to put a small buffer into the module dealing with streaming the audio to fix this. The idea is to maintain a small buffer that is immediately available to grab data from when reading frames. A background task will be responsible for filling the buffer whenever it's getting low. This allows some minor hiccups without artifacts. If the source blocks long enough that we run out of buffered frames, there will be silence and the audio will continue once more frames are available.
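Roughly this idea (just an illustration, not the actual implementation):
```python
import asyncio
from collections import deque


class FrameBuffer:
    """Keep a small stash of audio frames filled by a background task,
    so the sender never has to wait on a slow source."""

    def __init__(self, source, max_frames=64):
        self._source = source      # async callable: returns one frame (bytes) or None at EOF
        self._frames = deque()
        self._max = max_frames
        self._eof = False
        self._task = None

    def start(self):
        # Kick off the background filler task.
        self._task = asyncio.ensure_future(self._fill())

    async def _fill(self):
        # Top up the buffer whenever it is not full.
        while not self._eof:
            if len(self._frames) >= self._max:
                await asyncio.sleep(0.01)
                continue
            frame = await self._source()
            if frame is None:
                self._eof = True
            else:
                self._frames.append(frame)

    def read_frame(self):
        # Called on the fixed send schedule; must never wait for the source.
        if self._frames:
            return self._frames.popleft()
        # Buffer ran dry: caller sends silence until frames show up again.
        return None
```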
Hi @postlund - thanks for looking into this!
I ended up setting up forked-daapd to solve my problem, but what you described above is insanely cool. If I were redoing my setup from scratch I'd definitely use atvremote instead for its simplicity.
@ashald Great that you managed to get it working! OwnTone (formerly forked-daapd) is a cool project and works very well once it's up and running! So stick with it 😊
I'll try to finish up my buffering implementation by next week and merge it. It's interesting to work with, as I have never touched anything remotely close to streaming before, so I'm learning a lot!
Describe the bug
I have an audio stream on my Raspberry Pi (from a record player) that I want to stream to my Apple TV with pyatv and it doesn't work. Not sure if there is an issue in pyatv or I'm doing something wrong/trying to do something that is not supported.

To Reproduce
The easiest way to reproduce the streaming setup is:
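Serve an MP3 over RTSP with VLC, roughly like this (sample.mp3 is just a stand-in for the real input):
$ cvlc sample.mp3 --sout '#rtp{sdp=rtsp://:8554/audio}' --loop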
In reality the input is an ALSA audio device, but an MP3 file gives the same effect.
Once that's running I'm trying to run the following:
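Something along these lines (device and host addresses are placeholders):
$ atvremote -s <apple-tv-ip> play_url=rtsp://<raspberry-pi-ip>:8554/audio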
The Apple TV screen goes dark for a few moments as though it's trying to start playback (with a visible spinner) and then it "crashes" back to the main menu.
Expected behavior
Apple TV plays audio from the stream. When I open the RTSP stream from another host it plays normally.
System Setup (please complete the following information):
- OS: Raspbian GNU/Linux 9 (stretch)
- Python: 3.5.3
- pyatv: 0.4.0a12
Additional context
Debug output: