jellyfin / jellyfin-chromecast

Chromecast Client for Jellyfin
https://jellyfin.org
GNU General Public License v2.0
132 stars 41 forks

Casting from Jellyfin server from behind reverse proxy #110

Open SteveDinn opened 3 years ago

SteveDinn commented 3 years ago

This isn't really a bug, but a call for advice. I logged this bug in the main Jellyfin repo and got a depressing answer. I had also logged a feature request that I linked to from a comment on that bug, that probably has a very slim chance of getting implemented.

https://github.com/jellyfin/jellyfin/issues/4917

Can anyone suggest a course of action for me that will allow me to cast locally-hosted media from Jellyfin?

TL;DR: Chromecasts have hard-coded DNS servers and are about to deprecate playing media from non-HTTPS sources, and my ISP doesn't allow NAT loopback, so I can't play my locally-hosted media on my Chromecasts.

BloodyIron commented 11 months ago

@jameskimmel I recommend you read the latest info here: https://github.com/jellyfin/jellyfin-chromecast/pull/107

More specifically this comment: https://github.com/jellyfin/jellyfin-chromecast/pull/107#issuecomment-1762027250

jameskimmel commented 11 months ago

@BloodyIron Thanks for the reminder. But right now I don't encounter any problems. I don't use Chromecast locally, and it works from remote (at least from Android to Chromecast; not sure about iPhones). I have no idea what the current status of this bug is.

BloodyIron commented 11 months ago

@jameskimmel yeah for me casting off-LAN was working just fine, the issue was on-LAN for me.

jameskimmel commented 11 months ago

@BloodyIron Have you opened an issue for that problem? My guess is that your problem has to do with DNS. DNS should be the only thing that is different from WAN to LAN. I think you need to override your local DNS to hand out the Jellyfin server's LAN IP (192.168.1.10) instead of your WAN IP (e.g. 89.219.12.13). Then you would also have to block 8.8.8.8 and 8.8.4.4 so the Chromecast clients cannot ask them.

Probably even better would be to use IPv6 instead of IPv4, so you don't have to bother with NAT. That way the IPv6 address of the Jellyfin server is the same whether you are on LAN or WAN, you don't have to do DNS overrides, and even 8.8.8.8 will work. But you would also have to disable IPv4, or hope that Jellyfin and Chromecast always prefer IPv6 over IPv4. Sorry if this is getting off topic. I just think the NAT issue is something a lot of people forget when troubleshooting Chromecast.
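A minimal sketch of such an override with unbound (the hostname, LAN IP, and config path are placeholders taken from the example above; adjust them to your setup):

```shell
# Sketch only: make jelly.yourdomain.com resolve to the Jellyfin server's
# LAN IP for local clients, instead of the WAN IP the public record returns.
cat >> /etc/unbound/unbound.conf <<'EOF'
server:
  local-zone: "jelly.yourdomain.com." redirect
  local-data: "jelly.yourdomain.com. A 192.168.1.10"
EOF
unbound-control reload
```

Public DNS keeps returning the WAN IP; only clients that actually ask this resolver get the LAN address, which is why the hard-coded 8.8.8.8/8.8.4.4 still has to be blocked or redirected.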

BloodyIron commented 11 months ago

The DNS aspect is actually already (partially) covered in the official Jellyfin documentation (though the 8.8.4.4 server is absent, mind you), so this matter is well known. The nameservers on my LAN already serve the LAN IP when queried (hence the DNS overrides being part of the picture that gets results). In the #107 thread linked above I already go into the DNS-blocking details needed for success. There is no IPv6 active in the environment at all, so that's not relevant.

I am not going to ever disable IPv4; the whole network relies on that. That's an absurd recommendation, frankly.

The problems I've had have nothing to do with NAT at all.

At this point the crux of the solution boils down to:

  1. Block DNS 8.8.8.8 and 8.8.4.4 for Chromecast device(s)
  2. Use the unstable Chromecast app build for Jellyfin (which only VERY recently became available, as fixes to the code had been blocked)
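On a Linux-based gateway, step 1 could look roughly like this (the Chromecast's address 192.168.1.50 is an assumed static DHCP lease; chain names and interfaces vary by router):

```shell
# Sketch only: drop the Chromecast's queries to Google DNS so it falls
# back to the DNS server it was handed via DHCP.
iptables -I FORWARD -s 192.168.1.50 -d 8.8.8.8 -p udp --dport 53 -j DROP
iptables -I FORWARD -s 192.168.1.50 -d 8.8.4.4 -p udp --dport 53 -j DROP
# DNS can also run over TCP/53, so block that too:
iptables -I FORWARD -s 192.168.1.50 -d 8.8.8.8 -p tcp --dport 53 -j DROP
iptables -I FORWARD -s 192.168.1.50 -d 8.8.4.4 -p tcp --dport 53 -j DROP
```

On pfSense the same effect is achieved with firewall rules in the GUI rather than raw iptables.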

Everything else I've already exhaustively evaluated over many months (most of this year).

I think at this point I'm going to unsubscribe from this thread.

jameskimmel commented 10 months ago

@BloodyIron Sorry, there seems to be a misunderstanding. English is not my native language, so I probably have not explained myself very well. Also, I am no longer using Chromecast, so maybe I have outdated assumptions about the current state. Last time I checked, I understood it like this:

Two things make Chromecast special: the hard-coded DNS (just Google's servers) and the requirement for a valid cert. To my knowledge, you can't change the DNS settings, and your Chromecast won't accept self-signed certs. So when it comes to playing to a Chromecast on the LAN, you are left with two options.

Option 1: Create your own unbound override rule, so that jelly.yourdomain.com does not point to your WAN IP but to your local LAN IP. But because of the hardcoded DNS servers, you also have to block outgoing requests to 8.8.8.8 and 8.8.4.4 and hope the Chromecast falls back to your local DNS server. Or you could use a DNS redirect like you did. You also have to hope that there will never be a Chromecast update that implements DoH or DoT, otherwise the workaround would break.

Option 2: Use IPv6.

I am not going to ever disable IPv4

I never said you should disable IPv4, and I don't know where you got that idea. If Jellyfin prefers IPv6 over IPv4 like it should, you can run dual stack and don't need these local DNS shenanigans.

So again, just so we are not misunderstanding each other: I have no idea what the current status is. When I first commented, external Chromecast did not work for me, despite having public DNS and a valid cert. Now it does, so the problem is solved for me. I have no idea why; I did not change anything.

If I understood you correctly, external works for you but not internal. So I ask myself, what is the difference between external and internal? Public IP and local IP. That is where I am coming from with my suggestions.

But how do we get closer to a solution? Well, first of all we have a huge problem even identifying the issue(s). There are a lot of open issues. The other problem is that a lot of people in these issues don't have the network knowledge to understand how Chromecast works and why external and internal are not the same. And to be honest, I am not 100% sure you understood the problem. In the other issue you ask:

Anyways, at this point I have no way of determining what is making Jellyfin Chromecasting require the DNS blocking

I am no dev either. But in my logic, a lot of things are needed to make it work with IPv4. First, the app on the phone itself would need some kind of logic to determine whether it is on a local or remote connection. Or you would need to add the server twice, once with the public and once with the local IP. Then the app could use the local IP to connect to the Chromecast client, so it could send the command to the Chromecast: "hey, play jelly.yourdomain.com/asdflasfasfd/". Now you need a solution so that your A record jelly.yourdomain.com is not translated to your WAN IP but to your JellyfinServerLanIP.

So you see, it is not Jellyfin that requires that blocking but Chromecast itself. I hope that helps you determine why you need DNS blocking. I would even go a step further and say: not only do you need DNS blocking, you also need some kind of unbound override or DNS redirect. It would be interesting to see what workaround Emby has found for that.

One first step would be to change the networking documentation, which currently reads:

In order for Chromecast to work on a non-public routable connection, 8.8.8.8 must be blocked on the Chromecast's Gateway. Blocking 8.8.8.8 on your router is the easiest solution to this problem.

I would argue that making the connection publicly routable by using IPv6 instead of IPv4 is the easiest solution to this problem.

BloodyIron commented 10 months ago

@jameskimmel

  1. Hey no worries about language there bud! It's all good! :)
  2. I recommend you review the #107 comment I linked above, as it is actually me going into detail about the solution in my case: https://github.com/jellyfin/jellyfin-chromecast/pull/107
  3. I have a valid cert and DNS blocking for the (testing) Chromecast device. I have my gateway statically assign (via DHCP) the IPv4 address to the MAC address of the (testing) Chromecast device, then redirect DNS requests from that IP to the internet to 127.0.0.1, and then the DNS nameserver on the gateway resolves Jellyfin stuff to the LAN IP. What is probably going on is that without the DNS blocking, the Chromecast resolves the WAN IP (public IP), not the LAN IP, as the public records for the Jellyfin FQDN are only set to the WAN IP.
  4. Emby (which I am likely to migrate away from) is also running on my LAN in mostly the same way, but does not (somehow?!?!) require the DNS blocking for Chromecasting.
  5. I think the "disable IPv4" thing might have been a language misunderstanding, don't worry about it ;P
  6. The app doesn't need to be modified in any way to work in this network configuration, as at this point sending to the testing Chromecast device on LAN now works for me (hence posting what it took in that linked #107 issue).

Unsure if I missed anything... 🤔

jameskimmel commented 10 months ago

Hey @BloodyIron

  1. Perfect :)
  2. I hope you understand why I won't read 100 comments. Everything works for me, I just try to help you guys.
  3. That is not only probably what is going on here but most definitely what is going on here.
  4. That is fascinating, I wonder how Emby can do this. I can't think of a way they can do this without breaking the cert.

But my main takeaway is that right now casting works as intended, and what you want is more of a feature request than a bug. I still think that using IPv6 is easier than NAT reflection. Most home routers won't even allow DNS blocking or NAT reflection. This is why I created this issue: https://github.com/jellyfin/jellyfin.org/issues/714

BloodyIron commented 10 months ago

@jameskimmel

OOPS! I meant to link to the relevant comment directly, hah, sorry about that! Try this: https://github.com/jellyfin/jellyfin-chromecast/pull/107#issuecomment-1762027250

I'm also not doing NAT reflection at all on the LAN; it's a DNS override/redirect, and then the Chromecast (on the LAN) connects to the LAN IP directly. The gateway/router firewall/NAT capabilities never come into the picture. (BTW, in my case pfSense.)

jameskimmel commented 10 months ago

What you are doing is blocking both Google DNS servers and hoping that the Chromecast will use your DNS as a fallback. This is not a solution but a wacky workaround, in my opinion. A little bit less wacky would be to redirect all outgoing DNS requests to your local DNS server. But either way, these workarounds could break tomorrow if Google decides to use DoH or DoT.

That is why I think it is so much simpler and more future-proof to use IPv6 instead.

BloodyIron commented 10 months ago

It's not hope; I can actually watch the states in pfSense and prove that the Chromecast only uses my LAN nameservers. There's no ambiguity here. And it works so it clearly is a solution. But it's not one that's good, hence wanting this to somehow not be a requirement (unsure where best to ask whether that's a Jellyfin server issue or elsewhere).

Also consider that the DNS 8.8.8.8 blocking is actually part of the official Jellyfin documentation (and I don't know why 8.8.4.4 is not mentioned as in my testing that one is also required), "...8.8.8.8 must be blocked on the Chromecast's Gateway. Blocking 8.8.8.8 on your router is the easiest solution to this problem." : https://jellyfin.org/docs/general/networking/

I'm not going to switch any of my networking to use IPv6. That's your preference; for me it's a complete waste of time, as it's speculative whether it would even work, especially when I have evidence in hand that the method I've found (which, by the way, I'm the only person on the planet to actually confirm in my k8s/related scenario) actually works.

I don't know if IPv6 is a workable alternative, and maybe it is. But I'm not going to spend any time on that. I have so much on my plate in life as it is that's just not a wise use of my time to any measure.

jameskimmel commented 10 months ago

And it works so it clearly is a solution.

It is a workaround that works for now. But the main issue (NAT) is still not solved.

part of the official Jellyfin documentation

That documentation is outdated, hence the missing 8.8.4.4, hence my open issue.

I don't know if IPv6 is a workable alternative, and maybe it is. But I'm not going to spend any time on that.

I don't know what makes you think that IPv6 is more work or uses more of your time than IPv4, but I will also not try to convince you.

The reality is that Chromecast has a pretty unique combination of two things: requiring a valid cert and hardcoded DNS. From a network perspective that leaves you with 3 options:

  1. Block 8.8.8.8 and 8.8.4.4, hope the Chromecast falls back to the DNS server it got from DHCP, and add an unbound override. Hope that Chromecast never updates to use DoT or DoH in the future.
  2. Redirect local DNS queries with NAT redirect. That way Chromecast thinks that it asked 8.8.8.8 while in reality it asked your local DNS. Hope that Chromecast never updates to use DoT or DoH in the future.
  3. Use IPv6
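Option 2 could be sketched on a Linux router like this ("br-lan" is an assumed LAN interface name; pfSense exposes the same idea as a port-forward rule in the GUI):

```shell
# Sketch only: transparently redirect every outbound DNS query from the LAN
# to the router's own resolver, so a client that insists on 8.8.8.8 still
# gets answers from the local DNS server without knowing it.
iptables -t nat -A PREROUTING -i br-lan -p udp --dport 53 -j REDIRECT --to-ports 53
iptables -t nat -A PREROUTING -i br-lan -p tcp --dport 53 -j REDIRECT --to-ports 53
```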

Again, this is only speaking from a network perspective. This is self-hosting basics, and everyone who ever tried to host Nextcloud or any other service has faced that problem. The only difference is the missing hardcoded DNS, so for a Nextcloud instance you only need to create an IPv4 override in unbound. I hope that helps you understand why your statement

(which by the way I'm the only person on the planet to actually confirm this in my k8s/related scenario)

comes off as a little bit silly.

BloodyIron commented 10 months ago

  1. What NAT are you talking about here?
  2. NAT redirect has no relevancy to local resolution because the traffic does not require going through a firewall at the gateway to the internet point, which is where the DNS block (redirect to 127.0.0.1) happens. I don't know why you are so obsessed with NAT being relevant here for the internet gateway, because it is not relevant at all in the solution and scenario of my case. The internet gateway achieves the DNS block with a firewall rule, which 100% does not involve any NAT at all. It intercepts the traffic and redirects it to 127.0.0.1, and the DNS nameserver local to the internet gateway does the resolution. At no point is NAT involved.
  3. IPv6 means I have to completely re-evaluate and overhaul my entire LAN, which has a lot of moving parts, for a theory that it somehow gives me more of a functional solution than I have today. Your preference may be to go with IPv6, and that's fine. But if you still somehow don't see the time-cost and effort involved with IPv6, well I'm not going to explain it yet again.

comes off as a little bit silly.

Why? I searched exhaustively repeatedly this entire year for solutions, and the very reason I posted the full explanation of the solution is so that others could benefit.

Speaking of comes off, you're coming off as a troll. You're not actually listening to the explanation of how the solution I use works, and keep being an unwarranted zealot for NAT capabilities and IPv6 for my already-proven working solution. The motivations for such I really don't see.

Anyways I'm done explaining this to you even further. It sure sounds like you didn't even read the full explanation of the solution I linked above. And if you have not read it, go read it, because I intentionally fleshed it out fully so there would be no ambiguity.

jameskimmel commented 10 months ago

I am not saying using IPv6 is the only option. I am not saying that your solution does not work. I am saying that your solution is a workaround. I am saying that IPv6 is easier to implement. I am saying that most home users can't apply your workaround, because most routers will not support it.

I think this could be a struggle for a lot of new users. We could write a page with both solutions, because right now the documentation is very sparse on that topic.

JeffFaer commented 3 months ago

Well that's just disheartening. I know not everyone uses casting, but it's a pretty crucial feature compared to the competition. It just sucks that, at the very least, there isn't more thorough or clear documentation on how exactly a reverse proxy should work to have both local and external support for casting without having to hack something together, or having to switch between 192.168.x.x when on the couch and jellyfin.mydomain.com when on the road :|

I believe at one point in time, PublishedServerURL was used to signal the URI value external services should reference (for situations where the server needs to identify where content should be found, like when accessing via HTTPS through a proxy from outside the Docker network). Not sure on the current state, but I believe that was the intention...

That one value was driving the following:

  • Local Network AutoDiscovery
  • DLNA URIs
  • Chromecast via jf-web

I found this comment from @anthonylavado which mentioned that setting as well. I've done some sleuthing and managed to get something working:

  1. Enable HTTPS for your server so that the web client will be able to use the cast API.
  2. Add an entry to Published Server URIs in the Networking settings to point to the HTTP IP address of your server: all=http://192.168.4.175:8086/

    Valid values for the left side of the = seem to be all, internal, external, interface names, or CIDR notation. I tried internal and it didn't work as I expected. I haven't dug in to figure out why not, yet.

  3. You can confirm that this setting has taken effect by visiting the /System/Info/Public endpoint of your server and checking the value it reports for LocalAddress.
  4. The web client will tell your Chromecast to use that LocalAddress if Jellyfin is being accessed from a localhost address. You can modify the JavaScript in your web client to use LocalAddress unconditionally, and then your Chromecast will be able to access the media without HTTPS or DNS issues.

    It looks like that line was originally added exclusively for localhost support in https://github.com/jellyfin/jellyfin-web/pull/4746, but I suspect it's probably fine to just always use LocalAddress. It seems the Local part of that name is a bit of a misnomer. It's basically always the URL that was used to access Jellyfin, unless the requester is eligible for one of the Published Server URI overrides. I'll create a PR to see if I can make this change for everyone.
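Step 3 can be checked from a shell. The host and port below are the hypothetical values from the steps, and the JSON is a mocked response of the shape that endpoint returns, not real server output:

```shell
# Real check (against your own server):
#   curl -s http://192.168.4.175:8086/System/Info/Public
# Mocked response for illustration:
json='{"LocalAddress":"http://192.168.4.175:8086","ServerName":"media","Version":"10.9.2"}'
# Pull out LocalAddress with POSIX sed (no jq required):
printf '%s\n' "$json" | sed -n 's/.*"LocalAddress":"\([^"]*\)".*/\1/p'
# → http://192.168.4.175:8086
```

If LocalAddress still shows the autodetected address, the Published Server URIs entry has not taken effect.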

ginger-tek commented 3 months ago

Well that's just disheartening. [...] I'll create a PR to see if I can make this change for everyone

What the.... THAT WORKED?! Thank you so much @JeffFaer!!!

I just added all=https://jellyfin.mydomain.com/ to the Published URLs setting, and now casting works no problem. I know that's going to have some latency from going out and back again, but I can add an internal mapping on my router to point local requests for my proxy domain at my internal IP to improve that.
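The internal mapping mentioned here is just split-horizon DNS. On a router running dnsmasq it could be sketched like this (the domain and LAN IP are placeholders):

```shell
# Sketch only: answer jellyfin.mydomain.com with the LAN IP for local
# clients, while public DNS keeps pointing at the WAN IP.
echo 'address=/jellyfin.mydomain.com/192.168.1.10' >> /etc/dnsmasq.conf
systemctl restart dnsmasq
```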

Why the hell is there no clear documentation on this?? I've been struggling with casting from Jellyfin for 2 YEARS!!

jameskimmel commented 3 months ago

Why the hell is there no clear documentation on this?? I've been struggling with casting from Jellyfin for 2 YEARS!!

Last time I checked a year ago, it was broken. Even with valid certs, even with hairpin NAT, even with working dual stack.

In my opinion, Jellyfin should label Chromecast as currently broken in the docs, or have it listed under "Clients" as beta. With the current situation, too many people are getting their hopes up and wasting time.

ginger-tek commented 3 months ago

Last time I checked a year ago, it was broken. [...] In my opinion, Jellyfin should label Chromecast as currently broken in the docs.

It does seem to actually work using the all={reverse proxy domain} with a standard reverse proxy without any extra network/router configurations.

I wouldn't say it's broken broken, but I agree that it should be labeled "Beta" at the very least. There are definitely too many untested scenarios to call it stable.

But at least for me, running a Windows server, using Caddy as a standard reverse proxy, and a Namecheap domain with a subdomain for Jellyfin, setting the all={domain} bit has permanently resolved my inability to cast. I had tried setting the Published URLs field before, but had no idea it had a syntax to it!

I will continue testing for reliability, but that's another issue unrelated to networking.

jameskimmel commented 3 months ago

It does seem to actually work using the all={reverse proxy domain} with a standard reverse proxy without any extra network/router configurations.

So with the current config we don't need hairpin NAT because the client will get the LocalAddress hint from the server? So I assume no blocking of the hardcoded Google DNS on the Chromecast is needed? No need for a valid cert? Does it work from devices like an iPhone with Jellyfin Mobile, or from a Windows laptop with Jellyfin Media Player?

This is not to mock the Chromecast status; I am seriously asking. Because again, a year ago, even from a remote location, with a valid cert, with an NGINX proxy, I was unable to cast from a Windows laptop or an iPhone. Really wondering if there has been some progress.

ginger-tek commented 3 months ago

So with the current config we don't need hairpin NAT because the client will get the LocalAddress hint from the server? [...]

YMMV, but in terms of making the request to the Chromecast API to cast media from Jellyfin over a reverse proxy, yes. Granted, hairpin NAT would still help reduce the number of hops, but it works in the end without it. And no Google DNS hacks required.

Still have some quirks and bugs with the casting experience, but this is a lot better than no casting at all lol