Closed TheFuzz4 closed 3 years ago
Are you running frigate behind a reverse proxy? The new live view uses websockets. Try inspecting the page in the browser to see what errors are in the console.
Also, I would definitely remove your output_args. The defaults changed in 0.9.0, and you are overriding them.
I am kind of running it behind a reverse proxy. I'm using the tunnel from Cloudflare. I do it this way because I can expose it to the internet, but with Cloudflare Teams I can lock down who has access to it.
Removed the output_args from the config.
Ok hit it directly via IP and the streams do load that way. Now to figure out how to get a websocket to pass through the cloudflare tunnel. I'll let you know what I find for others that might decide to try out the tunnel.
I lied, I haven't solved this yet, but I am making headway. Will update once I have the correct configurations.
@mitchross this sounds similar to your issue
Yes, very similar. @TheFuzz4 https://github.com/blakeblackshear/frigate/issues/1527
The only difference I have is that I'm hosting on Unraid -> nginx reverse proxy manager (docker) -> Cloudflare tunnel.
This issue is for sure websockets, but I can't pinpoint the cause. It's definitely related to https://github.com/blakeblackshear/frigate/blob/release-0.9.0/web/src/components/JSMpegPlayer.jsx
The actual JSMpegPlayer library is failing to connect to the WS and is going into an infinite retry loop.
I've debugged everything. Here are some of the things I've tried:
Web (internet) -> Unraid direct IP:port (works)... aka just port-forwarding the Unraid docker hosting Frigate. This is expected to work; it's our control variable.
Web (internet) -> nginx reverse proxy manager -> Frigate Unraid docker instance... both HTTPS and no HTTPS, bypassing Cloudflare (fails)
Web (internet) -> Cloudflare -> Frigate Unraid docker instance (bypassing nginx reverse proxy manager) (fails)
@TheFuzz4 The craziest thing I've found so far to work is to go to the live stream camera page, restart docker, paste the live stream URL into the browser (i.e. a new tab), and it will work until you navigate away. It's not a solution, just an observation.
@mitchross yeah, I think you and I are pretty much in the same boat. I just don't have Unraid (I'm an old-school FreeNAS/TrueNAS guy and I don't want to make the switch).
My setup is a dedicated VM running Ubuntu with the cloudflared daemon on it.
What works for me is smacking the host directly and all of that works great.
I could set up a route through HAProxy and see if that works, bypassing cloudflared, but ultimately I'd like to figure out how to get it to work with cloudflared because I like the "no firewall holes needed" approach.
I stayed up late last night trying every scenario under the sun with different paths in the ingress for cloudflared:
- /live/
- /ws/
If there are other paths that are needed for the WS to make the connection I'll gladly add them to my ingress rule in the cloudflared config.
I even went so far as to put a catch-all rule in the config so that anything not matching the http service in the ingress would default over to the ws protocol.
Here is my current config:

```yaml
ingress:
  - hostname: frigate.hostname.com
    service: http://localhost:5000
  - service: ws://localhost:5000
```
I'm not going to give up on this one. I'm looking through the jsx now to see what I can find in there. I'm no JavaScript developer, just a cloud infra engineer who works in AKS all day :). But I can typically read code and get the gist of what it's attempting to do.
Update:

```js
const url = `${baseUrl.replace(/^http/, 'ws')}/live/${camera}`
```

Not sure if it matters or not, but for those of us using Cloudflare, our traffic is all sent over HTTPS. I'll have to plug in HAProxy and have CF route to me that way to see if it makes a difference. I doubt it would, but since this is doing a replace on http in the baseUrl, in our case it replaces https with wss. It shouldn't make a difference, I wouldn't imagine, since we're running just straight ws:// in the container. I dunno, just a thought.
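For what it's worth, the protocol swap can be sanity-checked in isolation. A quick sketch (the `baseUrl` and `camera` values are made-up placeholders, not from an actual Frigate install):

```javascript
// Mirrors the swap in JSMpegPlayer.jsx: only the leading "http" is
// replaced, so "https" naturally becomes "wss" (and "http" becomes "ws").
const baseUrl = 'https://frigate.example.com'; // placeholder domain
const camera = 'front';
const url = `${baseUrl.replace(/^http/, 'ws')}/live/${camera}`;
console.log(url); // wss://frigate.example.com/live/front
```

This also shows why swapping the replacement to `'wss'` backfires: the trailing `s` from `https` is kept, producing `wsss://`.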
@TheFuzz4 Yeah, I've hit a pure VM with no Unraid too. Unraid is irrelevant in this scenario; I just threw it out there for extra context.
For what it's worth, I custom-compiled a version with `const url = ${baseUrl.replace(/^http/, 'wss')}/live/${camera}` for the WebSocket Secure method. It doesn't make a difference: the nginx reverse proxy and/or cloudflared already upgrade the protocol and add the extra 's' automatically, so you end up with `wsss` if you try to change the regex.
The issue is in the core of the jsmpeg player, I'm pretty sure of it. https://github.com/cycjimmy/react-jsmpeg-player-demo/issues/7
"JSMpeg comes with a tiny WebSocket "relay", written in Node.js. This server accepts an MPEG-TS source over HTTP and serves it via WebSocket to all connecting Browsers. The incoming HTTP stream can be generated using ffmpeg, gstreamer or by other means." - https://github.com/phoboslab/jsmpeg#streaming-via-websockets
Websocket Relay code -> https://github.com/phoboslab/jsmpeg/blob/master/websocket-relay.js
This may be of interest -> https://github.com/kyriesent/node-rtsp-stream/issues/37 but the project assumes you can use your cert/pem in your code, which we probably can't, since it's generated on the fly with Cloudflare.
The only way I could see around this, while probably insecure-ish (though I don't care at this point), would be to find a way around upgrading the websocket over SSL (i.e. WSS) via Cloudflare or nginx or whatever.
Or maybe some sort of relay-of-the-relay hack: https://stackoverflow.com/questions/45802281/is-it-possible-to-relay-a-websocket-through-nginx-over-tls
@blakeblackshear if you have any ideas, feel free to chime in. I'm 100% sure now that the issue is that the jsmpeg player doesn't support WSS, or if it does, cloudflared/nginx (pick your favorite SSL/Let's Encrypt toolkit) is jacking with the connection.
LOL, free job if you can solve this - https://www.upwork.com/freelance-jobs/apply/Configure-jsmpeg-display-video-stream-from-WSS-HTTPS-page_~01c75fa685d8f4f048/
I am using mine with https/wss just fine. Also, I am not using the node rtsp relay from the jsmpeg project.
Ok, interesting. How are you using HTTPS? Let's Encrypt? Also, are you accessing over a (sub)domain?
Let's Encrypt with Traefik. I use a subdomain dedicated to Frigate: https://frigate.blakeshome.com
Hmm so this might just be with Cloudflare doing its reverse proxy thing?
I don't know. I'm completely stumped.
@TheFuzz4
I made a simple Node.js websocket server, and using Cloudflare + nginx reverse proxy everything is fine over SSL/WSS. So I don't think the issue is 100% Cloudflare's fault, but it's related. I'm still trying to figure out what the heck is going on.
@TheFuzz4 I've given up hope. I ended up reverting this change https://github.com/blakeblackshear/frigate/commit/861ee0485d47d9a1441c9d825a39bdcbff4edd07 and built my own docker image and I'm just gonna use that.
@mitchross I wonder if you can get your image to be in the repo for Frigate. Sorry I haven't had a chance to test out some things this week been slammed with work things. I've got a few cycles today though I can throw to you, to help test and what not.
thellamafarm#7609 on discord if you want to chat in more real time.
I'm moving this comment over here. I upgraded to the beta and have received the following in my logs.
```
  self.result = application(self.environ, self.start_response)
File "/usr/local/lib/python3.8/dist-packages/ws4py/server/wsgiutils.py", line 101, in __call__
  raise HandshakeError('Header %s is not defined' % key)
ws4py.exc.HandshakeError: Header HTTP_UPGRADE is not defined
```
When I open the UI viewer and click the live view on my camera, this is in my logs. When I click debug, everything is working as expected. I'm running in docker, on a pi4 with NGINX Proxy Manager on my network.
@cdn4lf do you see logs like this? This is what I see.
```
0.0.1:5002 | Remote => 127.0.0.1:42792]
[2021-09-02 17:22:49] ws4py INFO : Managing websocket [Local => 127.0.0.1:8082 | Remote => 127.0.0.1:49006]
[2021-09-02 17:22:50] ws4py INFO : Terminating websocket [Local => 127.0.0.1:8082 | Remote => 127.0.0.1:49006]
[2021-09-02 17:22:52] ws4py INFO : Terminating websocket [Local => 127.0.0.1:5002 | Remote => 127.0.0.1:42792]
[2021-09-02 17:22:56] ws4py INFO : Managing websocket [Local => 127.0.0.1:8082 | Remote => 127.0.0.1:49008]
[2021-09-02 17:22:56] ws4py INFO : Terminating websocket [Local => 127.0.0.1:8082 | Remote => 127.0.0.1:49008]
[2021-09-02 17:23:01] ws4py INFO : Managing websocket [Local => 127.0.0.1:8082 | Remote => 127.0.0.1:49010]
[2021-09-02 17:23:02] ws4py INFO : Terminating websocket [Local => 127.0.0.1:8082 | Remote => 127.0.0.1:49010]
[2021-09-02 17:23:07] ws4py INFO : Managing websocket [Local => 127.0.0.1:8082 | Remote => 127.0.0.1:49012]
[2021-09-02 17:23:08] ws4py INFO : Terminating websocket [Local => 127.0.0.1:8082 | Remote => 127.0.0.1:49012]
[2021-09-02 17:23:13] ws4py INFO : Managing websocket [Local => 127.0.0.1:8082 | Remote => 127.0.0.1:49014]
[2021-09-02 17:23:15] ws4py INFO : Terminating websocket [Local => 127.0.0.1:8082 | Remote => 127.0.0.1:49014]
[2021-09-02 17:23:20] ws4py INFO : Managing websocket [Local => 127.0.0.1:8082 | Remote => 127.0.0.1:49016]
[2021-09-02 17:23:20] ws4py INFO : Terminating websocket [Local => 127.0.0.1:8082 | Remote => 127.0.0.1:49016]
```
I haven't yet. What did you do immediately before seeing that? I can try to recreate it myself.
@cdn4lf I saw that error when developing the frigate proxy addon. See here for an nginx config that I know works: https://github.com/blakeblackshear/frigate-hass-addons/tree/main/frigate_proxy/rootfs/etc/nginx
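For readers hitting the same `HTTP_UPGRADE is not defined` error: the relevant part of an nginx reverse-proxy config generally comes down to forwarding the HTTP/1.1 upgrade headers for the websocket paths. A minimal sketch (the upstream name `frigate` and the paths are illustrative assumptions, not taken from the linked addon config):

```nginx
# Plain HTTP/API traffic
location / {
    proxy_pass http://frigate:5000;
    proxy_set_header Host $host;
}

# WebSocket endpoints (live view) need the Upgrade/Connection headers;
# without them the proxy speaks plain HTTP and ws4py raises
# "Header HTTP_UPGRADE is not defined"
location /live/ {
    proxy_pass http://frigate:5000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
}
location /ws {
    proxy_pass http://frigate:5000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
}
```

In NGINX Proxy Manager, the "Websockets Support" toggle adds equivalent headers for you.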
I'm using NPM, so I turned on websocket support and that seems to have cleared the NGINX issues. Only comment now is the live stream seems to take about 10 seconds to come up.
How are you doing https? Try your setup with cloudflare and you will run into these issues.
HTTPS with Let's Encrypt. No change regardless of HTTP vs HTTPS. There is still a delay in calling up the live feed, anywhere from half a second to 10+.
Edit to add: this also happens when I view via the local IP address/port, so it isn't an HTTPS issue.
Are you port forwarding on your router? Are you using a custom domain too?
The issue I have is between DNS -> Cloudflare -> (Argo tunnel / DNS) -> nginx -> Frigate.
Nope, nothing regarding this on the router. Using the local network I'm going to my 192.168.x.x:15000 (I changed the port). When there it opens the UI. Direct connection, nothing fancy.
On HA I have it mapped using iframe and a subdomain. That's going HA out, ddns server (external), nginx/let's encrypt, frigate
How do you access externally if you don't port forward? Afaik you can only avoid port forwarding by using a tunnel of some sort...
Assuming DDNS like Duck DNS, are you doing something like frigate.duckdns.org?
Just curious how you go from the internet to HA without port forwarding..
I'm using NGINX, so everything is inbound to the router on port 80/443; NGINX does the distribution from there.
@blakeblackshear Where in the python code can I put a breakpoint for debugging the jsmpeg player web socket connection?
output.py
I'm trying to figure out where the "Terminating websocket" log comes from. I've got breakpoints working, so I'm trying to compare localhost vs Cloudflare to get a better picture of what's going on.
It's almost certainly buried deep in a dependency.
Locally, I get this every time.
On the Cloudflare'd one, I get this first,
followed by (notice the server terminated "true" flag).
This "2-step" process repeats over and over.
It's probably some issue with the handshake. If I had to guess, it's an obscure bug in the ws4py library. The plan is to move to fastapi eventually, but that will be a significant effort.
It's probably not the most ideal, but it would be cool if you had a settings option for "use legacy live stream player" to fall back to the non-jsmpeg player.
Right now I just build my own image reverting the jsmpeg player. The only downside is I have to keep up with changes on the v9 branch and rebuild my docker image. Not the end of the world tho...
Once the final release is out, it won't be much to keep up with.
Great point. I'm good with that.
@blakeblackshear One last idea... Is there a way I can avoid proxying the websockets through SSL?
It looks like the websocket runs on 8082... I wonder if I can expose that in the compose file and do some sort of custom path in nginx with SSL off for just the /live/ paths?
You can override the nginx config however you want. I'm not sure how you have implemented auth, but having everything on a single domain/port makes standard cookie based auth easy. There are other things I'm probably not thinking about. The browser may not like unsecured websocket connections from a https site.
@TheFuzz4 I got some new info
For fun, I decided to set up Hass OS and add the Frigate NVR proxy add-on. I then used my normal setup of nginx reverse proxy manager + Cloudflare.
I can see the video streams in the Frigate proxy add-on in Hass OS over SSL.
I haven't quite figured out a solution for the normal UI route, but this is a bit promising.
@TheFuzz4
I did some more experimentation. I'm using the SWAG container and removed Cloudflare from the picture. The base docker v9 works fine over HTTPS. I'm convinced it's an issue with Cloudflare certs now.
@mitchross thanks for the updates on this. So do you think it's with the public certs? Interesting that there would be an issue with those.
@TheFuzz4
I tried to use SWAG + a Let's Encrypt cert and then point my GoDaddy DNS at Cloudflare so I at least had Cloudflare protection, and it still broke. So I can't seem to pinpoint what the issue is in Cloudflare.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
@mitchross Did you ever get further on this? Have been following your troubleshooting steps and trying to pin point things myself as well. I've got the majority of my services exposed via CF Tunnels & Teams.
I'm glad you asked, because I gave up, figuring it was impossible. Looks like it's finally resolved: https://github.com/cloudflare/cloudflared/issues/526
Thanks for the link, @mitchross... I'd missed this update from CF in my googling. Looks like swapping to QUIC has improved this and some other cases for me. Thanks!
Can you share your Cloudflared config file?
Sure. I assume you're wanting to see how QUIC can be enabled to fix this issue. It is configured at the root of the cloudflared config file. See an example below (note that standard YAML spacing rules apply):
```yaml
tunnel: <Tunnel ID>
credentials-file: /home/<path to json credentials file>.json
protocol: quic

ingress:
  # HTTP services
  - hostname: example-service.<mydomain>.me
    service: http://x.x.x.x
  # SSH services
  - hostname: example-ssh.<mydomain>.me
    service: ssh://x.x.x.x
  # Catch-all rule, which just responds with 404 if traffic doesn't match any of the earlier rules
  - service: http_status:404
```
That did the trick, thank you!
Complete config file for Cloudflare's Tunnel Service package, cloudflared, for the next googler:
```yaml
tunnel: <Tunnel UUID>
credentials-file: /root/.cloudflared/<Tunnel UUID>.json
protocol: quic

ingress:
  - hostname: homeassistant.<domain>.com  # Note: remember to allow trusted_proxy in the HA config
    service: http://<IP ADDR>:8123
  # Path rules come before the general frigate rule, since cloudflared
  # uses the first matching ingress rule
  - hostname: frigate.<domain>.com
    path: /ws
    service: ws://<IP ADDR>:5000
  - hostname: frigate.<domain>.com
    path: /live
    service: ws://<IP ADDR>:5000
  - hostname: frigate.<domain>.com
    service: http://<IP ADDR>:5000
  - service: http_status:404
```
**Describe the bug**
When clicking on a camera, the live stream just shows a white or black box that then flashes.

**Version of frigate**
Output from /api/version

**Config file**
Include your full config file wrapped in triple back ticks.

**Frigate container logs**

**Screenshots**

**Computer Hardware**

**Camera Info:** Not camera specific; happens with all of my cameras

**Additional context**
Just updated to the RC-1 to test it out and check it out, and I like living on the edge