mrlt8 / docker-wyze-bridge

WebRTC/RTSP/RTMP/LL-HLS bridge for Wyze cams in a docker container
GNU Affero General Public License v3.0
2.7k stars · 176 forks

Camera Stream Stops #1037

Open hoveeman opened 1 year ago

hoveeman commented 1 year ago

Docker container v2.5.0

All cameras in my configuration run fine for a while, and then a camera will no longer be streaming after 12+ hours. This has occurred twice since upgrading to 2.5.0. The cameras are still connected to Wi-Fi and are live streaming in the Wyze app. Restarting the container brings them back online.

I didn't have debug logging enabled and only had the following info in my log from when it stopped. I've changed logging to debug and will share those logs when the cameras go offline again.

[garage] WARNING: Audio pipe closed
[garage] [CONTROL] ERROR - error=TutkError(-20018), cmd=('param_info', '1,2,5,6,7,21,22,27,50')
[garage] [CONTROL] ERROR - error=TutkError(-20018), cmd='_bitrate'
[garage] Stream did not receive a frame for over 15s
mrlt8 commented 1 year ago

Can you see if setting ON_DEMAND=False keeps the stream alive?
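For anyone following along, that variable goes in the bridge's environment. A minimal docker-compose.yml sketch (service name, image tag, and the credential placeholders are assumptions, not taken from this thread):

```yaml
services:
  wyze-bridge:
    image: mrlt8/wyze-bridge:latest
    environment:
      - ON_DEMAND=False          # keep camera connections open instead of connecting per client
      - WYZE_EMAIL=you@example.com   # placeholder credentials
      - WYZE_PASSWORD=changeme
```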

hoveeman commented 1 year ago

I added that variable. Streams still stop occasionally, but they do seem to start back up.

When I noticed the issue, I would be in Home Assistant viewing my streams and 1 or 2 cameras wouldn't load. I'd then check the web UI for Wyze Bridge and the camera wouldn't be streaming; it would just have the pause icon next to it. The stream wouldn't come back unless I restarted the container or clicked the pause icon in the web UI.

I'll keep an eye on the cameras now that I have ON_DEMAND set to False.

Here were the logs.

2023/11/06 14:18:54 [DEBUG][albus] [CONTROL] Attempting to GET: param_info
2023/11/06 14:18:54 [DEBUG][albus] Now listening on channel id 0
2023/11/06 14:18:54 [DEBUG][albus] SEND <K10020CheckCameraParams code=10020 resp_code=10021> <TutkWyzeProtocolHeader prefix=b'HL' protocol=5 code=10020 txt_len=12> b'\x0b\x01\x02\x03\x04\x05\x06\x07\x15\x16\x1b2'
2023/11/06 14:18:54 [DEBUG][albus] RECV <TutkWyzeProtocolHeader prefix=b'HL' protocol=32 code=10021 txt_len=75>: b'{"1":2,"2":3,"3":180,"4":2,"5":15,"6":1,"7":1,"21":1,"22":-5,"27":2,"50":1}'
2023/11/06 14:18:54 [DEBUG][albus] No longer listening on channel id 0
2023/11/06 14:18:54 [DEBUG][albus] [CONTROL] response={'1': 2, '2': 3, '3': 180, '4': 2, '5': 15, '6': 1, '7': 1, '21': 1, '22': -5, '27': 2, '50': 1}
2023/11/06 14:19:09 [DEBUG][albus] [CONTROL] Attempting to GET: param_info
2023/11/06 14:19:09 [DEBUG][albus] Now listening on channel id 0
2023/11/06 14:19:09 [DEBUG][albus] SEND <K10020CheckCameraParams code=10020 resp_code=10021> <TutkWyzeProtocolHeader prefix=b'HL' protocol=5 code=10020 txt_len=12> b'\x0b\x01\x02\x03\x04\x05\x06\x07\x15\x16\x1b2'
2023/11/06 14:19:10 [DEBUG][albus] RECV <TutkWyzeProtocolHeader prefix=b'HL' protocol=32 code=10021 txt_len=75>: b'{"1":2,"2":3,"3":180,"4":2,"5":15,"6":1,"7":1,"21":1,"22":-5,"27":2,"50":1}'
2023/11/06 14:19:11 [DEBUG][albus] No longer listening on channel id 0
2023/11/06 14:19:11 [DEBUG][albus] [CONTROL] response={'1': 2, '2': 3, '3': 180, '4': 2, '5': 15, '6': 1, '7': 1, '21': 1, '22': -5, '27': 2, '50': 1}
2023/11/06 14:19:26 [DEBUG][albus] [CONTROL] Attempting to GET: param_info
2023/11/06 14:19:26 [DEBUG][albus] Now listening on channel id 0
2023/11/06 14:19:26 [DEBUG][albus] SEND <K10020CheckCameraParams code=10020 resp_code=10021> <TutkWyzeProtocolHeader prefix=b'HL' protocol=5 code=10020 txt_len=12> b'\x0b\x01\x02\x03\x04\x05\x06\x07\x15\x16\x1b2'
2023/11/06 14:19:26 [DEBUG][albus] RECV <TutkWyzeProtocolHeader prefix=b'HL' protocol=32 code=10021 txt_len=75>: b'{"1":2,"2":3,"3":180,"4":2,"5":15,"6":1,"7":1,"21":1,"22":-5,"27":2,"50":1}'
2023/11/06 14:19:26 [DEBUG][albus] No longer listening on channel id 0
2023/11/06 14:19:26 [DEBUG][albus] [CONTROL] response={'1': 2, '2': 3, '3': 180, '4': 2, '5': 15, '6': 1, '7': 1, '21': 1, '22': -5, '27': 2, '50': 1}
2023/11/06 14:19:41 [DEBUG][albus] [CONTROL] Attempting to GET: param_info
2023/11/06 14:19:41 [DEBUG][albus] Now listening on channel id 0
2023/11/06 14:19:41 [DEBUG][albus] SEND <K10020CheckCameraParams code=10020 resp_code=10021> <TutkWyzeProtocolHeader prefix=b'HL' protocol=5 code=10020 txt_len=12> b'\x0b\x01\x02\x03\x04\x05\x06\x07\x15\x16\x1b2'
2023/11/06 14:19:41 [DEBUG][doorbell] RECV <TutkWyzeProtocolHeader prefix=b'HL' protocol=36 code=10021 txt_len=61>: b'{"1":1,"2":3,"5":20,"6":1,"7":1,"21":1,"22":-5,"27":2,"50":1}'
2023/11/06 14:19:41 [DEBUG][albus] RECV <TutkWyzeProtocolHeader prefix=b'HL' protocol=32 code=10021 txt_len=75>: b'{"1":2,"2":3,"3":180,"4":2,"5":15,"6":1,"7":1,"21":1,"22":-5,"27":2,"50":1}'
2023/11/06 14:19:41 [DEBUG][albus] No longer listening on channel id 0
2023/11/06 14:19:41 [DEBUG][albus] [CONTROL] response={'1': 2, '2': 3, '3': 180, '4': 2, '5': 15, '6': 1, '7': 1, '21': 1, '22': -5, '27': 2, '50': 1}
2023/11/06 14:19:56 [DEBUG][albus] [CONTROL] Attempting to GET: param_info
2023/11/06 14:19:56 [DEBUG][albus] Now listening on channel id 0
2023/11/06 14:19:56 [DEBUG][albus] SEND <K10020CheckCameraParams code=10020 resp_code=10021> <TutkWyzeProtocolHeader prefix=b'HL' protocol=5 code=10020 txt_len=12> b'\x0b\x01\x02\x03\x04\x05\x06\x07\x15\x16\x1b2'
2023/11/06 14:19:59 [DEBUG][albus] RECV <TutkWyzeProtocolHeader prefix=b'HL' protocol=32 code=10021 txt_len=75>: b'{"1":2,"2":3,"3":180,"4":2,"5":15,"6":1,"7":1,"21":1,"22":-5,"27":2,"50":1}'
2023/11/06 14:19:59 [DEBUG][albus] No longer listening on channel id 0
2023/11/06 14:19:59 [DEBUG][albus] [CONTROL] response={'1': 2, '2': 3, '3': 180, '4': 2, '5': 15, '6': 1, '7': 1, '21': 1, '22': -5, '27': 2, '50': 1}
2023/11/06 14:20:02 [WARNING][albus] WARNING: Audio pipe closed
2023/11/06 14:20:03 [INFO][WyzeBridge] 📕 Client stopped reading from albus
2023/11/06 14:20:03 [WARNING][albus] Stream did not receive a frame for over 15s
2023/11/06 14:20:14 [INFO][WyzeBridge] ❌ '/albus' stream is down
stevland commented 1 year ago

I had this issue a while ago.

I added ON_DEMAND=False to my config and things have been running consistently for weeks.

For the past few days, every time I visit my dashboard the streams have stopped. I did install updates to both Supervisor and Core around the time I started noticing this, but who knows?

The only way to get them going again is to visit Docker Wyze Bridge > Controls > (camera) > Enable + Start.

stevland commented 1 year ago

Is it possible to Enable + Start the streams using an hourly automation?

mrlt8 commented 1 year ago

You should be able to via the API, but setting ON_DEMAND to false should be doing that already.

Potentially related to some underlying change in the updated MediaMTX v2.5.x (#1036).

hoveeman commented 1 year ago

I was still having the cameras stop streaming with ON_DEMAND=false. I have since deleted that variable and the cameras are still working 24 hours later. I have debug logging enabled and will be watching to see what happens.

stevland commented 1 year ago

I was still having the cameras stop streaming with ON_DEMAND=false. I have since deleted that variable and the cameras are still working 24 hours later. I have debug logging enabled and will be watching to see what happens.

That absolutely does not make sense but I tried it myself just now, and it seems to be a fix on this end, too.

None of my streams were working (unless I hit "Enable" and "Start" for each one).

But after removing ON_DEMAND=false from the YAML, all 3 of my streams are running with no other intervention required. It's only been 5 minutes, but fingers crossed. 🤞

c0f commented 1 year ago

I've had the same problem with ON_DEMAND enabled and disabled. I have four V3 Wyze Cams and one V2 Wyze Cam.

Cameras will randomly stop recording after several hours.

I've written a script to detect whether a camera has stopped recording and then the script restarts the docker container if any of my cameras haven't created a file in the last 11 minutes. I run the script via a cronjob every 9 minutes. A restart of the container always fixes the problem (temporarily).

I have a feeling this might be caused by poor Wi-Fi connectivity to one of my cameras. However, I've tried excluding cameras in the docker-compose.yml and I still have problems with recordings stopping.

I've had debug enabled for a while and my script moves the old log file so I have a snapshot of the log right up to the point where the container is restarted. I can't see anything obvious in the log file or a pattern that would explain why the recording stops.

I've attached an example debug file that shows a 40-minute run from restart to restart. debug.log.drive.stopped.2023-11-08_193601.log

hoveeman commented 1 year ago

I was still having the cameras stop streaming with ON_DEMAND=false. I have since deleted that variable and the cameras are still working 24 hours later. I have debug logging enabled and will be watching to see what happens.

Still going without any streams stopping after removing that variable.

AngusNB commented 1 year ago

I was still having the cameras stop streaming with ON_DEMAND=false. I have since deleted that variable and the cameras are still working 24 hours later. I have debug logging enabled and will be watching to see what happens.

I have done the same and it seems to have fixed the issue. I use Docker Wyze Bridge in Home Assistant running on a Raspberry Pi 3.

stevland commented 1 year ago

Hey @mrlt8, thanks for the quick update (2.5.1) to address this issue.

AngusNB commented 1 year ago

Hey @mrlt8, thanks for the quick update (2.5.1) to address this issue.

I was using 2.5.1 when I noticed the issue.

Removing ON_DEMAND fixed it for me.

mrlt8 commented 1 year ago

Hmm it should be fixed in 2.5.1. Can you try rebuilding the container?

AngusNB commented 1 year ago

I see that removing ON_DEMAND is the same as setting ON_DEMAND=true. This works better for me than ON_DEMAND=false.

I am using the RTSP feeds in tinyCam. If I set ON_DEMAND=false, two of my cameras stop responding.

jdeath commented 11 months ago

Edit: Until this is fixed, I am brute-forcing it: I have Home Assistant send a REST command to all cameras every minute to start them. It looks like it works.

Adding to this. I have a few cameras and they have been hanging a lot more often. Many times a hung camera can be brought back with a REST call: http://192.168.1.XX:5000/api/basement-hall/state/start

My logs are below. I'm running 2.6.0 on Home Assistant with ON_DEMAND=true (I tried false and also have problems). All cameras go to Frigate, so the streams should never be brought down. Is there any way to have a stream restart automatically after a TUTK error? Sometimes it comes back on its own, sometimes not. I plan to try a Home Assistant automation to reset them. REST API calls work for me; MQTT does not.

Below you can see the errors, then me running the restart REST calls to bring the cameras back.

I also found my V3 Pan needs its quality set to HD120 or it will complain about the bitrate.

[basement-hall] [CONTROL] ERROR - error=TutkError(-20018), cmd=('param_info', '1,2,3,4,5,6,7,21,22,27,50')
[basement-hall] Stream did not receive a frame for over 15s
[WyzeBridge] ❌ '/basement-hall' stream is down
[WyzeBridge] 📕 Client stopped reading from basement-hall
[WyzeBridge] 📕 Client stopped reading from backyard
[WyzeBridge] 📖 New client reading from backyard
[WyzeBridge] 📕 Client stopped reading from side-door
[WyzeBridge] 📖 New client reading from side-door
Requesting frame_size=0, bitrate=180, fps=0
[WyzeBridge] 📕 Client stopped reading from side-door
[side-door] [CONTROL] ERROR - error=TutkError(-20018), cmd=('param_info', '1,2,3,4,5,6,7,21,22,27,50')
[side-door] Stream did not receive a frame for over 15s
[WyzeBridge] ❌ '/side-door' stream is down
[WyzeBridge] 📕 Client stopped reading from backyard
[WyzeBridge] 📖 New client reading from backyard
[WyzeBridge] 📕 Client stopped reading from front-door
[WyzeBridge] 📕 Client stopped reading from backyard
[backyard] [CONTROL] ERROR - error=TutkError(-20018), cmd=('param_info', '1,2,5,6,7,21,22,27,50')
[backyard] [CONTROL] ERROR - error=TutkError(-20019), cmd='_bitrate'
[backyard] Stream did not receive a frame for over 15s
[WyzeBridge] ❌ '/backyard' stream is down
[WyzeBridge] 📖 New client reading from front-door
[WyzeBridge] [CONTROL] SET basement-hall state=start
[WyzeBridge] 🎉 Connecting to WyzeCam Pan V3 - Basement Hall on 192.168.1.50
[WyzeBridge] 192.168.1.37 - - [29/Dec/2023 16:46:42] "GET /api/basement-hall/state/start HTTP/1.1" 200 -
[basement-hall] 📡 Getting 120kb/s HD stream (H264/20fps) via LAN mode (WiFi: 59%) FW: 4.50.4.7252 🔒 (DTLS) (2/3)
[basement-hall] WARNING: Skipping smaller frame at start of stream (frame_size=1)
[WyzeBridge] ✅ '/basement-hall stream is UP! (3/3)
[WyzeBridge] 📖 New client reading from basement-hall
[WyzeBridge] [CONTROL] GET backyard state
[WyzeBridge] 192.168.1.37 - - [29/Dec/2023 16:47:20] "GET /api/backyard/state HTTP/1.1" 200 -
[WyzeBridge] [CONTROL] SET backyard state=start
[WyzeBridge] 🎉 Connecting to WyzeCam V3 - Backyard on 192.168.1.53
[WyzeBridge] 192.168.1.37 - - [29/Dec/2023 16:47:31] "GET /api/backyard/state/start HTTP/1.1" 200 -
[backyard] 📡 Getting 180kb/s HD stream (H264/20fps) via LAN mode (WiFi: 45%) FW: 4.36.11.7095 🔒 (DTLS) (2/3)
[backyard] WARNING: Skipping smaller frame at start of stream (frame_size=1)
[WyzeBridge] 📖 New client reading from backyard
[WyzeBridge] ✅ '/backyard stream is UP! (3/3)
[WyzeBridge] 📕 Client stopped reading from backyard
[WyzeBridge] 📖 New client reading from backyard
[WyzeBridge] [CONTROL] SET side-door state=start
[WyzeBridge] 🎉 Connecting to WyzeCam Doorbell - Side Door on 192.168.1.165
[WyzeBridge] 192.168.1.37 - - [29/Dec/2023 16:49:00] "GET /api/side-door/state/start HTTP/1.1" 200 -
[side-door] 📡 Getting 180kb/s HD stream (H264/20fps) via LAN mode (WiFi: 95%) FW: 4.25.1.316 🔒 (DTLS) (2/3)
[side-door] Re-encoding using libx264 [transpose='clock']
[side-door] WARNING: Skipping smaller frame at start of stream (frame_size=4)
[WyzeBridge] ✅ '/side-door stream is UP! (3/3)
[WyzeBridge] 📖 New client reading from side-door
[WyzeBridge] 📕 Client stopped reading from side-door
[WyzeBridge] 📖 New client reading from side-door

Scope666 commented 10 months ago

Just started having a similar problem. Until recently these cameras never dropped at all; now they constantly drop and reconnect, and eventually fall behind time-wise. I'm thinking Wyze changed something on their end.

AngusNB commented 10 months ago

I noticed yesterday, one of my cameras was 20 minutes behind.

I saw a car pull in my driveway. Then I saw me getting out. 😂