mrlt8 / docker-wyze-bridge

WebRTC/RTSP/RTMP/LL-HLS bridge for Wyze cams in a docker container
GNU Affero General Public License v3.0

Huge bandwidth hog #241

Open rpelletierrefocus opened 2 years ago

rpelletierrefocus commented 2 years ago

As soon as I start up this container, I start having issues accessing the internet, and other IP cameras (non-Wyze) start to intermittently go up and down. When I stop the container, everything goes back to normal. I don't see other people complaining about this issue, but this thing is completely consuming the bandwidth on my LAN and out through the WAN on my network. This is despite the fact that I have the cameras set to SD30.

Any ideas?

mrlt8 commented 2 years ago

Unfortunately, that is one of the cons of having a hub-less IoT device.

You can limit the bridge to accessing the cams over your LAN by setting net mode to LAN only: NET_MODE=LAN
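
In compose form, a minimal sketch would look something like this (credentials are placeholders):

  wyze-bridge:
    image: mrlt8/wyze-bridge:latest
    environment:
      - WYZE_EMAIL=# Replace with wyze email
      - WYZE_PASSWORD=# Replace with wyze password
      - NET_MODE=LAN  # LAN only; connections over P2P/relay will be dropped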

rpelletierrefocus commented 2 years ago

I do have it set to LAN mode, but that made no difference.

ballakers commented 2 years ago

As soon as I start up this container, I start having issues accessing the internet, and other IP cameras (non-Wyze) start to intermittently go up and down. When I stop the container, everything goes back to normal. I don't see other people complaining about this issue, but this thing is completely consuming the bandwidth on my LAN and out through the WAN on my network. This is despite the fact that I have the cameras set to SD30.

Any ideas?

How many cams do you have? I filtered down to one cam (from 7 or 8) and it seems to be stable now. Like you, all my WLED, ESPHome, and Tuya local devices were going unavailable constantly when I fired it up, and they returned to being happy after I shut it down. So far so good with filtering down cameras, but I'm still hesitant whether it's worth it, even if it slows things down a bit with one camera. Guess we shall see if the add-on stays or goes...

maxfield-allison commented 2 years ago

I've got a bit more information to add after doing some testing on my wired-backhaul UniFi AP system. I was using both this container and motioneye in Docker to test and figure out what I wanted to do for NVR and Home Assistant, and noted that the UniFi console shows about 40% channel utilization with 3 cams and the motioneye container.

[screenshot]

When I start this bridge, it's not bandwidth that skyrockets but channel utilization. This causes behavior on the network that would easily be misconstrued as bandwidth consumption but in reality seems to be incredibly high packet counts.

[screenshot]

Eventually the wifi experience for any access point with more than 2 cameras plummets and becomes virtually unusable on the 2.4GHz band. I haven't bothered to wireshark the traffic in both scenarios, but I hope this points someone in the right direction. I also don't mind running a few tests if need be, but free time is scarce these days.

mrlt8 commented 2 years ago

When you get a chance, could you compare the results to streaming multiple cameras in the app as a group? Would be interested to see if we might be able to tweak some stuff to at least be on par with the app.

maxfield-allison commented 2 years ago

This is after 24 hours of not running the bridge, with motioneye streaming all of my cameras and the Wyze app open to the group of cameras attached to the kitchen AP. For clarity, I have them all locked to this AP.

[screenshot]

It looks like a packet cap is going to be necessary to see why there's so much traffic with the bridge container. I'll run one when I have a few minutes; might be Monday before I can get to it.

I just checked again, and this is after a few minutes of streaming to the Wyze app as well as motioneye.

[screenshot]

A few minutes more

[screenshot]

Seems like the utilization spikes might be from other devices but the packet cap will tell us more.

mrlt8 commented 2 years ago

Hey @maxfield-allison, made some tweaks in the dev branch that could potentially help with channel utilization.

Would appreciate some feedback when you have a chance!

maxfield-allison commented 2 years ago

I'll pull the image in a bit and give it a spin.

maxfield-allison commented 2 years ago

[screenshot]

Still seeing the same behavior. This is also starting to look like the same behavior as #221 and #278. To reiterate, I'm also on the RTSP firmware. Anecdotally, I was seeing the same behavior on stock firmware before I swapped. Here's a capture of the logs from the container; ignore the test cam, it is in fact offline.

[screenshot]

MrKuenning commented 2 years ago

I also have been trying to solve this. I have about 12 Wyze cameras on my LAN, a mix of v2 and v3. I used to run a tinycam webcam server on an Android VM and feed them to HASS. I never had any issues with wifi load or congestion.

I did have issues with delay and VM stability, so I decided to flash the cameras with the RTSP firmware; I still had no issues with wifi but missed having still images.

So I decided to try motioneye. I added one camera and it worked flawlessly, with less delay and great features. However, after adding 5 or so cameras I noticed things on my wifi started dropping like flies: WLED, Nest speakers, etc. Looking at the Ubiquiti dashboard, I saw that all of those cameras were running full throttle, sending multiple Mbps over the wifi even when they weren't being viewed. At first I thought this was a result of having motion detection on, but after disabling everything it still hammered the wifi with heavy transmit. One thing I found bizarre was that turning off motioneye didn't stop the streams; I had to reboot the AP or the cameras to get them back to normal. I don't know if it's because I was using UDP, but it was like motioneye just asked them all to blindly transmit as much as they could and then they turned into zombies.

Then I tried the Docker wyze bridge, but I am seeing very similar results to motioneye. I just turn on the bridge, and all of a sudden all of my cameras are transmitting multiple Mbps over the wifi 100% of the time, even when no one is looking at a stream.

I don't understand why there is a need to have the cameras sending data even when no one is calling for it. I realize that it asks for a still image every few minutes, but that should be the only data being asked for until someone pulls up one of the cameras on a dashboard.

maxfield-allison commented 2 years ago

That definitely sounds like a bug. I'm running motioneye and my 6 cameras are on the RTSP Firmware. Without motioneye or HASS pulled up and the integration set up, this is all I'm seeing:

[screenshot]

As soon as I open the motioneye webpage:

[screenshot]

The traffic spikes, but only on the home network I'm viewing from and the server which is running motioneye in Docker. Even then, not a ton of traffic. You may have to tune your motioneye settings to 20fps and 1920x1080, but I don't think that can account for the other devices dropping off. Are you certain the wyze bridge isn't restarting? I had that issue from my old compose file. You may also try running the dev version as was suggested a few posts ago.

maxfield-allison commented 2 years ago

@mrlt8 I'm setting up a mirror on my kitchen AP switch port this weekend and doing a wireshark pcap with the bridge running so we can see what's happening. I'm gonna guess multicast or broadcast traffic is the culprit somehow, but I'll post my findings when I can.

mrlt8 commented 2 years ago

@MrKuenning Thanks for the data points! I've never actually used tinycam as I don't have any Android devices, but I believe they use the same tutk SDK as us, so performance should be similar. Would you mind sharing which Android VM you use, so I can test things with my cams?

Also, when you say that you "never had any issues with wifi load or congestion", is that with TinyCam streaming from all 12 cams simultaneously or on-demand from each cam?

The bridge is constantly streaming from the cams by design, as I believe most users (including myself) feed our streams into some type of object detection system for motion detection and automation.

On demand streaming is something that I've been wanting to do, especially since the outdoor cams are battery powered, but there are some issues that need to be sorted out before we can get it working.

MrKuenning commented 2 years ago

The Android VM was just Android-x86 installed on ESXi. I used to just use an old phone lying around for it. After doing more testing I found other interesting results.

When I used tinycam, it used the Wyze API to connect to the cameras and then relayed them using its own compression. So tinycam never used the RTSP firmware.

Last night I was doing some tests. I added one of my Wyze cameras in HASS using ffmpeg over UDP.

When I connect to it via VLC, I see it jump to 1-2Mbps and then stop immediately when I close VLC. When I connect to the camera in Home Assistant, I see it jump to 1-2Mbps, but when I close the stream or even the browser, I see the camera still using 1-2Mbps long afterward. (At this point it has been 15 minutes.)

I don't know if it's a bug with the RTSP firmware or ffmpeg failing to stop, but it seems like, with the exception of VLC, any time a tool calls for a Wyze cam RTSP stream, it fails to tell the camera the stream is no longer needed and to please stop sending data.

Tinycam server and the Wyze app can have 12 cameras in them and only call for them when they are needed. But other tools seem to make the initial call, and then the cameras just get stuck hammering the network.

mrlt8 commented 2 years ago

We are pretty much doing the same thing as tinycam - using the TUTK SDK library to pull the h264 stream from the cameras and copy it to an RTSP stream, which can then be pulled from a third-party integration like Homebridge/Home Assistant at any time.

The RTSP firmware, on the other hand, is publishing the RTSP stream directly on the camera so the stream only gets pulled whenever needed.

However, many of us, myself included, need the bridge to provide a constant stream, as we are actively processing it in an object/motion detection system.

As mentioned before, on-demand streaming is a planned feature.

What @maxfield-allison is trying to figure out is why, on a one-to-one comparison between the bridge and the official app, the bridge is consuming more of the wifi channel.

maxfield-allison commented 2 years ago

Haven't forgotten, just got busy this weekend. Still planning on the packet cap. Edit: to mention, I want it streaming 24x7 as well for motion detection.

maxfield-allison commented 2 years ago

Haven't gotten around to the wireshark yet, but I did find some interesting information. While browsing other issues I saw mention of the continuous recording and notification/detection settings on the cams and how they may be affecting things. As soon as I opened the settings on the 2 v3 cams and started turning down detection sensitivity, people detection, and other "pro" features, and turning off continuous recording, UniFi reported the same massive channel utilization I was seeing when starting the bridge. I'm looking into it further, but I'm feeling like the issue is either related to the age of the RTSP firmware or the new detection features.

maxfield-allison commented 2 years ago

I've got the packet cap but haven't analyzed it yet. At first glance, tons of additional UDP traffic from the cams to the server.

maxfield-allison commented 2 years ago

@mrlt8 I have the packet cap files. I can provide them to you raw if you'd like to dig through. They capture everything on my security camera network, both with and without the bridge running. Not too worried about obfuscating it as long as I can DM you the drive link or something. All I really see that may be causing the issue is the stream data coming from the camera itself. It looks like maybe using UDP to connect locally is flooding the network and taking up all the airtime, at least on the access points with more than 1-2 cameras. To note: the access point that is having the problem on my network is one of 3, and it's a UAP-AC-LR. If you're unfamiliar, it's more for long-range open-air communication and doesn't have the throughput capabilities of the other two, a UAP-HD-Nano and a UAP-AC-Pro. I'm also up for scheduling some time on a weekend or a weekday evening to hop on a call, or just bounce back and forth on a thread here to do some live testing and troubleshooting. Let me know what works for you, no rush, and thanks again for your work. This project is super awesome.

mrlt8 commented 2 years ago

Thanks for the detailed info!

I don't think I could do much with the captures, but I'll try to dig through the TUTK library to see if it's possible to somehow force it to use TCP or make some other connection adjustments.

maxfield-allison commented 2 years ago

I'm also going to do some further testing to see if it's that access point in particular and if so, what factors contribute. might move one of the more capable stations to its spot and see if I have the same behavior.

maxfield-allison commented 2 years ago

I tested a few changes in configuration: from LAN to P2P, storm control on the AP ports at 100pps for broadcast and multicast, and changing the Docker container network mode from bridge to host. No major changes in behavior observed. I noted that the container causes 98% channel utilization, with about 60-odd percent being Rx traffic at 1.5Mbps. With only the RTSP stream from the firmware, I'm at about 30% utilization with 22% Rx at 4Mbps. I went ahead and pulled a debug log as well and focused in on my deck cam, but all it tells me is that a connection is established then dropped, ad nauseam. Next things I'm going to try are removing all but a single cam from operation on that access point and then swapping out the AC-LR for an AC-Pro, just to kick the can a bit further. I'm hesitant to flash the newest Wyze firmware over the RTSP firmware, but if we don't get any further and I have a rainy day, I'll probably bother with it.

mrlt8 commented 2 years ago

Hmm, I've been trying to get IOTC_TCPRelayOnly_TurnOn to work, but it kept going to relay mode until I switched from bridge to host mode.

Not sure if it helps, but you can test it out by setting IOTC_TCP in your env with the dev branch/images.

Unfortunately, Docker Desktop (at least on macOS) doesn't support host mode, so you'll need to run it on an OS that does.
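
For reference, the compose changes for that test would look roughly like this (a sketch, not a definitive config):

  wyze-bridge:
    image: mrlt8/wyze-bridge:dev  # IOTC_TCP is only in the dev images for now
    network_mode: host            # bridge networking kept falling back to relay mode
    environment:
      - IOTC_TCP=true             # experimental: force the TUTK connection over TCP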

maxfield-allison commented 2 years ago

All good, I've been on Linux for a while. I'll give it a shot in a few.

maxfield-allison commented 2 years ago

So bizarre. Same behavior, but the logs are a bit different. Here's a snippet of a few lines here and there; most repeat several times a second.

2022/05/08 06:37:11 [py.warnings][WARNING][Deck Cam] WARNING: Skipping smaller frame at start of stream (frame_size=1)
2022/05/08 06:37:17 [py.warnings][WARNING][Driveway Cam] WARNING: Frame not available yet
2022/05/08 06:37:17 [wyzecam.iotc][DEBUG][Outdoor Cam] Connect via IOTC_Connect_ByUID_Parallel
2022/05/08 06:37:17 [py.warnings][WARNING][Driveway Cam] WARNING: Frame not available yet
2022/05/08 06:37:17 [py.warnings][WARNING][Driveway Cam] WARNING: Frame not available yet

Still lots of traffic, I'm just not sure where or why. I'm starting to think that it isn't your app and more likely an interaction with specific networking situations. I just don't know enough to be able to find the root cause.

mrlt8 commented 2 years ago

That is strange. Did it connect over a relay server, i.e. mode: 1 (0: P2P mode, 1: Relay mode, 2: LAN mode), under SInfoStructEx?

maxfield-allison commented 2 years ago

That is strange. Did it connect over a relay server, i.e. mode: 1 (0: P2P mode, 1: Relay mode, 2: LAN mode), under SInfoStructEx?

Mode 2 under the info structure:

2022/05/08 06:44:30 [wyzecam.tutk.tutk_ioctl_mux][DEBUG][Deck Cam] RECV <TutkWyzeProtocolHeader prefix=b'HL' protocol=29 code=10009 txt_len=701>: b'{"connectionRes":"1","cameraInfo":{"videoParm":{"type":"H264","bitRate":"120","resolution":"1","fps":"20","horizontalFlip":"2","verticalFlip":"2","logo":"1","time":"1"},"settingParm":{"stateVision":"1","nightVision":"2","osd":"1","logSd":"1","logUdisk":"1", "telnet":"2","tz":"-5"},"basicInfo":{"firmware":"4.61.0.3","type":"camera","hardware":"0.0.0.0","model":"WYZE_CAKP2JFUS","mac":"7C78B21AD8FF","wifidb":"89"},"channelResquestResult":{"video":"1","audio":"1"},"recordType":{"type":"3"},"sdParm":{"status":"1","capacity":"29652","free":"1093","detail":"0"},"uDiskParm":{"status":"2","capacity":"0","free":"0"},"apartalarmParm":{"type":"0","startX":"25","longX":"50","startY":"25","heightY":"50"}}}'

2022/05/08 06:44:30 [wyzecam.tutk.tutk_ioctl_mux][DEBUG][Deck Cam] SEND <K10056SetResolvingBit code=10056 resp_code=10057> <TutkWyzeProtocolHeader prefix=b'HL' protocol=1 code=10056 txt_len=3> b'\x01x\x00'
2022/05/08 06:44:30 [wyzecam.tutk.tutk_ioctl_mux][DEBUG][Deck Cam] RECV <TutkWyzeProtocolHeader prefix=b'HL' protocol=29 code=10057 txt_len=1>: b'\x01'

2022/05/08 06:44:31 [wyzecam.tutk.tutk_ioctl_mux][DEBUG][Deck Cam] No longer listening on channel id 0
SInfoStructEx:
    size: 156
    mode: 2
    uid: b'FNJRTB64R21G5EW2111A'
    remote_ip: b'192.168.15.207'
    remote_port: 33944
    tx_packet_count: 55
    rx_packet_count: 149
    iotc_version: 50399986
    vendor_id: 49193
    product_id: 62209
    group_id: 61763
    local_nat_type: 3
    remote_nat_type: 3
    net_state: 1
    remote_wan_ip: b'0.0.0.0'
2022/05/08 06:44:31 [WyzeBridge][INFO][Deck Cam] [videoParm] {'type': 'H264', 'bitRate': '120', 'resolution': '1', 'fps': '20', 'horizontalFlip': '2', 'verticalFlip': '2', 'logo': '1', 'time': '1'}

maxfield-allison commented 2 years ago

OK, removed the Driveway and Deck v3 cams by turning them off in the Wyze app, so only the sunroom v2 cam is running.

#      - NET_MODE=P2P
#      - QUALITY=HD120
      - ENABLE_AUDIO=True
      - DEBUG_LEVEL=debug
      - FPS_FIX=true
      - IOTC_TCP=true
    network_mode: host

[screenshot]

Adding back the deck cam

[screenshot]

removed the deck cam via app power off,

[screenshot]

and added the driveway cam

[screenshot]

removed all cams and added only the deck v3 cam

[screenshot]

then added the driveway v3 cam

[screenshot]

So it definitely looks like adding more than 1 of the v3 cams running RTSP firmware to a UniFi UAP-AC-LR is a bad idea. At this point I'm starting to lean more towards the access point, the RTSP firmware being so old for the v3 (they took it down from their site recently, btw), or some combo. Still some weird behavior, and it only shows up when the bridge is active, but I'm getting more convinced it's coincidental, not causal.

maxfield-allison commented 2 years ago

I've got a Nest Hello doorbell, a V2, and a Pan v1 connected to a UniFi UAP-HD-Nano (along with laptops and phones galore), and it's sitting at just over 30% utilization on a different wifi channel. I did try changing the channels around, but that didn't make any difference either. I have another RTSP v3 sitting here; I'm going to add it to the HD-Nano really quick and see if I get the same broken behavior on that AP. According to the previous tests, it should send utilization up to the 90% range. This AP has more antennas, though, so maybe it won't be quite as high, but that could narrow it down to the antenna configuration of the access points in these instances. Still pretty crazy utilization, and I'm sure there's more optimization to be done on the bridge's end, but this is a general problem with wifi IoT devices regardless.

maxfield-allison commented 2 years ago

I think the problem is incapable access points being overloaded by higher-intensity applications than they're designed for. Added the RTSP v3 test cam to motioneye and connected it to the NanoHD with all those other devices.

[screenshot]

Barely sweating now. Upped the frame rate in motioneye to 20fps and the resolution to 1080p, and it sometimes jumps to 60% but hovers around 40% utilization.

So again, maybe more optimization can be done on the bridge side of things, but I don't have a good idea as to how to accomplish that besides implementing some pretty aggressive compression. I'm assuming the bridge is what decodes the single cam stream into multiple formats and serves them, but if it's the camera offering every stream type separately, maybe adding switches to disable what isn't needed?

mrlt8 commented 2 years ago

We request the raw h264 frames from the camera and "copy" that to rtsp-simple-server, which then creates the RTSP/RTMP/HLS streams, so no decoding is done unless you do some video processing like the doorbell rotation.

You can disable the additional streams by passing the env options to rtsp-simple-server: e.g., RTSP_RTMPDISABLE=yes
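
In compose form, that would be something like the following (RTSP_HLSDISABLE is an assumption based on the same rtsp-simple-server naming pattern; drop whichever outputs you don't need):

    environment:
      - RTSP_RTMPDISABLE=yes  # disable the RTMP output
      - RTSP_HLSDISABLE=yes   # assumed: same pattern to disable HLS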

The multi-cam issue could be related to the fact that we had to switch from threads to processes for each camera due to a blocking issue we had after switching to the newer tutk library to support DTLS and authkey on the newer camera firmware.

I could try to revisit the issue to see if maybe we could somehow reuse the same tutk session for all the cameras without having a single camera taking all the streams down.

cheme75 commented 2 years ago

I'm kind of lost here, but I definitely started seeing gobs of bandwidth usage: 14-17 GB/day/cam, plus the bridge using about a GB/hr. I assume this is all within the LAN, so maybe not a big deal. I really never paid much attention to them when only on the Wyze app, but it seems like overall they used less. I haven't accessed anything on HomeKit outside of the LAN yet. I filtered out the 2 flaky beta db cams, but that doesn't seem to have reduced bridge usage. On the bright side, the bridge is only hitting 10-13% CPU usage, so the PC seems to be running decent.

mrlt8 commented 2 years ago

@cheme75 is that on the WAN? You can set - NET_MODE=LAN to prevent the bridge from connecting to the cams over relay or p2p mode.

cheme75 commented 2 years ago

I'll set that tomorrow and recheck. I didn't log into the router yet to check it; it's just the app view of the devices on the wifi, minimal info. Just using the Arris modem/router from AT&T for the time being. My personal router is kind of outdated, and this seemed to cover the house without any dead spots. Will log in tomorrow to see if it offers more info.

cheme75 commented 2 years ago

@cheme75 is that on the WAN? You can set - NET_MODE=LAN to prevent the bridge from connecting to the cams over relay or p2p mode.

@mrlt8 now just getting this error over and over for all 3 cams - commented out net mode and they are using relay: 2022/05/10 10:33:17 [Front-Door] ☁️ Connected via NON-LAN MODE! Reconnecting

anyway, I'll watch router log to check actual I/O bytes

maxfield-allison commented 2 years ago

@cheme75 is that on the WAN? You can set - NET_MODE=LAN to prevent the bridge from connecting to the cams over relay or p2p mode.

@mrlt8 now just getting this error over and over for all 3 cams - commented out net mode and they are using relay: 2022/05/10 10:33:17 [Front-Door] ☁️ Connected via NON-LAN MODE! Reconnecting

anyway, I'll watch router log to check actual I/O bytes

This looks like your server isn't able to connect directly to the cams due to some firewall issue.

cheme75 commented 2 years ago

@cheme75 is that on the WAN? You can set - NET_MODE=LAN to prevent the bridge from connecting to the cams over relay or p2p mode.

@mrlt8 now just getting this error over and over for all 3 cams - commented out net mode and they are using relay: 2022/05/10 10:33:17 [Front-Door] ☁️ Connected via NON-LAN MODE! Reconnecting anyway, I'll watch router log to check actual I/O bytes

This looks like your server isn't able to connect directly to the cams due to some firewall issue.

Probably, I had to allow 8888 and 8554 to get this far. On a positive note, since getting the 1.4.3 update just a bit ago, things look very stable in the log.

maxfield-allison commented 2 years ago

In the future, you can always try the dev branch by tagging it in your compose image: mrlt8/wyze-bridge:dev, especially if you're following these issue threads.
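
i.e., something like:

  wyze-bridge:
    image: mrlt8/wyze-bridge:dev  # instead of :latest

followed by a docker-compose pull and docker-compose up -d to pick up the new image.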

cheme75 commented 2 years ago

In the future, you can always try the dev branch by tagging it in your compose image: mrlt8/wyze-bridge:dev, especially if you're following these issue threads.

Maybe once I get a lot more comfortable. Right now I’m just feeling confident enough to use docker-compose pull to get an update without screwing it up 😂

maxfield-allison commented 2 years ago

Found this on the Wyze forums:

[screenshot of a Wyze forum post]

Apparently it's been an issue for years. That said, I don't have the same issues with the native app, so maybe it's something they worked around in newer firmware? I've also got a new WiFi 6 AP on the way, and I'm about at the point where I think I may flash all my cameras to the newest firmware and then try to make something work that way. I was originally using the RTSP streams in motioneye for presence detection in rooms with HAss, but I've got other ways to do that at this point.

mrlt8 commented 2 years ago

I'm guessing there wasn't much of a difference with the threading build wyze-bridge:threading? It reuses the same tutk session and connects to each camera over a different channel like the app does.

I'll have to decompile the app again to see if they've made any changes to the connection method (or if it does something different in the multi-cam view), but I believe they may be trying to move towards a WebRTC-based stream via AWS Kinesis (KVS), which is similar to what the web view is using. You can actually pull the WebRTC credentials in the bridge using WEBRTC=true.
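
That's just another env entry, e.g. (a sketch; the credentials should show up in the container logs):

    environment:
      - WEBRTC=true  # pull the AWS KVS WebRTC credentials for each cam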

maxfield-allison commented 2 years ago

Didn't even see that tag mentioned earlier, I'll give it a shot.

maxfield-allison commented 2 years ago

There is a definite and noticeable improvement. I still see a few spikes in utilization, but it isn't consistent anymore. This is with an additional pan v2, plus the 2 v3's and the single v2 flashed back to stock. Previously I swapped to stock and saw the same behavior, but this threaded build is certainly on the right track.

maxfield-allison commented 2 years ago

A little bit longer and it's holding up at only 40% utilization while streaming the V2 over HTTP. Some hiccups in the buffer, and it's about 30 seconds behind according to the timestamp, but that could be lack of time sync and the fact that I'm on my phone. I'm going to leave it running overnight and post tomorrow morning with performance data. Super stoked to be able to leave the container on overnight finally!!

mrlt8 commented 2 years ago

Awesome!

There is a major bug that needs to be fixed with the threaded version, where the tutk library will sometimes block while connecting to a camera, which then takes down all the other streams.

maxfield-allison commented 2 years ago

Understood. For what it's worth, this has been running without taking out my network all night. I do see the bug you're talking about, however. It started around 4 am and has been pinging off in the log till now.

2022/05/13 04:01:19 [RTSP][DRIVEWAY-CAM] ❌ '/driveway-cam' stream is down
2022/05/13 04:01:19 [DRIVEWAY-CAM_AUDIO] WARNING: Audio pipe closed
2022/05/13 04:01:19 [Driveway Cam] Stream did not receive a frame for over 20s
2022/05/13 04:01:20 [WyzeBridge] 🎉 Connecting to WyzeCam V3 - Driveway Cam on 192.168.15.206 (1/3)
2022/05/13 04:01:24 [Driveway Cam] ☁️ WARNING: Camera is connected via RELAY mode. Stream may consume additional bandwidth!
2022/05/13 04:01:26 [Driveway Cam] 📡 Getting 120kb/s HD stream (20fps) via RELAY mode (WiFi: 88%) FW: 4.36.9.131 🔒 (DTLS) (2/3)
2022/05/13 04:01:26 [Driveway Cam] 🔊 Audio Enabled - ALAW/16,000Hz
2022/05/13 04:01:29 [Driveway Cam] WARNING: Waiting for keyframe
2022/05/13 04:01:33 [Sunroom Cam] WARNING: Frame not available yet
2022/05/13 04:01:46 [Driveway Cam] WARNING: Still waiting for first frame. Updating frame size.
2022/05/13 04:01:46 [Driveway Cam] Requesting frame_size=0 and bitrate=120
2022/05/13 04:01:49 [Sunroom Cam] WARNING: FPS param mismatch (avRecv FPS=15)
2022/05/13 04:02:16 [DRIVEWAY-CAM_AUDIO] WARNING: Audio pipe closed
Fatal Python error: init_import_site: Failed to import the site module
Python runtime state: initialized
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site.py", line 73, in <module>
    import os
  File "/usr/local/lib/python3.10/os.py", line 61, in <module>
    import posixpath as path
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 879, in exec_module
  File "<frozen importlib._bootstrap_external>", line 1012, in get_code
  File "<frozen importlib._bootstrap_external>", line 672, in _compile_bytecode
KeyboardInterrupt
2022/05/13 04:02:16 [Driveway Cam] Stream did not receive a frame for over 20s
2022/05/13 04:02:17 [WyzeBridge] 🎉 Connecting to WyzeCam V3 - Driveway Cam on 192.168.15.206 (1/3)
2022/05/13 04:02:38 [RTSP][DEN-CAM] ❌ '/den-cam' stream is down
2022/05/13 04:02:38 [RTSP][SUNROOM-CAM] ❌ '/sunroom-cam' stream is down
2022/05/13 04:02:38 [RTSP][LIVING-ROOM-PAN-CAM] ❌ '/living-room-pan-cam' stream is down
2022/05/13 04:02:39 [RTSP][SUNROOM-PAN-CAM] ❌ '/sunroom-pan-cam' stream is down
2022/05/13 04:02:39 [WyzeBridge] ⏰ Timed out connecting to Driveway Cam (20s).
2022/05/13 04:02:39 [WyzeBridge] 
IOTC is blocked!
2022/05/13 04:02:39 [RTSP][GARAGE-CAM] ❌ '/garage-cam' stream is down
2022/05/13 04:02:49 [WyzeBridge] 🎉 Connecting to WyzeCam V3 - Driveway Cam on 192.168.15.206 (1/3)
2022/05/13 04:03:11 [WyzeBridge] ⏰ Timed out connecting to Driveway Cam (20s).
2022/05/13 04:03:11 [WyzeBridge] 
IOTC is blocked!
2022/05/13 04:03:21 [WyzeBridge] 🎉 Connecting to WyzeCam V3 - Driveway Cam on 192.168.15.206 (1/3)
2022/05/13 04:03:43 [WyzeBridge] ⏰ Timed out connecting to Driveway Cam (20s).
2022/05/13 04:03:43 [WyzeBridge] 
IOTC is blocked!
2022/05/13 04:03:53 [WyzeBridge] 🎉 Connecting to WyzeCam V3 - Driveway Cam on 192.168.15.206 (1/3)
2022/05/13 04:04:15 [WyzeBridge] ⏰ Timed out connecting to Driveway Cam (20s).
2022/05/13 04:04:15 [WyzeBridge] 
IOTC is blocked!

etc. etc. etc.

mrlt8 commented 2 years ago

Completely bizarre, but setting TUTK_SDK_Set_Region_Code seems to clear up the blocking issue. It introduces some other weirdness, though: it seems to require host mode or it fails to find any cams, and offline cams show as IOTC_ER_FAIL_RESOLVE_HOSTNAME instead of offline?

Updated build can be found on :threading.

maxfield-allison commented 2 years ago

Honestly, setting it to host isn't a huge issue, and I'd assume it shouldn't be for almost all use cases. I'll pull and test momentarily. There might be some workaround for it down the line in any case. Even though there may not be many use cases affected, I still think it would be detrimental overall to have to force host mode.

maxfield-allison commented 2 years ago

Getting timeouts all over.

  wyze-bridge:
    container_name: wyze-bridge
    restart: unless-stopped
    image: mrlt8/wyze-bridge:threading
    # build:  # Uncomment to build from source
    #     context: ./app # Uncomment to build from source
    #     # dockerfile: Dockerfile.arm # Uncomment to build for arm
#    ports:
#      - 1935:1935
#      - 8554:8554
#      - 8888:8888
    network_mode: host
    environment:
      - WYZE_EMAIL=# Replace with wyze email 
      - WYZE_PASSWORD=# Replace with wyze password
#      - NET_MODE=LAN
#      - QUALITY=HD120
      - ENABLE_AUDIO=True
#      - DEBUG_LEVEL=debug
      - FPS_FIX=true
#      - IOTC_TCP=true
      - FILTER_NAMES=Outdoor Cam, rtsp test cam, Lab Cam
      - FILTER_BLOCK=true
    security_opt:
      - no-new-privileges:true
2022/05/13 15:36:13 [Garage Cam] IOTC_ER_FAIL_CONNECT_SEARCH
2022/05/13 15:36:13 [Den Cam] IOTC_ER_FAIL_CONNECT_SEARCH
2022/05/13 15:36:13 [Sunroom Pan Cam] IOTC_ER_FAIL_CONNECT_SEARCH
2022/05/13 15:36:13 [Living Room Pan Cam] IOTC_ER_FAIL_CONNECT_SEARCH
2022/05/13 15:36:23 [WyzeBridge] 🎉 Connecting to WyzeCam Pan V2 - Sunroom Pan Cam on 192.168.15.51 (1/3)
2022/05/13 15:36:23 [WyzeBridge] 🎉 Connecting to WyzeCam V3 - Driveway Cam on 192.168.15.206 (1/3)
2022/05/13 15:36:23 [WyzeBridge] 🎉 Connecting to WyzeCam V3 - Deck Cam on 192.168.15.207 (1/3)
2022/05/13 15:36:23 [WyzeBridge] 🎉 Connecting to WyzeCam V2 - Sunroom Cam on 192.168.15.201 (1/3)
2022/05/13 15:36:23 [WyzeBridge] 🎉 Connecting to WyzeCam V2 - Den Cam on 192.168.15.202 (1/3)
2022/05/13 15:36:23 [WyzeBridge] 🎉 Connecting to WyzeCam V2 - Garage Cam on 192.168.15.203 (1/3)
2022/05/13 15:36:23 [WyzeBridge] 🎉 Connecting to WyzeCam Pan - Living Room Pan Cam on 192.168.15.204 (1/3)
2022/05/13 15:36:44 [Sunroom Cam] IOTC_ER_TIMEOUT
2022/05/13 15:36:44 [Deck Cam] IOTC_ER_TIMEOUT
2022/05/13 15:36:44 [Driveway Cam] IOTC_ER_TIMEOUT
2022/05/13 15:36:45 [WyzeBridge] ⏰ Timed out connecting to Sunroom Pan Cam (20s).

session_id=1
stop=0
2022/05/13 15:36:45 [WyzeBridge] ⏰ Timed out connecting to Driveway Cam (20s).

session_id=2
stop=0
2022/05/13 15:36:45 [WyzeBridge] ⏰ Timed out connecting to Deck Cam (20s).

session_id=3
stop=0
2022/05/13 15:36:45 [WyzeBridge] ⏰ Timed out connecting to Sunroom Cam (20s).

session_id=4
stop=0
2022/05/13 15:36:45 [WyzeBridge] ⏰ Timed out connecting to Den Cam (20s).

session_id=5
stop=0
2022/05/13 15:36:45 [WyzeBridge] ⏰ Timed out connecting to Garage Cam (20s).

session_id=6
stop=0
2022/05/13 15:36:45 [WyzeBridge] ⏰ Timed out connecting to Living Room Pan Cam (20s).

session_id=7
stop=0
2022/05/13 15:36:45 [Sunroom Pan Cam] IOTC_ER_FAIL_CONNECT_SEARCH
2022/05/13 15:36:45 [Garage Cam] IOTC_ER_FAIL_CONNECT_SEARCH
2022/05/13 15:36:45 [Living Room Pan Cam] IOTC_ER_FAIL_CONNECT_SEARCH
2022/05/13 15:36:45 [Den Cam] IOTC_ER_FAIL_CONNECT_SEARCH

mrlt8 commented 2 years ago

That looks similar to what I was getting on my Mac, but I was able to get a version of the container running on an Ubuntu server. Definitely some networking issue with Docker, as I was also able to connect to my cams when running tests outside the container on macOS.

Will need to do some more research, as the OG Wyze app seems to be using an older version of the SDK (<4.0), and some of the methods they are using are deprecated in 4.0+.

Unfortunately, I believe host mode isn't supported on Docker Desktop, so it would pretty much end compatibility with Windows/macOS.