Closed wehbs closed 1 year ago
Basically that means the connection to the model took too long to respond. Can you post your model logs? I'm assuming you are running the CoreML.py script separately?
00:18:54 [INFO] TRIGGERED NEW OBJECT @ COORD: {331 23} AREA: 3312.000000 [car|0.837891]
00:18:56 [NOTICE] Inference avg: 25.100000ms, min: 22.000000ms, max: 32.000000ms
00:18:58 [NOTICE] Inference avg: 25.600000ms, min: 21.000000ms, max: 30.000000ms
00:19:00 [NOTICE] Inference avg: 25.700000ms, min: 21.000000ms, max: 30.000000ms
00:19:02 [NOTICE] Inference avg: 25.300000ms, min: 21.000000ms, max: 37.000000ms
00:19:04 [NOTICE] Inference avg: 23.600000ms, min: 20.000000ms, max: 29.000000ms
00:19:06 [INFO] TRIGGERED NEW OBJECT @ COORD: {598 66} AREA: 2460.000000 [car|0.761230]
00:19:07 [NOTICE] Inference avg: 24.800000ms, min: 21.000000ms, max: 31.000000ms
00:19:09 [NOTICE] Inference avg: 26.600000ms, min: 20.000000ms, max: 32.000000ms
00:19:11 [NOTICE] Inference avg: 25.500000ms, min: 21.000000ms, max: 33.000000ms
00:19:13 [NOTICE] Inference avg: 24.100000ms, min: 21.000000ms, max: 32.000000ms
00:19:15 [NOTICE] Inference avg: 25.600000ms, min: 21.000000ms, max: 32.000000ms
00:19:17 [NOTICE] Inference avg: 26.900000ms, min: 22.000000ms, max: 32.000000ms
00:19:19 [NOTICE] Inference avg: 26.100000ms, min: 21.000000ms, max: 33.000000ms
00:19:21 [NOTICE] Inference avg: 25.700000ms, min: 21.000000ms, max: 31.000000ms
00:19:23 [NOTICE] Inference avg: 23.700000ms, min: 20.000000ms, max: 29.000000ms
00:19:25 [NOTICE] Inference avg: 24.600000ms, min: 21.000000ms, max: 29.000000ms
00:19:27 [NOTICE] Inference avg: 25.200000ms, min: 20.000000ms, max: 32.000000ms
00:19:29 [NOTICE] Inference avg: 24.700000ms, min: 21.000000ms, max: 32.000000ms
00:19:32 [NOTICE] Inference avg: 24.000000ms, min: 20.000000ms, max: 32.000000ms
00:19:34 [NOTICE] Inference avg: 22.800000ms, min: 21.000000ms, max: 26.000000ms
00:19:36 [NOTICE] Inference avg: 25.300000ms, min: 20.000000ms, max: 32.000000ms
00:19:36 [INFO] MOTION_ENDED
00:20:19 [NOTICE] Inference avg: 24.600000ms, min: 21.000000ms, max: 29.000000ms
00:21:11 [NOTICE] Inference avg: 25.600000ms, min: 21.000000ms, max: 31.000000ms
00:22:26 [NOTICE] Inference avg: 25.000000ms, min: 20.000000ms, max: 32.000000ms
00:22:52 [INFO] TRIGGERED NEW OBJECT @ COORD: {559 264} AREA: 11921.000000 [car|0.682129]
00:22:53 [NOTICE] Inference avg: 26.100000ms, min: 22.000000ms, max: 33.000000ms
00:22:59 [ERROR] Error running objectPredict: operation timed out
Sorry, took a while to repro. I have firescrew running in Docker on an M1 Mac mini, and I connect it to CoreML on the same Mac mini.
Not sure if these are the logs you're looking for.
Hey, I was referring to the Python script logs, i.e. the ones the pythonCoreMl.py script produced. You are running the script on the Mac and connecting to it remotely from Docker, right? Basically, the error means that firescrew has not gotten a response from the script in 5 seconds, meaning it either crashed or there was a network issue.
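The 5-second rule described above maps naturally to a socket deadline on the requesting side. This is an illustrative Python sketch (hypothetical names and wire format, not firescrew's actual Go code or protocol) of how such a deadline produces the "operation timed out" error:

```python
import socket

def object_predict(host: str, port: int, payload: bytes, timeout: float = 5.0) -> bytes:
    """Send one request to the detector and wait at most `timeout` seconds."""
    with socket.create_connection((host, port), timeout=timeout) as conn:
        conn.sendall(payload)
        try:
            return conn.recv(65536)  # single-shot response read
        except socket.timeout:
            # this is the condition firescrew would report as "operation timed out"
            raise TimeoutError("operation timed out") from None
```

If the detector script crashes or the network drops, the `recv` never completes and the deadline fires, regardless of how fast inference normally is.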
I see, nothing much from what I see:
Server is listening on 0.0.0.0:8555
Got connection from ('192.168.0.39', 52743)
No objects detected
Closing connection
Got connection from ('192.168.0.39', 59553)
Closing connection
Got connection from ('192.168.0.39', 60073)
No objects detected
Closing connection
Got connection from ('192.168.0.39', 62582)
No objects detected
Closing connection
Got connection from ('192.168.0.39', 64364)
No objects detected
Closing connection
Got connection from ('192.168.0.39', 50639)
No objects detected
Closing connection
Does it work after you restart firescrew ?
So I just restarted both the object detection script and firescrew, but the error keeps happening quickly now, back to back. Maybe there's something up on my local machine?
On another note, ffprobe is now returning info for my camera stream. But it looks like firescrew is reading the FPS incorrectly; it should be 25:
01:20:34 [INFO] Hi-Res Stream Resolution: 3840x2160 FPS: 12.58

Edit: actually, it looks like the camera is just showing the wrong FPS.
It's also throwing this error:

01:20:27 [WARNING] OnlyRemuxMp4 is enabled but the stream codec is not h264 or h265. Your videos may not play in WebUI. Codec: hevc
"streams": [
  {
    "index": 0,
    "codec_name": "hevc",
    "codec_long_name": "H.265 / HEVC (High Efficiency Video Coding)",
    "profile": "Main",
    "codec_type": "video",
    "codec_tag_string": "[0][0][0][0]",
    "codec_tag": "0x0000",
    "width": 3840,
    "height": 2160,
    "coded_width": 3840,
    "coded_height": 2160,
    "closed_captions": 0,
    "film_grain": 0,
    "has_b_frames": 0,
    "pix_fmt": "yuv420p",
    "level": 150,
    "color_range": "tv",
    "chroma_location": "left",
    "refs": 1,
    "r_frame_rate": "151/12",
    "avg_frame_rate": "0/0",
    "time_base": "1/90000",
    "start_pts": 7216,
    "start_time": "0.080178",
    "extradata_size": 79,
    "disposition": { "default": 0, "dub": 0, "original": 0, "comment": 0, "lyrics": 0, "karaoke": 0, "forced": 0, "hearing_impaired": 0, "visual_impaired": 0, "clean_effects": 0, "attached_pic": 0, "timed_thumbnails": 0, "captions": 0, "descriptions": 0, "metadata": 0, "dependent": 0, "still_image": 0 }
  },
  {
    "index": 1,
    "codec_name": "aac",
    "codec_long_name": "AAC (Advanced Audio Coding)",
    "profile": "LC",
    "codec_type": "audio",
    "codec_tag_string": "[0][0][0][0]",
    "codec_tag": "0x0000",
    "sample_fmt": "fltp",
    "sample_rate": "16000",
    "channels": 1,
    "channel_layout": "mono",
    "bits_per_sample": 0,
    "r_frame_rate": "0/0",
    "avg_frame_rate": "0/0",
    "time_base": "1/16000",
    "start_pts": 0,
    "start_time": "0.000000",
    "extradata_size": 2,
    "disposition": { "default": 0, "dub": 0, "original": 0, "comment": 0, "lyrics": 0, "karaoke": 0, "forced": 0, "hearing_impaired": 0, "visual_impaired": 0, "clean_effects": 0, "attached_pic": 0, "timed_thumbnails": 0, "captions": 0, "descriptions": 0, "metadata": 0, "dependent": 0, "still_image": 0 }
  }
] }
When I add the override params for the streams, it seems to use the lo-res params for both:
01:30:23 [INFO] **** STREAM INFO **** 01:30:23 [INFO] Lo-Res Stream Resolution: 640x360 FPS: 15.00 01:30:23 [INFO] Hi-Res Stream Resolution: 640x360 FPS: 15.00
"loStreamParamBypass": { "width": 640, "height": 360, "fps": 15 },
"hiStreamParamBypass": { "width": 3840, "height": 2160, "fps": 25 },
About the frame rate: it's read from the ffprobe output as "r_frame_rate": "151/12", where:

r_frame_rate is "the lowest framerate with which all timestamps can be represented accurately (it is the least common multiple of all framerates in the stream)."
avg_frame_rate is just that: total duration / total # of frames

So it's reading it correctly, e.g. 151/12 ≈ 12.58. Regardless, there is nothing to worry about, as the value isn't actually used in the code and is only informational.
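To illustrate how a rational rate string like "151/12" becomes the 12.58 shown in the log, here is a small parser sketch (a hypothetical helper, not firescrew's actual code):

```python
def parse_frame_rate(rate: str) -> float:
    """Convert an ffprobe rate like '151/12', '25', or '0/0' to fps."""
    num, _, den = rate.partition("/")
    den = den or "1"   # a plain '25' has no denominator
    if den == "0":
        return 0.0     # '0/0' means ffprobe could not determine the rate
    return int(num) / int(den)
```

So `parse_frame_rate("151/12")` yields 12.5833..., which rounds to the 12.58 FPS in the stream-info output.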
As for the initial error you mentioned, I suspect there may be a race condition where it tries to connect before the connection is established; I'm looking into that.
when I add the override params for the streams it seems to use the lores params for both:
Good catch, I just fixed it and also added a timeout for the connection issue. Please try the latest version, which should be built in a few minutes.
Yea, I noticed that at the last min. Reolink cams: nice video quality, shit metadata lol. Ok, awesome, will give it a go when ready and report back.
Good news: I was able to get YOLOv8 models to work through ONNX Runtime in Go, with CPU and with CoreML. That means there will be no more need for these Python adapters for YOLOv8 CPU/CoreML (and, in the future, CUDA).
Ok, so I tried it out, but I'm still getting the error after about 30 secs.
Good news, i was able to get Yolov8 models to work through onnx runtime in go with cpu and with CoreML. Meaning there will be no more need for these python adapters to use yolov8 cpu/coreml and cuda (in the future)
That's awesome!
I'm going to reboot and try again.
In order to manually test connectivity, can you try running:
docker run --rm -it --entrypoint /bin/bash 8fforg/firescrew:latest -c "apt-get update 1>/dev/null; apt-get install netcat-openbsd 1>/dev/null; nc -v -n MAC_IP 8555"
where MAC_IP is the IP of the Mac where your Python script is running. If the connection is successful, you should see something like:

Connection to 1.2.3.4 8555 port [tcp/*] succeeded!
docker run --rm -it --entrypoint /bin/bash 8fforg/firescrew:latest -c "apt-get update 1>/dev/null; apt-get install netcat-openbsd 1>/dev/null; nc -v -n 192.168.0.39 8555"
Connection to 192.168.0.39 8555 port [tcp/*] succeeded!
looks good.
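For reference, the same reachability check can be done without Docker or netcat. A minimal TCP probe sketch in Python (hypothetical helper; the host/port values below are just the ones used in this thread):

```python
import socket

def can_connect(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within `timeout`."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, unreachable, or timed out
        return False
```

For example, `can_connect("192.168.0.39", 8555)` mirrors the `nc -v` test above.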
Thanks, I think I know what the issue is. To confirm, can you please also provide the firescrew log (minus the config details)?
03:44:49 [INFO] TRIGGERED NEW OBJECT @ COORD: {559 264} AREA: 12052.000000 [car|0.506348]
03:44:49 [NOTICE] Inference avg: 22.500000ms, min: 18.000000ms, max: 42.000000ms
03:44:53 [NOTICE] Inference avg: 23.700000ms, min: 20.000000ms, max: 33.000000ms
03:44:58 [ERROR] Error running objectPredict: operation timed out
This is what I got immediately after the config details.
Thanks, I'll prep a fix and let you know once it's ready.
Can you please try the latest version? I've added some checks and logging. If the error persists, please send the full log.
Ok, giving it a go now.
It hasn't failed yet, so that's promising! Last night it was taking less than a minute to occur; it's been about 20 minutes now.
Oh, I thought it was happening when you start the app. If it's disconnecting after a while, that's a different part of the code. I'll take a look.
I've made a few more fixes to reconnect in case of a connection failure during runtime; a new version is building.
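Reconnect-on-failure behavior like the one described can be sketched as a retry wrapper (illustrative Python, not the actual Go implementation; `make_conn`, the attempt count, and the backoff policy are assumptions):

```python
import time

def with_reconnect(make_conn, attempts: int = 3, delay: float = 0.5):
    """Call make_conn(), retrying on OSError with a growing pause between tries."""
    last_err = None
    for i in range(attempts):
        try:
            return make_conn()
        except OSError as err:
            last_err = err
            time.sleep(delay * (i + 1))  # linear backoff before the next try
    raise last_err
```

The point of the wrapper is that a transient drop mid-runtime becomes a retried call instead of a fatal `objectPredict` error.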
It's still going solid, no dropouts.
omg, it literally just happened as I sent that!
Jinxed it
lol seriously.
17:37:43 [INFO] Checking connection to: 192.168.0.39:8555
17:37:49 [INFO] TRIGGERED NEW OBJECT @ COORD: {357 253} AREA: 17574.000000 [car|0.761230]
17:37:49 [INFO] TRIGGERED NEW OBJECT @ COORD: {353 253} AREA: 16600.000000 [car|0.748047]
17:37:49 [NOTICE] Inference avg: 28.600000ms, min: 22.000000ms, max: 56.000000ms
17:37:50 [NOTICE] Inference avg: 24.800000ms, min: 22.000000ms, max: 33.000000ms
17:37:50 [NOTICE] Inference avg: 26.100000ms, min: 22.000000ms, max: 32.000000ms
17:37:53 [NOTICE] Inference avg: 26.200000ms, min: 19.000000ms, max: 31.000000ms
17:37:53 [NOTICE] Inference avg: 26.500000ms, min: 23.000000ms, max: 30.000000ms
17:37:54 [NOTICE] Inference avg: 24.800000ms, min: 21.000000ms, max: 29.000000ms
17:37:55 [NOTICE] Inference avg: 26.700000ms, min: 21.000000ms, max: 32.000000ms
17:37:56 [NOTICE] Inference avg: 27.200000ms, min: 21.000000ms, max: 37.000000ms
17:37:57 [NOTICE] Inference avg: 28.400000ms, min: 22.000000ms, max: 34.000000ms
17:37:57 [NOTICE] Inference avg: 24.100000ms, min: 20.000000ms, max: 30.000000ms
17:37:58 [NOTICE] Inference avg: 27.500000ms, min: 23.000000ms, max: 36.000000ms
17:37:59 [NOTICE] Inference avg: 26.800000ms, min: 23.000000ms, max: 32.000000ms
17:38:00 [NOTICE] Inference avg: 24.800000ms, min: 23.000000ms, max: 28.000000ms
17:38:01 [NOTICE] Inference avg: 27.600000ms, min: 24.000000ms, max: 32.000000ms
17:38:01 [NOTICE] Inference avg: 26.300000ms, min: 22.000000ms, max: 34.000000ms
17:38:02 [NOTICE] Inference avg: 24.300000ms, min: 22.000000ms, max: 27.000000ms
17:38:03 [NOTICE] Inference avg: 25.800000ms, min: 23.000000ms, max: 32.000000ms
17:38:04 [NOTICE] Inference avg: 24.100000ms, min: 19.000000ms, max: 29.000000ms
17:38:05 [NOTICE] Inference avg: 27.000000ms, min: 22.000000ms, max: 30.000000ms
17:38:05 [NOTICE] Inference avg: 25.400000ms, min: 20.000000ms, max: 29.000000ms
17:38:05 [INFO] TRIGGERED NEW OBJECT @ COORD: {227 309} AREA: 2124.000000 [person|0.860352]
17:38:06 [INFO] TRIGGERED NEW OBJECT @ COORD: {362 254} AREA: 18666.000000 [car|0.766113]
17:38:06 [NOTICE] Inference avg: 26.500000ms, min: 23.000000ms, max: 30.000000ms
17:38:06 [INFO] TRIGGERED NEW OBJECT @ COORD: {360 254} AREA: 17952.000000 [car|0.710449]
17:38:07 [NOTICE] Inference avg: 25.800000ms, min: 21.000000ms, max: 28.000000ms
17:38:08 [NOTICE] Inference avg: 24.900000ms, min: 22.000000ms, max: 29.000000ms
17:38:09 [NOTICE] Inference avg: 27.100000ms, min: 24.000000ms, max: 31.000000ms
17:38:09 [NOTICE] Inference avg: 25.600000ms, min: 24.000000ms, max: 30.000000ms
17:38:09 [INFO] TRIGGERED NEW OBJECT @ COORD: {343 253} AREA: 15150.000000 [car|0.712891]
17:38:10 [NOTICE] Inference avg: 24.600000ms, min: 19.000000ms, max: 30.000000ms
17:38:11 [INFO] TRIGGERED NEW OBJECT @ COORD: {279 239} AREA: 696.000000 [person|0.722168]
17:38:11 [NOTICE] Inference avg: 28.200000ms, min: 24.000000ms, max: 32.000000ms
17:38:12 [NOTICE] Inference avg: 26.000000ms, min: 24.000000ms, max: 31.000000ms
17:38:17 [ERROR] Error running objectPredict: operation timed out
Closing connection
Got connection from ('192.168.0.39', 62314)
Closing connection
Got connection from ('192.168.0.39', 64577)
Closing connection
Got connection from ('192.168.0.39', 64628)
Closing connection
Got connection from ('192.168.0.39', 52418)
Closing connection
Got connection from ('192.168.0.39', 52592)
No objects detected
Closing connection
I just pushed another update with only TCP timeouts remaining; please try it now. I'm literally running the same setup as you at the moment.
19:44:31 [NOTICE] Inference avg: 28.800000ms, min: 22.000000ms, max: 32.000000ms
19:44:33 [NOTICE] Inference avg: 29.800000ms, min: 28.000000ms, max: 32.000000ms
19:44:35 [NOTICE] Inference avg: 27.100000ms, min: 22.000000ms, max: 31.000000ms
19:44:37 [NOTICE] Inference avg: 29.300000ms, min: 23.000000ms, max: 32.000000ms
19:44:39 [NOTICE] Inference avg: 29.400000ms, min: 24.000000ms, max: 32.000000ms
19:44:41 [NOTICE] Inference avg: 28.600000ms, min: 27.000000ms, max: 30.000000ms
19:44:43 [NOTICE] Inference avg: 30.300000ms, min: 28.000000ms, max: 32.000000ms
19:44:45 [NOTICE] Inference avg: 27.200000ms, min: 22.000000ms, max: 31.000000ms
19:44:47 [NOTICE] Inference avg: 28.400000ms, min: 22.000000ms, max: 32.000000ms
19:44:49 [NOTICE] Inference avg: 29.300000ms, min: 24.000000ms, max: 31.000000ms
19:44:51 [NOTICE] Inference avg: 28.900000ms, min: 24.000000ms, max: 32.000000ms
19:44:53 [NOTICE] Inference avg: 29.800000ms, min: 27.000000ms, max: 33.000000ms
19:44:55 [NOTICE] Inference avg: 28.500000ms, min: 23.000000ms, max: 32.000000ms
19:44:57 [NOTICE] Inference avg: 28.800000ms, min: 23.000000ms, max: 32.000000ms
19:44:59 [NOTICE] Inference avg: 30.100000ms, min: 24.000000ms, max: 34.000000ms
19:45:01 [NOTICE] Inference avg: 29.200000ms, min: 22.000000ms, max: 31.000000ms
19:45:03 [NOTICE] Inference avg: 29.300000ms, min: 27.000000ms, max: 31.000000ms
19:45:05 [NOTICE] Inference avg: 27.400000ms, min: 21.000000ms, max: 31.000000ms
19:45:07 [NOTICE] Inference avg: 28.600000ms, min: 23.000000ms, max: 32.000000ms
19:45:09 [NOTICE] Inference avg: 30.200000ms, min: 27.000000ms, max: 33.000000ms
19:45:11 [NOTICE] Inference avg: 28.900000ms, min: 24.000000ms, max: 33.000000ms
19:45:13 [NOTICE] Inference avg: 29.800000ms, min: 29.000000ms, max: 31.000000ms
19:45:15 [NOTICE] Inference avg: 29.000000ms, min: 22.000000ms, max: 31.000000ms
19:45:17 [NOTICE] Inference avg: 28.100000ms, min: 23.000000ms, max: 31.000000ms
19:46:18 [ERROR] Error running objectPredict: read tcp 172.17.0.3:49130->192.168.0.39:8555: i/o timeout
Got connection from ('192.168.0.39', 61609)
Closing connection
Got connection from ('192.168.0.39', 61672)
No objects detected
Closing connection
The other NVR I have running as my main at the moment also leverages CoreML; it's running on 10 cam streams. Do you think this is all just a congestion thing?
It sure looks like either a network disconnect or a Python adapter issue. I will add logging to the script. But the full, proper solution coming next week is a CoreML Go YOLO model that will be baked into firescrew, so no more adapter! I will retain the network functionality as an option. It will be faster, more reliable, and you won't have to use Docker, as long as you have ffmpeg installed.
Yea, the new implementation sounds great. Less overhead is always nice. Thanks for your work on this. A Go-based NVR has a bright future; most stuff out there now is built on web tech, except Blue Iris, but that's Windows-only and the UI is meh.
Thanks, I'll keep you posted.
Greetings, in the meantime I have added more logging to the CoreML adapter objectDetectServerCoreML.py; you can grab the latest copy from the repo.
Native CoreML support should be coming in the next week or two, and CUDA support will hopefully be added in the coming weeks as well.
Sounds good, it's up and running now. Will report back.
Ok, got it to happen again.
object detection:
2023-08-21 13:10:36 - Resizing image...
2023-08-21 13:10:36 - Running model...
2023-08-21 13:10:36 - Extracting results...
2023-08-21 13:10:36 - Preparing response
2023-08-21 13:10:36 - Sending response
2023-08-21 13:10:36 - Reading client data...
2023-08-21 13:10:36 - Resizing image...
2023-08-21 13:10:36 - Running model...
2023-08-21 13:10:36 - Extracting results...
2023-08-21 13:10:36 - Preparing response
2023-08-21 13:10:36 - Sending response
2023-08-21 13:10:36 - Reading client data...
2023-08-21 13:10:36 - Resizing image...
2023-08-21 13:10:36 - Running model...
2023-08-21 13:10:36 - Extracting results...
2023-08-21 13:10:36 - No objects detected
2023-08-21 13:10:36 - Reading client data...
2023-08-21 13:11:36 - Closing connection
firescrew:
17:10:30 [NOTICE] Inference avg: 25.000000ms, min: 22.000000ms, max: 30.000000ms
17:10:30 [NOTICE] Inference avg: 27.300000ms, min: 23.000000ms, max: 31.000000ms
17:10:31 [NOTICE] Inference avg: 26.100000ms, min: 22.000000ms, max: 30.000000ms
17:10:32 [NOTICE] Inference avg: 26.200000ms, min: 21.000000ms, max: 30.000000ms
17:10:33 [NOTICE] Inference avg: 27.400000ms, min: 22.000000ms, max: 39.000000ms
17:10:34 [NOTICE] Inference avg: 28.700000ms, min: 24.000000ms, max: 36.000000ms
17:10:34 [NOTICE] Inference avg: 29.800000ms, min: 23.000000ms, max: 40.000000ms
17:10:35 [NOTICE] Inference avg: 27.500000ms, min: 23.000000ms, max: 33.000000ms
17:10:36 [NOTICE] Inference avg: 27.600000ms, min: 23.000000ms, max: 31.000000ms
17:11:36 [ERROR] Error running objectPredict: read tcp 172.17.0.3:38510->192.168.0.39:8555: i/o timeout
Thanks for that, I'll get back to you.
Hey, I just added ONNX support with embedded models, meaning it should run much better, and no more Python! At the moment only a macOS arm64 binary is ready: https://github.com/8ff/firescrew/releases/download/onnx_test_release/firescrew.darwin.arm64
You will need to add these two params in the motion section:
"onnxModel": "yolov8n",
"onnxEnableCoreMl": true,
"motion": {
  "confidenceMinThreshold": 0.3,
  "lookForClasses": ["car", "truck", "person", "bicycle", "motorcycle", "bus", "cat", "dog", "boat"],
  "onnxModel": "yolov8n",
  "onnxEnableCoreMl": true,
  "embeddedObjectScript": "objectDetectServerYolo.py",
  "networkObjectDetectServer": "",
  "prebufferSeconds": 5,
  "eventGap": 10
},
The yolov8n model is the quickest, yolov8m is the most accurate, and yolov8s is in between.
Oh hells yea, that's exciting. Will spin it up as soon as I get back to my desk. Will update you soon.
So I tried running it locally on the Mac mini M1, but it's having trouble finding ffmpeg:
21:58:30 [ERROR] Unable to find ffmpeg/ffprobe binaries. Please install them
I verified it's installed and on my PATH; I installed it via brew. Any thoughts?
Can you please run these two commands in the same terminal:
which ffmpeg
which ffprobe
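As a side note, a quick way to check what a spawned process (as opposed to your interactive shell) will actually find on PATH is Python's `shutil.which`, which performs the same lookup as `which`:

```python
import shutil

# Returns the full path to each binary, or None if it is not on PATH.
# A stale shell session can make `which` succeed while a fresh process
# still fails, which is why restarting the shell fixed it here.
print(shutil.which("ffmpeg"))
print(shutil.which("ffprobe"))
```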
Sorry, disregard; I forgot to restart my zsh session lol. It's running now. But now I'm having trouble running the web server:
22:29:32 [ERROR] Error reading config file: open -s: no such file or directory
./firescrew -s /Users/shamirwehbe/Downloads/firescrew/media :8080
probably messing this up ^
Also, I added a clearer error message in the release here: https://github.com/8ff/firescrew/releases/download/onnx_test_release/firescrew.darwin.arm64
Ah looks like there was a bug there, please try again with this release: https://github.com/8ff/firescrew/releases/download/onnx_test_release/firescrew.darwin.arm64
Ok, perfect! Server and object detection are both running locally. Now I wait for someone to drive or walk by lol. Will let you know how it goes.
Thanks a lot for checking it out!
I can't seem to get object detection to work. I don't need to run objectDetectServerCoreML.py separately since it's embedded now, right?
Can you please paste the logs and the relevant config that you use?
{
  "cameraName": "Driveway",
  "deviceUrl": "rtsp://admin:PASSWORD@192.168.0.53:554/h264Preview_01_sub",
  "hiResDeviceUrl": "rtsp://admin:PASSWORD@192.168.0.53:554/h265Preview_01_main",
  "printDebug": false,
  "video": {
    "hiResPath": "/media",
    "recodeTsToMp4": true,
    "onlyRemuxMp4": true
  },
  "motion": {
    "confidenceMinThreshold": 0.3,
    "lookForClasses": ["car", "truck", "person", "bicycle", "motorcycle", "bus", "cat", "dog", "boat"],
    "onnxModel": "yolov8n",
    "onnxEnableCoreMl": true,
    "embeddedObjectScript": "objectDetectServerCoreML.py",
    "networkObjectDetectServer": "",
    "prebufferSeconds": 5,
    "eventGap": 10
  },
  "pixelMotionAreaThreshold": 50.0,
  "objectCenterMovementThreshold": 50.0,
  "objectAreaThreshold": 500.0,
  "streamDrawIgnoredAreas": true,
  "enableOutputStream": false,
  "outputStreamAddr": ":8040",
  "events": {
    "webhookUrl": "",
    "scriptPath": "",
    "slack": { "url": "" },
    "mqtt": {
      "host": "",
      "port": 0,
      "user": "",
      "password": "",
      "topic": ""
    }
  }
}
This is all that's present in the terminal window:
shamirwehbe@macmini firescrew % ./firescrew config.json
08:00:58 [INFO] **** CONFIG ****
08:00:58 [INFO] Print Debug: false
08:00:58 [INFO] Device URL: rtsp://admin:PASSWORD@192.168.0.53:554/h264Preview_01_sub
08:00:58 [INFO] Lo-Res Param Bypass: Res: 0x0 FPS: 0.00
08:00:58 [INFO] Hi-Res Param Bypass: Res: 0x0 FPS: 0.00
08:00:58 [INFO] Hi-Res Device URL: rtsp://admin:PASSWORD@192.168.0.53:554/h265Preview_01_main
08:00:58 [INFO] Video HiResPath: /media
08:00:58 [INFO] Video RecodeTsToMp4: true
08:00:58 [INFO] Video OnlyRemuxMp4: true
08:00:58 [INFO] Motion OnnxModel: yolov8n
08:00:58 [INFO] Motion OnnxEnableCoreMl: true
08:00:58 [INFO] Motion Embedded Object Script: objectDetectServerCoreML.py
08:00:58 [INFO] Motion Object Min Threshold: 0.300000
08:00:58 [INFO] Motion LookForClasses: [car truck person bicycle motorcycle bus cat dog boat]
08:00:58 [INFO] Motion Network Object Detect Server:
08:00:58 [INFO] Motion PrebufferSeconds: 5
08:00:58 [INFO] Motion EventGap: 10
08:00:58 [INFO] Pixel Motion Area Threshold: 50.000000
08:00:58 [INFO] Object Center Movement Threshold: 50.000000
08:00:58 [INFO] Object Area Threshold: 500.000000
08:00:58 [INFO] Ignore Areas Classes:
08:00:58 [INFO] Draw Ignored Areas: true
08:00:58 [INFO] Enable Output Stream: false
08:00:58 [INFO] Output Stream Address: :8040
08:00:58 [INFO] EVENTS CONFIG
08:00:58 [INFO] Events MQTT Host:
08:00:58 [INFO] Events MQTT Port: 0
08:00:58 [INFO] Events MQTT Topic:
08:00:58 [INFO] Events Slack URL:
08:00:58 [INFO] Events Script Path:
08:00:58 [INFO] Events Webhook URL:
08:00:58 [INFO] ****
08:01:00 [WARNING] OnlyRemuxMp4 is enabled but the stream codec is not h264 or h265. Your videos may not play in WebUI. Codec: hevc
08:01:05 [INFO] **** STREAM INFO ****
08:01:05 [INFO] Lo-Res Stream Resolution: 640x360 FPS: 24.92
08:01:05 [INFO] Hi-Res Stream Resolution: 3840x2160 FPS: 24.92
08:01:05 [INFO] *****
PASSWORD is just a placeholder for my actual password.
This occurred while running YOLOv8s with CoreML.