DeFlanko opened 2 weeks ago
Ahh is this why?
2024-11-10T21:19:52.228038Z INFO blue_candle: Starting Blue Candle object detection service
2024-11-10T21:19:52.228196Z INFO blue_candle::system_info: CPU | GenuineIntel | 13th Gen Intel(R) Core(TM) i7-13700 | 16 Cores | 24 Logical Cores
2024-11-10T21:19:52.265956Z INFO blue_candle::system_info: Cuda GPU: [0] | NVIDIA GeForce RTX 3070 | CC 8.6 | 5888 Cores | 4947/8192 MB
2024-11-10T21:19:52.267272Z INFO blue_candle::detector: Detector is initialized for GPU
2024-11-10T21:19:52.579108Z INFO blue_candle::detector: Test detection
2024-11-10T21:19:52.733863Z INFO blue_candle: Server inference startup test, processing time: 154.6567ms, inference time: 69.5835ms
2024-11-10T21:19:52.734054Z INFO blue_candle: Starting server, listening on 0.0.0.0:32169
Ok, so because CPAI and BC use the same port by default -- conflict...
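A quick way to confirm that kind of conflict before launching is to probe the port first. A minimal sketch in Rust (the language Blue Candle is written in); `port_taken` is a hypothetical helper of mine, not part of Blue Candle:

```rust
use std::net::TcpListener;

// Hypothetical helper: binding fails with "address already in use"
// when another service (e.g. CPAI) is already listening on the port.
fn port_taken(port: u16) -> bool {
    TcpListener::bind(("127.0.0.1", port)).is_err()
}
```

Calling `port_taken(32168)` while CPAI is still running should return `true`, which is exactly the situation above.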
@DeFlanko Welcome to Blue Candle! Yes, if you still run CPAI there will be a conflict unless you change the port, as you did. Maybe I should catch that error and print a helpful message about changing the port or stopping other applications that use the same port.
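That error is straightforward to intercept at bind time. A rough sketch (my own, not Blue Candle's actual startup code) using the standard library's `ErrorKind::AddrInUse`; `bind_or_explain` is a hypothetical name, not a Blue Candle API:

```rust
use std::io::ErrorKind;
use std::net::TcpListener;

// Sketch: turn the generic bind failure into an actionable message.
fn bind_or_explain(addr: &str) -> Result<TcpListener, String> {
    match TcpListener::bind(addr) {
        Ok(listener) => Ok(listener),
        // The port-conflict case gets a targeted hint.
        Err(e) if e.kind() == ErrorKind::AddrInUse => Err(format!(
            "{addr} is already in use (is CPAI still running?). \
             Stop the other service or pass --port <other-port>."
        )),
        // Anything else is reported as-is.
        Err(e) => Err(format!("failed to bind {addr}: {e}")),
    }
}
```

Per the logs above, Blue Candle listens on 32168 by default, so this branch would fire exactly in the CPAI-still-running case.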
So I decided to stop CPAI for a moment, run BC with the default port 32168, and re-analyse some clips I had to see if it just "works"...
C:\Users\defla>"C:\Program Files\Blue Candle (Ai)\blue_candle-0.8.0-win-cuda-12-CC-86\blue_candle.exe" --port 32168
2024-11-10T21:31:26.900511Z INFO blue_candle: Starting Blue Candle object detection service
2024-11-10T21:31:26.900670Z INFO blue_candle::system_info: CPU | GenuineIntel | 13th Gen Intel(R) Core(TM) i7-13700 | 16 Cores | 24 Logical Cores
2024-11-10T21:31:26.938446Z INFO blue_candle::system_info: Cuda GPU: [0] | NVIDIA GeForce RTX 3070 | CC 8.6 | 5888 Cores | 1052/8192 MB
2024-11-10T21:31:26.939806Z INFO blue_candle::detector: Detector is initialized for GPU
2024-11-10T21:31:27.213466Z INFO blue_candle::detector: Test detection
2024-11-10T21:31:27.369697Z INFO blue_candle: Server inference startup test, processing time: 156.0953ms, inference time: 69.7755ms
2024-11-10T21:31:27.369854Z INFO blue_candle: Starting server, listening on 0.0.0.0:32168
2024-11-10T21:32:27.997231Z INFO blue_candle::server_stats: Stats: Total Req: 65, Max Req/Sec: 4, Min Inference : 4.8557ms, Max Inference: 11.884ms, Min Processing: 29.5169ms, Max Processing: 380.1632ms, Min Request: 47.9118ms, Max Request: 399.2897ms
2024-11-10T21:33:35.585710Z INFO blue_candle::server_stats: Stats: Total Req: 155, Max Req/Sec: 5, Min Inference : 4.4041ms, Max Inference: 15.4212ms, Min Processing: 25.4712ms, Max Processing: 380.84ms, Min Request: 31.3857ms, Max Request: 399.2897ms
2024-11-10T21:34:40.948590Z INFO blue_candle::server_stats: Stats: Total Req: 159, Max Req/Sec: 1, Min Inference : 4.4041ms, Max Inference: 15.4212ms, Min Processing: 25.4712ms, Max Processing: 380.84ms, Min Request: 31.3857ms, Max Request: 399.2897ms
2024-11-10T21:35:44.556195Z INFO blue_candle::server_stats: Stats: Total Req: 163, Max Req/Sec: 1, Min Inference : 4.4041ms, Max Inference: 15.4212ms, Min Processing: 25.4712ms, Max Processing: 383.3255ms, Min Request: 31.3857ms, Max Request: 410.9926ms
2024-11-10T21:36:51.715765Z INFO blue_candle::server_stats: Stats: Total Req: 165, Max Req/Sec: 1, Min Inference : 4.4041ms, Max Inference: 15.4212ms, Min Processing: 25.4712ms, Max Processing: 383.3255ms, Min Request: 31.3857ms, Max Request: 410.9926ms
2024-11-10T21:38:03.485893Z INFO blue_candle::server_stats: Stats: Total Req: 199, Max Req/Sec: 4, Min Inference : 4.4041ms, Max Inference: 15.4212ms, Min Processing: 25.4712ms, Max Processing: 401.1368ms, Min Request: 31.3857ms, Max Request: 430.5489ms
I replayed some clips and noticed the same orange boxes in BI... Does this mean it's working?
If so, how do I (or can I) use the models that CPAI was using?
I assume I can't right now...
I added a FAQ here:
https://github.com/xnorpx/blue-candle/discussions/168
There is a discussion there on how to use custom models, but I have yet to implement it.
Same Issue as reported here: #161
Just downloaded it and attempted to run it without parameters.
Environment Variables:
Note: CodeProject.AI (CPAI) is still running -- I was hoping to do a momentary flip to Blue Candle (BC) to test it out, then flip back as necessary, for a smoother transition in Blue Iris (BI).