Hi @steemsjo,
there are several reasons why it is hard to have a single version that supports all CPUs:
Also, with a single generic build, newer CPUs would miss some important optimizations that SSE 4.2 enables.
That's why I'm proposing a variant of the add-on where everything is compiled on the target system.
One benefit is that the compiled libraries are optimized for the CPU of the target system, old or new.
Maybe that variant should not be named frigate-oldcpu but frigate-optimized-for-target-cpu or frigate-with-deps-compiled-from-source or something, as that may even provide better-optimized binaries for newer CPUs too.
Thanks for the fast reply. It seems I skipped that link recently, but I remember I was looking at it weeks ago. It's nice to see you got something working. My knowledge of Python and the rest is non-existent, so I depend a bit on people like you who find the time to do this, so my sincere thanks for that.
Now, it seems you got it working for the add-on (which I used for some weeks). But does this solution also apply to a standalone Docker container using docker-compose? IMO the integrations in Hass are nice, but somehow I still like Frigate, Deepstack, MQTT etc. loosely coupled from it and running in separate Docker instances.
But does this solution also apply to a standalone Docker container using docker-compose?
It should. Just clone my repo and build the docker image locally then reference it in your docker-compose configuration, e.g.:
git clone https://github.com/pdecat/frigate-hass-addons.git frigate-hass-addons-pdecat
cd frigate-hass-addons-pdecat
git checkout oldcpu
cd frigate_oldcpu
docker build . -t pdecat/frigate:0.9.4-amd64-oldcpu
Then use pdecat/frigate:0.9.4-amd64-oldcpu as the image name in your configuration.
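(For reference, the relevant part of a docker-compose.yml would then be roughly the following; the service name, volume and port here are placeholders, not a complete Frigate configuration:)

services:
  frigate:
    image: pdecat/frigate:0.9.4-amd64-oldcpu
    restart: unless-stopped
    volumes:
      - ./config:/config
    ports:
      - "5000:5000"

Then start it with docker-compose up -d as usual.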
Worked fine with Q6600 on an Intel DG31RP mobo (Debian 11) after I added a 32GB swap.
So, thank you very much!
Amazing! I installed it on an Intel(R) Atom(TM) CPU D525 @ 1.80GHz (2011). I added 32 GB of swap memory and it works! It takes around 6 hours or more, but I finally have Frigate working again! To avoid saturating the CPU I use the build command with the --cpu-quota option:
docker build . --cpu-quota=300000 -t pdecat/frigate:0.9.4-amd64-oldcpu
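(For anyone else hitting out-of-memory failures during the build, a common way to add a temporary 32 GB swap file on Debian/Ubuntu is roughly the following; adjust the size and path to your system:)

sudo fallocate -l 32G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile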
I have the same problem and error message as steemsjo, running on an AMD 6174 (no AVX support); I guess the key line is /usr/lib/python3/dist-packages/tflite_runtime/interpreter.py. I will be migrating to AVX-supporting AMDs,
but I'm a little perplexed, since AMD Bulldozer (6200 series) and Abu Dhabi (6300 series) both have full AVX support, so steemsjo's 6378 and 6234 Opterons should both support AVX (the 6378 even AVX 1.1).
Is only AVX support necessary, or is AVX2, AVX-512 or even AVX-VNNI required? Does anybody know?
Turns out the issue may not be just with AVX instructions, but more generally with all CPU instructions mandated by SSE4.2.
What my PR notably does is remove the -msse4.2 flag that is hard-coded before the compilation of the libedgetpu library. This allows optimizing it to use the CPU instructions that are actually available on the target system.
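(Purely as an illustration of the idea, not the actual change in the PR: removing such a hard-coded flag from a build file amounts to something like the following, where the file name is hypothetical and the real flag lives in libedgetpu's build scripts:)

sed -i 's/-msse4\.2//g' Makefile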
Well, I made a mistake. I was apparently running on 2x AMD Opteron 6174 12-core 2.2 GHz, which don't have AVX. I have a Supermicro 4-node server, and each node has different CPUs. I switched everything over to the node with 2x AMD Opteron 6378 16-core 2.4 GHz, which has AVX, and since then it has been working 100% perfectly. I have since bought myself 2x AMD Opteron 6380 16-core 2.5 GHz and replaced the 6174s, and that's also working as expected, which makes sense as the 6380 is just a faster version of the 6378.
So basically, if your motherboard supports the 6380, buy that one. They go for around $35 on eBay, which is cheap for the performance they deliver. I have Frigate configured on 4 cores and it averages around 295 ms inference speed.
And as pdecat is posting while I'm typing this: it's also the SSE4.2 instruction set that is important. The 6174 only has SSE4a. The newer and faster 6380 has SSE4a, SSE4.1 and SSE4.2.
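(To see which of these instruction sets a given CPU actually reports on Linux, one way is to filter the flags line of /proc/cpuinfo:)

grep -m1 '^flags' /proc/cpuinfo | tr ' ' '\n' | grep -E '^(sse4_1|sse4_2|sse4a|avx|avx2)$'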
Great, thanks for the clarification!! I just snapped up a couple of 6278s, so they should work.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
I apologise in advance if I am doing something stupid. I am new to Docker and have limited programming skills.
I am trying to compile a Frigate Docker container as per the above instructions from @pdecat for an old Phenom II based machine that I have, which is running Ubuntu 20.04. The first time I tried, I successfully compiled the Frigate Docker container, but after a while it stopped recording video so I decided to rebuild the container. However, I have tried a dozen times and it fails to rebuild.
I enclose a file with the Docker build terminal output. As far as I can see there appear to be compiler errors, but I don't know why.
I would really appreciate any help; the output is attached as error.txt.
It should. Just clone my repo and build the docker image locally then reference it in your docker-compose configuration, e.g.:
git clone https://github.com/pdecat/frigate-hass-addons.git frigate-hass-addons-pdecat
cd frigate-hass-addons-pdecat
git checkout oldcpu
cd frigate_oldcpu
docker build . -t pdecat/frigate:0.9.4-amd64-oldcpu
Then use pdecat/frigate:0.9.4-amd64-oldcpu as the image name in your configuration.
Thank you!
This allows me to use Frigate on an old ProLiant N54L with 4 GB of RAM. It works without any modification (just change the tag of the build, as at the time of writing the Frigate version that is used is 0.10.0).
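(For example, tagging the local build to match the Frigate version it currently contains:)

docker build . -t pdecat/frigate:0.10.0-amd64-oldcpu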
When I build the docker image locally, I get the following error:
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. pycoral 2.0.0 requires tflite-runtime==2.5.0.post1, but you have tflite-runtime 2.5.1 which is incompatible.
If I try running the container, it keeps crashing. Not really sure how to resolve this. Anyone else have this issue?
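(Not a confirmed fix, but since pycoral 2.0.0 pins tflite-runtime==2.5.0.post1, one thing to try is forcing that exact version inside the image before starting it, e.g.:)

pip install --force-reinstall 'tflite-runtime==2.5.0.post1'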
But does this solution also apply to a standalone Docker container using docker-compose?
It should. Just clone my repo and build the docker image locally then reference it in your docker-compose configuration, e.g.:
git clone https://github.com/pdecat/frigate-hass-addons.git frigate-hass-addons-pdecat
cd frigate-hass-addons-pdecat
git checkout oldcpu
cd frigate_oldcpu
docker build . -t pdecat/frigate:0.9.4-amd64-oldcpu
Then use pdecat/frigate:0.9.4-amd64-oldcpu as the image name in your configuration.
Sorry for my lack of understanding, but how do I do this? "Then use pdecat/frigate:0.9.4-amd64-oldcpu as the image name in your configuration."
When I build the docker image locally, I get the following error:
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. pycoral 2.0.0 requires tflite-runtime==2.5.0.post1, but you have tflite-runtime 2.5.1 which is incompatible.
If I try running the container, it keeps crashing. Not really sure how to resolve this. Anyone else have this issue?
I'm also hitting this. @pdecat Any idea how to resolve this dependency conflict? I actually don't even need pycoral, as I'm going to be running a tensorflow setup with a GTX 960.
Also, cheers for doing the legwork to get older CPUs working. It's greatly appreciated.
~There's no requirement for this if you're running the tensorrt setup (and it would be a separate container anyway)~
When running the trt-models setup
docker run --gpus=all -e USE_FP16=False --rm -it -v "$(pwd)"/trt-models:/tensorrt_models -v "$(pwd)"/tensorrt_models.sh:/tensorrt_models.sh nvcr.io/nvidia/tensorrt:22.07-py3 /tensorrt_models.sh
as stated here:
https://docs.frigate.video/configuration/detectors#nvidia-tensorrt-detector
I get the error:
ERROR: This container was built for CPUs supporting at least the AVX instruction set, but the CPU detected was AMD Phenom(tm) II X6 1100T Processor, which does not report support for AVX. An Illegal Instrution exception at runtime is likely to result. See https://en.wikipedia.org/wiki/Advanced_Vector_Extensions#CPUs_with_AVX .
How do you run standard Frigate with the tensorrt setup without AVX support?
I was incorrect; AVX instructions are required no matter what version of Frigate or which detector is used.
I got Frigate working on my late-2011 AMD Athlon II X4 651K (lacks AVX) with tensorrt running on an NVIDIA 1660S a year ago. Unfortunately I had to wipe my setup; now I'm running into this same error and I can't remember how I got around generating / pre-processing the trt models without AVX. I know it's possible. I will report back if I have any success again. --Edit:
Finally got it! This is definitely not what I did a year ago, but it works. I'm a bit out of my league here but I'll try to explain how I did it well enough for others to follow or make sense of. In quick summary:
1: On my old (Athlon) computer I built a compatible (no-AVX) image of 13.2 by downloading and updating @pdecat's frigate_oldcpu fork in a few spots to use the 13.2 version (I assumed this was possible since someone else did it for 13.1 elsewhere; I can't find the link at the moment). Built that, ran it, and was able to use CPU detection. Enabling the tensorrt detector gave a "no tensorrt object" error.
2: On my main computer, an AMD Ryzen 9 7900X & 3090, I replaced the 3090 with my 1660S, did a fresh Ubuntu installation, then pulled and ran the same image I built in step 1 with the tensorrt detector enabled. This built the trt (yolov7-tiny-416.trt) file with no AVX warning errors, and everything was working. I copied the entire config folder (containing the .trt file) onto a flash drive for transfer over to the old PC. There was a file linked to the .trt file that wouldn't copy; omitting that turned out not to be a problem.
3: Back on my old computer I plugged the 1660S in again and updated the config folder with the .trt file generated on my other, AVX-capable computer. Finally I wasn't getting the "no tensorrt object" error. I got a bunch of others that were easy to work out, but the main issue (generating the trt models without AVX) was fixed.
4: Like I said, there were a bunch of other errors to deal with; most annoyingly I had a numpy version error (requires >=1.20; you have 1.19.5) during import matplotlib.pyplot as plt that would only come up with tensorrt enabled. Days of trial and error later I found out that the custom image I built in step 1 was creating a numpy folder in .../docker/overlay2/gibberish/something/python3.9/dist-packages/ ; two locations with different "gibberish" routes had a numpy folder, one with 1.19.5 and one with 1.23.5. The 1.19.5 copy was being found first and looks to be associated with the tflite requirements. I wasn't using it for tensorrt, so deleting that numpy folder cleared my matplotlib import error and tensorrt was working! Woohoo! (See the sketch after this comment for one way such duplicate copies might be located.)
Note: I swear I was able to generate the models on my 1660S with the old v12.x versions (pdecat'd, of course); the v13 updates seem to require NVIDIA containers that require AVX, so there is no way around it except generating the models on a system with an AVX-capable CPU. I read that the hardware that generates the model affects the model, so I transferred my 1660S over; later I may generate them with my 3090, port them over and see if that messes things up... Current results seem as expected: not perfect, but definitely not working improperly. The CPU that generates the trt file shouldn't matter anyway; once the trt file is in use with Frigate, the CPU only needs to feed frames to the GPU.
If anyone wants I can try to clean this summary up later and make it more useful to others as clueless as myself. I've been up too late tonight though!
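(For anyone trying to reproduce step 4, a hypothetical way to locate duplicate numpy copies under Docker's default storage directory is something like the following; the overlay2 path may differ on your system:)

sudo find /var/lib/docker/overlay2 -path '*/dist-packages/numpy/version.py' -exec grep -H 'version' {} \;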
Found the original problem and, with the support of @pdecat, it was working. But does it follow the latest updates from blakeblackshear/frigate? Is it not possible to build in a switch in the appropriate files, where tflite is compiled, to support AVX or not?
I got an (old) server with an Opteron 6378 16-core and an Opteron 6234 12-core which don't support AVX, so I can't use this now (besides older versions), neither in Home Assistant as an add-on, nor as a Docker container. I'm planning on using a dGPU for hardware acceleration, but I can't do that yet because my CPUs block the usage of the software.
The goal would be: just to have a blakeblackshear/frigate Docker image that is compatible out of the box with non-AVX CPUs.
Running blakeblackshear/frigate:0.9.4-amd64 gives: