one thing to be noted here is that the UI is being rewritten for 0.14 (work in progress), so the events page won't exist as it does today.
0.13 isn't even out and you're planning that far ahead! I'd love to work with you on envisioning what the UX could be. I don't claim the frontend is a strength of mine.
to be clear it is a collaborative effort between myself, Blake, and @hawkeye217
there may be some outside help on the UX & design side as well
It was the collective "you". I'll make sure I base my work off of the 0.14 branch moving forward, and if I implement any POC I'll do it in web-new.
I think what you have laid out here is a strong backend foundation. I'm sure there are some good use cases here even if we haven't come up with them yet. The best way to figure them out is to start experimenting.
We've got progress! I've got the backend mostly hooked up locally now. Still need to add the Search API and frontend components. Embedding models are relatively fast on my dev machine; I still need to test on embedded devices.
[2023-12-16 07:24:50] frigate.chromadb INFO : Embedded thumbnail for 1702729481.753491-7a7xb6 on office in 0.0683 seconds
[2023-12-16 07:24:55] frigate.chromadb INFO : Generated description for 1702729481.753491-7a7xb6 on office in 5.6522 seconds: A man with long hair and a beard is sitting in a chair looking to the right
[2023-12-16 07:24:56] frigate.chromadb INFO : Embedded description for 1702729481.753491-7a7xb6 on office in 0.2163 seconds
Chroma was a bit harder than I wanted. It requires a version of SQLite that Python 3.9 and Debian Bullseye don't provide. I went down a path of trying to upgrade the Docker image, but there is A LOT involved in that. I found a solution that lets us use a pre-compiled binary and swap it in at Python import time.
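For anyone curious, a minimal sketch of that kind of swap, assuming the pre-compiled binding comes from the pysqlite3-binary package (whether that exact package is what's used here is an assumption):

```python
# Minimal sketch: replace the stdlib sqlite3 module with a pre-compiled binding
# that bundles a newer SQLite, before Chroma is imported.
# Assumes: pip install pysqlite3-binary
import sys

__import__("pysqlite3")
sys.modules["sqlite3"] = sys.modules.pop("pysqlite3")

import chromadb  # must happen after the swap so Chroma's SQLite version check passes
```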
Who will be able to benefit from it?
Users of the default template?
Coral users?
frigate+ users?
anyone could use this regardless of what model / hardware they are using for object detection
Great !!!
is this still work in progress? can I use this in my Frigate installation today?
You can see the PR for it. There is no UI for it on 0.14 currently, but the previous PR based on 0.13 could be used.
Thanks @NickM-27... how do I upgrade my version to the PR version? I am running Frigate in Docker. My system version shows 0.13.2-6476F8A... appreciate your help.
You'd need to build it yourself.
Semantic Search for Events
When I started working on #8959, I was initially looking at making it near-real-time so that additional labels and descriptions could be sent along with the MQTT messages, but the latency of LLMs prevents that. The PR then shifted to processing descriptions for events after they ended.
My interest has shifted towards adding vector similarity search capability to Frigate. The end result would be having a search box on the events page that would be able to take in free form text. @blakeblackshear gave some good examples in an earlier comment:
I've identified a couple new dependencies and thought through a potential implementation that I'd like some feedback on.
Suggested Default Embeddings Model:
CLIP ViT-B/32 or ResNet50 (need to test speed/performance). CLIP is selected because it is a multi-modal embeddings model that embeds images and text into the same embedding space. It was trained on public image/caption pairs, so it can be used directly on thumbnails of finished events and return results based on text queries.
I found an ONNX conversion for CLIP that will allow us to run this model with just the onnxruntime dependency. I plan to use:
- ViT-B/32 models: Image Model, Text Model
- ResNet50 models: Image Model, Text Model
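To illustrate why onnxruntime alone is enough here, a rough sketch of embedding a thumbnail and a text query into the shared CLIP space; the model file names, input shapes, and preprocessing below are assumptions that depend on the particular ONNX export:

```python
# Rough sketch: embed a thumbnail and a free-form text query into the shared
# CLIP space with onnxruntime only. File names, input shapes, and preprocessing
# are assumptions -- they depend on the specific ONNX export used.
import numpy as np
import onnxruntime as ort

image_session = ort.InferenceSession("clip_image_vit_b32.onnx")  # hypothetical file name
text_session = ort.InferenceSession("clip_text_vit_b32.onnx")    # hypothetical file name

def embed(session: ort.InferenceSession, inputs: np.ndarray) -> np.ndarray:
    """Run the model and L2-normalize so cosine similarity becomes a dot product."""
    input_name = session.get_inputs()[0].name
    features = session.run(None, {input_name: inputs})[0]
    return features / np.linalg.norm(features, axis=-1, keepdims=True)

# Dummy stand-ins: real code would run CLIP preprocessing on the event thumbnail
# and the CLIP tokenizer on the search text.
thumbnail = np.zeros((1, 3, 224, 224), dtype=np.float32)  # assumed image input shape
query_tokens = np.zeros((1, 77), dtype=np.int64)          # assumed tokenized text shape

similarity = float(embed(image_session, thumbnail) @ embed(text_session, query_tokens).T)
```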
New Core Dependencies
onnxruntime
Used to generate embeddings from CLIP
ChromaDB
An open source embeddings database which allows you to store metadata along with embeddings and do vector similarity searches. It supports multiple embeddings models and allows you to implement your own embedding functions (more on this later). Unlike many other options, the base installation is relatively lightweight without a huge dependency tree. Most notably, it does not install Torch or TorchVision out of the box, though it can support models that require them, such as sentence-transformers. We could potentially offer a container that has those dependencies and allows users to self-select an embeddings model.
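As a concrete sketch of that usage (the storage path, collection name, and metadata fields below are made up for illustration, not Frigate's actual schema):

```python
import chromadb

client = chromadb.PersistentClient(path="/config/chroma")  # hypothetical storage path
collection = client.get_or_create_collection(name="event_thumbnails")  # illustrative name

# Store one event's thumbnail embedding together with searchable metadata.
thumbnail_embedding = [0.0] * 512  # stand-in for a real CLIP image embedding
collection.add(
    ids=["1702729481.753491-7a7xb6"],
    embeddings=[thumbnail_embedding],
    metadatas=[{"camera": "office", "label": "person"}],
)

# Free-form search: embed the query text (e.g. with the CLIP text model) and
# ask Chroma for the nearest stored thumbnails.
query_embedding = [0.0] * 512  # stand-in for a real CLIP text embedding
results = collection.query(query_embeddings=[query_embedding], n_results=10)
print(results["ids"], results["distances"])
```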
Google Gemini
Gemini will be able to provide a more detailed description that will undoubtedly outperform CLIP's embeddings. Currently only images are supported in multi-modal prompts, but video support is coming soon. Initially, I would like to generate descriptions off of the thumbnail and use ChromaDB's built-in ONNX sentence embeddings encoder based on the all-MiniLM-L6-v2 model. Google Gemini also provides an API for its own embeddings generator, but using this model will allow us to support multiple methods of adding descriptions to events (API, third-party integrations, etc.).
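A rough sketch of that flow, assuming the google.generativeai client and Chroma's bundled ONNXMiniLM_L6_V2 encoder (the prompt wording, model name, and file path are illustrative, not what Frigate will necessarily ship):

```python
# Sketch: generate an event description from a thumbnail with Gemini, then
# embed it with Chroma's built-in ONNX all-MiniLM-L6-v2 encoder.
import google.generativeai as genai
from PIL import Image
from chromadb.utils.embedding_functions import ONNXMiniLM_L6_V2

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-pro-vision")  # multi-modal model available at the time

thumbnail = Image.open("/path/to/event-thumbnail.jpg")  # hypothetical path
response = model.generate_content(
    ["Describe the subject of this security camera snapshot in one sentence.", thumbnail]
)
description = response.text

# Because descriptions are embedded with the same local sentence encoder, they can
# come from any source (Gemini, the API, third-party integrations) and still be searchable.
embed = ONNXMiniLM_L6_V2()
description_embedding = embed([description])[0]
```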
Next Steps
- POST /api/events/<id>/description allowing external systems to create descriptions (see the example below)
- semantic_search and gemini config options
- .reindex_events file in the /config directory. It takes a while though... this is on a 12 core CPU: Embedded 10987 thumbnails and 851 descriptions in 730.0870163440704 seconds
- /events endpoint to allow for similarity search as well

Thanks in advance for any feedback or thoughts.
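For anyone wanting to wire an external system up to the proposed description endpoint, an illustrative call (the host and JSON payload shape are assumptions; only the route comes from the list above):

```python
# Hypothetical example of an external system attaching a description to an event.
import requests

event_id = "1702729481.753491-7a7xb6"
resp = requests.post(
    f"http://frigate.local:5000/api/events/{event_id}/description",
    json={"description": "A man with long hair and a beard is sitting in a chair"},
    timeout=10,
)
resp.raise_for_status()
```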