Closed: gadget-man closed this issue 8 months ago.
When I get home, I'll give it a try and report back.
@gadget-man So I just got done running a bunch of different clips through, day and night (front and back of vehicles). Roughly 30 different events and only ONE plate was missed. The new strategy seems to be working really well!
That's positive. It would be great to understand what sort of load it puts on CP.AI before it finds a valid plate. Can you check the logs from those 30 events - what sort of numbers are you seeing reported as the number of calls to the AI engine in the `Clearing event: X after X calls to AI engine` log entry? Also, for the one that was missed, could you share the debug log so I can try to work out what went wrong and whether it could be corrected?
I extracted most of the events. The log is here: https://gist.github.com/kyle4269/bee4e9694525a4cc79126671a26c0ee4 The event that it didn't find a plate for was 1706404447.427009-cae70x. Hope this all helps!
Ok, so if you filter for all the lines containing `Clearing event`, you'll see that most of the time the plate is recognised after just a few attempts, but there are quite a few occasions where it takes 15-20 attempts. However, for the event that fails, the script calls CP.AI over 60 times. I'm not sure if you have to pay for credits on CP.AI, but if you were using plate_recognizer, this would be a pretty big chunk of credits used for what was ultimately an unsuccessful recognition. For users with regular vehicle traffic, this could be a big problem.
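For anyone wanting to do the same filtering programmatically, here's a minimal sketch that counts AI-engine calls per event from a log, assuming only the `Clearing event: X after X calls to AI engine` line format quoted earlier in this thread (the helper name is illustrative, not part of the script):

```python
import re

# Hypothetical helper: count AI-engine calls per event, based on the
# "Clearing event: <id> after <n> calls to AI engine" line format
# quoted earlier in this thread.
CLEARING_RE = re.compile(
    r"Clearing event:\s*(?P<event_id>\S+)\s+after\s+(?P<calls>\d+)\s+calls to AI engine",
    re.IGNORECASE,
)

def calls_per_event(log_text: str) -> dict:
    """Return {event_id: number_of_AI_calls} for every clearing line found."""
    return {
        m.group("event_id"): int(m.group("calls"))
        for m in CLEARING_RE.finditer(log_text)
    }

sample = (
    "DEBUG - Clearing event: 1706404447.427009-cae70x after 62 calls to AI engine\n"
    "DEBUG - Clearing event: 1706462676.194771-43uwga after 4 calls to AI engine\n"
)
print(calls_per_event(sample))
```

Sorting the resulting dict by value makes the outlier events (like the 60+ call one) easy to spot.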
I think for your situation, the reason you have so many instances where the plate is recognised eventually, but only after 15-20 attempts might be because the plate is too small/not clear enough to start with. Based on the box size of a successful recognition, I would consider adding a minimum size to the license_plate attribute in Frigate config, so that it starts the recognition process a little later. Alternatively/as well, I’d check the license_plate_min_score to see if it would benefit from tweaking. If the number of CP.AI calls isn’t a concern, you might not bother.
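As a sketch of what that might look like in the Frigate config - assuming Frigate's standard object-filter keys (`min_area`, `min_score`) apply to the `license_plate` attribute on your setup; the values here are placeholders to tune against the box sizes of your own successful recognitions:

```yaml
# Hypothetical example - exact keys and values depend on your Frigate
# version and camera resolution; tune against your own bounding boxes.
objects:
  filters:
    license_plate:
      min_area: 2000   # placeholder: skip plates smaller than this (pixels)
      min_score: 0.7   # placeholder: threshold discussed in this thread
```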
To solve the problem of using loads of credits for a Frigate+ based recognition, I think I need to look again at whether the original `top_score` test gives a more reliable indication of whether the frame should be ignored. If not, perhaps we should add a config entry to optionally limit the number of attempts per event, i.e. if the script tries 20 times and a plate still hasn't been recognised, give up!
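A minimal sketch of that optional limit, assuming a per-event attempt counter (the names `MAX_ATTEMPTS`, `current_events` and `should_attempt` are illustrative, not the script's actual internals):

```python
# Illustrative sketch of an optional per-event attempt limit.
# Names here are hypothetical, not frigate_plate_recognizer's actual API.
MAX_ATTEMPTS = 20  # 0 could mean "no limit", preserving current behaviour

current_events = {}  # event_id -> AI calls made so far

def should_attempt(event_id: str) -> bool:
    """Return False once an event has used up its AI-call budget."""
    attempts = current_events.get(event_id, 0)
    if MAX_ATTEMPTS and attempts >= MAX_ATTEMPTS:
        return False  # give up: plate was never recognised for this event
    current_events[event_id] = attempts + 1
    return True
```

The counter would naturally be reset (or the entry deleted) when the event is cleared.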
CP.AI doesn't have a limit of credits; the only thing it does when it gets too many requests at once is refuse them. I posted an example of that a few comments back.
Currently the license_plate min_score is 0.5, which now that I think of it is probably why it's sending so many requests. When I get to work today, I'm going to enable bounding boxes again and see what the scores look like, then increase that min_score.
I think that would be a good idea, especially for the Plate Recognizer users. On a regular street, like you said, the credits could be eaten up pretty quickly.
I've added another debug line to the existing code (no other changes). This should help us understand whether there is any correlation between the main `after_data['top_score']` and the likelihood of a successful plate recognition (I suspect not). It would be really helpful if you could run several plates through it and share the gist again - your results are quite different to my own.
I also just had an Amazon van drive up to my drive and stop - the angle of the van meant that Frigate+ recognized a number plate, but it was at such an angle that the AI was never going to be able to read it. As a result, the script burned through 150 Plate Recognizer credits trying to detect an impossibility!
I changed my min_score in Frigate and Plate Recognizer to 0.7, and no plate has been detected at all.
frigate_plate_test | 2024-01-28 12:24:45,145 - __main__ - DEBUG - Getting snapshot for event: 1706462676.194771-43uwga, Crop: True
frigate_plate_test | 2024-01-28 12:24:45,145 - __main__ - DEBUG - event URL: http://192.168.1.32:5000/api/events/1706462676.194771-43uwga/snapshot.jpg
frigate_plate_test | 2024-01-28 12:24:45,192 - __main__ - DEBUG - Getting plate for event: 1706462676.194771-43uwga
frigate_plate_test | 2024-01-28 12:24:45,192 - __main__ - DEBUG - Maximum number of AI attempts reached for event 50: 0
frigate_plate_test | 2024-01-28 12:24:45,208 - __main__ - DEBUG - no license_plate attribute found in event attributes
Sorry, sloppy coding. Updated now. Please can you set them back to what they were and try again for testing purposes?
OK! Want me to include `max_attempts: 20` in my config?
No, please don't change config at the moment.
OK, with your latest update and both configs the same, I'm seeing anywhere from 4 to 15 calls. Want me to let it go for, say, 10 events and post the logs?
Yes please, that would be great. Please can you do it as a gist again?
Here is the latest: https://gist.github.com/kyle4269/c550019f4046896dbb8289e6875cba5a Headed to work, so give me an hour or so to start testing again.
OK, that's great. It's clear from these results that there's no correlation between the `top_score` attribute and whether the plate is likely to be detected. In that case, I think `max_attempts` is the route to go. I'd recommend you set yours to around 20. Please also now try tweaking the min_scores to see if you can reduce how often it needs 10-15 attempts to get a successful plate.
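The key name `max_attempts: 20` comes up earlier in this thread; where it sits in frigate_plate_recognizer's config.yml is an assumption here, so check your own file's layout:

```yaml
# Hypothetical placement - the key name comes from this thread, but
# its nesting within config.yml is assumed.
frigate:
  max_attempts: 20  # give up on an event after 20 unsuccessful AI calls
```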
After some tweaking of min_score, my daytime API calls have dropped to no more than 4. My night plates are averaging 15 API calls. I have my min_score set to 0.75, and looking back at all the night images, the majority of the bounding boxes are reporting 73-80%.
I've managed to reduce my night API calls down to 10 by adjusting the min_score for a vehicle also. I think the program is working as expected. This is strictly a ME issue.
Resolved as part of #32
I think there's an issue with the way the script currently checks the top_score, here: https://github.com/ljmerza/frigate_plate_recognizer/blob/1782e4638b35a2291fe141bf809551d3f6b0e0fa/index.py#L211

I've noticed several events that never get passed to plate_recognizer. This happens when the top_score is flagged by Frigate BEFORE the object enters a zone being monitored by frigate_plate_recognizer. In my situation, I detect cars driving down a road, but only want plate detection when they pull into my drive. Sometimes, the best score from Frigate will come as the car slows down to turn into the drive. The implication is that by the time frigate_plate_recognizer gets the first MQTT message that passes the first few `check_invalid_event` tests (matching zone, matching camera), Frigate is already reporting the top_score, and so the very first valid MQTT event fails the `top_score` test.

I hadn't noticed this before because in 1.7.X, configurations with `frigate_plus` skipped this test.

I think there should be an additional test so that, even if `before_data['top_score'] == after_data['top_score']`, if it's the first time this event_id has been processed, it should still be passed to `get_plate`. I'm not sure how best to handle this, otherwise I'd have proposed a PR!