I think the problem is that the already-blurred frame gets blurred again for the next detection, and since the frame memory reuses previous detections, the same area ends up blurred twice or three times wherever the boxes overlap.
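Roughly the pattern I suspect is happening (a minimal sketch with OpenCV, not the actual code from blurrer.py; the box format and kernel size are placeholders): each detection blurs its region of the already-modified frame in place, so any pixels covered by two boxes get blurred twice.

```python
import cv2

def blur_detections_in_place(frame, detections, kernel=(25, 25)):
    # detections: list of (x_min, y_min, x_max, y_max) boxes
    for (x_min, y_min, x_max, y_max) in detections:
        roi = frame[y_min:y_max, x_min:x_max]
        # the ROI may already contain blurred pixels from a previous box
        frame[y_min:y_max, x_min:x_max] = cv2.GaussianBlur(roi, kernel, 0)
    return frame
```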
I think merging all detections per frame would be better, but that means the blur input becomes a non-contiguous shape.
In an image processing application, I would create a layer with a blurred copy of the original image as the background, then cut the faces and plates out of the original so the blurred background shows through the holes.
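Something like this is what I mean (a minimal sketch with OpenCV/numpy under the same placeholder box format as above): blur the whole frame once, build a single mask from all detections, and copy the blurred pixels back only where the mask is set, so overlapping boxes simply merge and nothing is blurred twice.

```python
import cv2
import numpy as np

def blur_via_merged_mask(frame, detections, kernel=(25, 25)):
    # blur the full frame exactly once ("background layer")
    blurred = cv2.GaussianBlur(frame, kernel, 0)

    # single-channel mask: 255 inside any detection box, 0 elsewhere
    mask = np.zeros(frame.shape[:2], dtype=np.uint8)
    for (x_min, y_min, x_max, y_max) in detections:
        cv2.rectangle(mask, (x_min, y_min), (x_max, y_max), 255, thickness=-1)

    # "cut holes" into the original: take blurred pixels wherever masked
    out = frame.copy()
    out[mask == 255] = blurred[mask == 255]
    return out
```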
How does the code involving the cv2 functions work? I.e.: https://github.com/tfaehse/DashcamCleaner/blob/f2151bf5e4b6c5845de30f6637d1ec3a49b05091/dashcamcleaner/src/blurrer.py#L64-L94
See https://github.com/tfaehse/DashcamCleaner/pull/35#issuecomment-1208836675 for the before/after difference (using the fixed frame memory code from after #33).
Examples, each using the same frame: frame_memory == 0 on the left, frame_memory == 1 in the middle, frame_memory == 2 on the right.
There is a hard, obvious border that aligns with the pixel grid.
The same happens with plates: