xmfcx opened 3 months ago
Ultimately, we decided not to use autodistill anymore, because it does not offer anything beyond the original Grounding DINO and SAM for our purposes.
Instead, we used Grounding DINO and SAM from their original repositories and added an image classification method, OpenCLIP, to validate the Grounding DINO results. The working scheme is as follows:
Project Link: https://github.com/leo-drive/rosbag2_anonymizer
In addition to these, we want to add one more validation step. The new validation method will check whether certain objects should be inside other objects or not. For example, a license plate should be inside a car but should not be inside a human.
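A minimal sketch of what such a containment check could look like is below. Bounding boxes are assumed to be in `(x1, y1, x2, y2)` pixel format, and all names (`ALLOWED_PARENTS`, `is_valid`, the thresholds) are illustrative assumptions, not identifiers from the actual codebase:

```python
# Hypothetical sketch of the planned containment validation.
# A detection is kept only if its box lies (mostly) inside a box of an
# allowed parent class, e.g. a license plate inside a car.

ALLOWED_PARENTS = {
    "license plate": {"car", "bus", "truck"},
    "human face": {"person"},
}

def inside_ratio(inner, outer):
    """Fraction of `inner`'s area that overlaps `outer` (boxes: x1, y1, x2, y2)."""
    ix1, iy1 = max(inner[0], outer[0]), max(inner[1], outer[1])
    ix2, iy2 = min(inner[2], outer[2]), min(inner[3], outer[3])
    overlap = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = (inner[2] - inner[0]) * (inner[3] - inner[1])
    return overlap / area if area > 0 else 0.0

def is_valid(label, box, detections, min_ratio=0.9):
    """Keep `label` only if its box sits inside an allowed parent detection."""
    parents = ALLOWED_PARENTS.get(label)
    if parents is None:          # class has no containment constraint
        return True
    return any(
        other_label in parents and inside_ratio(box, other_box) >= min_ratio
        for other_label, other_box in detections
    )
```

For example, a license plate box fully inside a detected car would pass, while the same box floating in empty road would be rejected.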
I will also add detailed outputs later; for now, you can check this bag file, which was anonymized with our tool:
@StepTurtle could you test https://github.com/knzo25/rosbag2_language_sam with the same data and compare them?
cc. @knzo25
I'm expecting the comparison to be a playback of the anonymized rosbag camera images, shared here as a video.
Here are the results:
In the video:
- The left rqt window shows this tool: https://github.com/leo-drive/rosbag2_anonymizer (anonymizes license plates and human faces)
- The right rqt window shows this tool: https://github.com/knzo25/rosbag2_language_sam (anonymizes license plates and cars)
Additionally, a validation component has been added to https://github.com/leo-drive/rosbag2_anonymizer to verify the object positions. You can view the results here:
Do you have any ideas or suggestions on what we can do in the upcoming stages?
I can read the text, blur is not enough.
There are so many places where the plates are not blurred well enough.
What happens if you look for license plates with low score threshold and if the plate is inside the vehicle for validation?
car
bus
truck
minibus
motorcycle
trailer
utility vehicle
tractor
golf cart
semi-truck
moped
scooter
license plate
person
child
human face
@xmfcx
> I can read the text, blur is not enough.
> There are so many places where the plates are not blurred well enough.
I changed the blur parameters; I think it is okay now.
For this question, the following schema could be helpful.
The first step of validation involves running OpenCLIP, which returns results similar to the following:
- Assuming you have input prompts such as: ["license plate", "car", "face"]
- The output will look like this: [0.95, 0.4, 0.1]
If the score for the corresponding label is greater than 0.9, the detection is accepted as valid.
In the second validation step, we verify whether the label is inside its parent. If it resides within one of the parent categories, it must satisfy one of the following conditions:
- Is the score for the corresponding label the highest among the scores?
- Is the score greater than 0.3?
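The two-step logic above could be sketched roughly as follows. The thresholds (0.9 and 0.3) come from the comment itself; the function and variable names are illustrative assumptions, not the tool's actual code:

```python
# Illustrative sketch of the two-step score validation described above.
# `scores` maps each input prompt to its OpenCLIP similarity score,
# e.g. {"license plate": 0.95, "car": 0.4, "face": 0.1}.

HIGH_CONFIDENCE = 0.9   # step 1: accept immediately above this score
PARENT_MIN_SCORE = 0.3  # step 2: fallback threshold inside a valid parent

def validate(label, scores, inside_valid_parent):
    score = scores[label]
    # Step 1: a sufficiently high OpenCLIP score is accepted directly.
    if score > HIGH_CONFIDENCE:
        return True
    # Step 2: if the detection lies inside an allowed parent object, it
    # passes if it has either the highest score among all labels or a
    # score above the lower threshold.
    if inside_valid_parent:
        return score == max(scores.values()) or score > PARENT_MIN_SCORE
    return False
```

With the example scores above, "license plate" (0.95) passes step 1 directly, while "car" (0.4) would only pass if its box sits inside a valid parent.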
> What happens if you look for license plates with low score threshold and if the plate is inside the vehicle for validation?
For your example, the license plate must either have a score greater than 0.3 or have the highest score among all labels.
photo attribution from unsplash
My problem is with the false negatives, also known as missed detections.
Does your proposal reduce FNs?
When we implemented this proposal, it didn't have a direct impact on FNs, but it allowed us to lower the DINO threshold.
By reducing the DINO threshold, we are able to detect more objects, including some that were previously missed (FNs). However, reducing the DINO threshold also returns a lot of FPs, and we aim to filter out those FPs with this proposal.
@StepTurtle We can put the repository under AWF GitHub organization. Please make sure that you are not violating the license term of all the codes/models that you used.
@mitsudome-r @xmfcx we forked the repository a while ago.
However, I currently don't have write access. Could you give me access to this repository? I can create PRs, but I would prefer to push directly to the main branch, since there might not be anyone to review them for now. If that isn't acceptable, I'll create a PR whenever I need to update the code.
I am sharing videos that show the current results:
After labeling data and training YOLOv8, we combined YOLOv8 and Grounding DINO to find bounding boxes, and the results improved.
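The comment doesn't spell out how the two detectors are combined; one common approach is to take the union of both box sets and suppress duplicates by IoU. A sketch under that assumption (purely illustrative, not the tool's actual merging code) might look like this:

```python
# Hypothetical sketch: merge boxes from two detectors (e.g. YOLOv8 and
# Grounding DINO) by keeping higher-scoring boxes and dropping overlaps.
# Each detection is a tuple: (score, (x1, y1, x2, y2)).

def iou(a, b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def merge_detections(yolo_dets, dino_dets, iou_thresh=0.5):
    """Greedy non-maximum suppression over the union of both detectors."""
    merged = []
    for score, box in sorted(yolo_dets + dino_dets, reverse=True):
        if all(iou(box, kept_box) < iou_thresh for _, kept_box in merged):
            merged.append((score, box))
    return merged
```

This keeps the higher-confidence box whenever both detectors find the same object, while detections unique to either model survive, which is how combining the two can reduce missed detections.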
Hi @xmfcx,
The tool has usage instructions in the project README. Should we also add a user guide for the tool in the Autoware documentation, along with instructions on how to publish a new public dataset with the Autoware community?
@StepTurtle under here: https://autowarefoundation.github.io/autoware-documentation/main/datasets/
it would be nice to have a separate page, dedicated to data anonymization.
@mitsudome-r will find someone to test this tool.
Checklist
Description
The Autoware Foundation seeks to develop a tool that anonymizes camera data within rosbags, specifically targeting the blurring of faces and license plates to maintain privacy. This initiative aims to enable the secure sharing of rosbags containing camera data amongst member companies and the wider community.
Purpose
The primary goal is to ensure the privacy of individuals captured in camera data shared within the Autoware ecosystem. By creating a tool that can anonymize sensitive information in rosbags, we facilitate a safer, privacy-compliant exchange of data that can be used for research, development, and testing of autonomous vehicle technologies.
Possible approaches
Definition of done