nwesem / mtcnn_facenet_cpp_tensorRT

Face Recognition on NVIDIA Jetson (Nano) using TensorRT
GNU General Public License v3.0
203 stars · 72 forks

Deepstream #30

Open hirwa145 opened 3 years ago

hirwa145 commented 3 years ago

Is there a way I can deploy this using NVIDIA DeepStream? Or create a DeepStream app from this?

hirwa145 commented 3 years ago

So after getting the pickle file, do I use test_facenet_trt.py to train the model with the newly created pickle file?

shubham-shahh commented 3 years ago

> So after getting the pickle file, do I use test_facenet_trt.py to train the model with the newly created pickle file?

Creating a pickle file means you have all the embeddings of the face. Now all you need to do is match the embeddings received from the DeepStream app against the ones in the pickle file and find the match.
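A minimal sketch of that matching step. The pickle layout (a dict of `name -> list of embeddings`), the `best_match` helper, and the L2-distance threshold are all assumptions for illustration, not code from this repo:

```python
import pickle

import numpy as np


def best_match(query, db_path="embeddings.pkl", threshold=1.0):
    """Return the name whose stored embedding is closest to `query`,
    or None if nothing is within `threshold` (L2 distance).
    Assumed pickle layout: {name: [embedding, ...]}."""
    with open(db_path, "rb") as f:
        db = pickle.load(f)
    best_name, best_dist = None, float("inf")
    for name, embeddings in db.items():
        for emb in embeddings:
            dist = np.linalg.norm(np.asarray(query) - np.asarray(emb))
            if dist < best_dist:
                best_name, best_dist = name, dist
    return best_name if best_dist < threshold else None
```

You would call this once per face embedding that the DeepStream app produces.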

hirwa145 commented 3 years ago

You haven't implemented it yet?

shubham-shahh commented 3 years ago

> You haven't implemented it yet?

I don't have a Jetson with me right now, so I'm unable to work on it. Sorry.

hirwa145 commented 3 years ago

Understandable. I will try to implement it and will consult you if I have any questions.

shubham-shahh commented 3 years ago

> Understandable. I will try to implement it and will consult you if I have any questions.

Sure, I'll help you with any doubts, and thanks for understanding.

hirwa145 commented 3 years ago

What is the cause of the flickering bbox during detection?

shubham-shahh commented 3 years ago

> What is the cause of the flickering bbox during detection?

I am not aware of any flickering; could you please post a GIF or a short video demonstrating it?

hirwa145 commented 3 years ago

https://user-images.githubusercontent.com/65422159/113586113-b05ee280-965f-11eb-8886-10d8cb24d3b7.mov

hirwa145 commented 3 years ago

Can you see it?

hirwa145 commented 3 years ago

I managed to solve it. The flickering was caused by interval=1 in the DeepStream app, which means the detector was skipping some frames. I set it to interval=0, and there was no flickering anymore.
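For reference, `interval` lives in the nvinfer (detector) config file's `[property]` group. A sketch, with key names following the DeepStream sample configs:

```ini
[property]
# interval=N tells nvinfer to skip N frames between inference calls;
# skipped frames rely on the tracker, which can look like bbox flicker.
# interval=0 runs inference on every frame (higher GPU load).
interval=0
```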

shubham-shahh commented 3 years ago

> Can you see it?

No, but I am glad you solved it.

shubham-shahh commented 3 years ago

> Do I have to use TF version > 2?

For what?

hirwa145 commented 3 years ago

For the last part, the face embeddings, do I have to make a new Python script?

shubham-shahh commented 3 years ago

> For the last part, the face embeddings, do I have to make a new Python script?

It's up to you. I'll give you a stepwise guide.

First, run the app on a video of a person and save all the embeddings it generates in a pickle file.

Please have a look at This tutorial to see how he generates the embeddings and compares them.
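The first step above (saving the generated embeddings to a pickle file) could look roughly like this. `save_embeddings` and the dict layout are illustrative assumptions, not part of the repo:

```python
import pickle


def save_embeddings(embeddings, name, path="embeddings.pkl"):
    """Append one person's embeddings (a list of vectors) to a pickle
    database laid out as {name: [embedding, ...]}."""
    try:
        with open(path, "rb") as f:
            db = pickle.load(f)
    except FileNotFoundError:
        db = {}  # first run: start a fresh database
    db.setdefault(name, []).extend(embeddings)
    with open(path, "wb") as f:
        pickle.dump(db, f)
```

Run the app over a video of each person you want to recognize and call this with that person's name.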

hirwa145 commented 3 years ago

What is the next step after getting the embeddings?

shubham-shahh commented 3 years ago

> What is the next step after getting the embeddings?

Did you create the pickle file with the list of embeddings?

hirwa145 commented 3 years ago

Yes, I finished it.

hirwa145 commented 3 years ago

Do I have to run the test_facenet_trt.py script with the specified location of a pickle file?

shubham-shahh commented 3 years ago

> Yes, I finished it.

Now all you have to do is compare the embeddings.
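A minimal way to compare two embeddings, assuming L2 distance; the 1.1 threshold is an illustrative assumption and should be tuned on your own data:

```python
import numpy as np


def is_same_person(emb_a, emb_b, threshold=1.1):
    """Decide whether two FaceNet embeddings belong to the same person.
    The threshold is a placeholder; tune it on your own data."""
    dist = np.linalg.norm(np.asarray(emb_a) - np.asarray(emb_b))
    return dist < threshold
```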

hirwa145 commented 3 years ago

The problem is: how do I do that?

hirwa145 commented 3 years ago

Do I have to run the test_facenet_trt.py script with the specified location of a pickle file?

Do I use this?

shubham-shahh commented 3 years ago

> Do I have to run the test_facenet_trt.py script with the specified location of a pickle file?
>
> Do I use this?

Not necessary, as it uses MTCNN for the first stage.

shubham-shahh commented 3 years ago

> The problem is: how do I do that?

This tutorial covers that.

hirwa145 commented 3 years ago

Mhm, can it be applied the same way to the DeepStream FaceNet app?

shubham-shahh commented 3 years ago

> Mhm, can it be applied the same way to the DeepStream FaceNet app?

Yes, the embeddings part.

hirwa145 commented 3 years ago

And what about the comparing part?

shubham-shahh commented 3 years ago

> And what about the comparing part?

It briefly explains the comparing part as well.

hirwa145 commented 3 years ago

It works only with the Python implementation. Is there a way to make it work with the C++ implementation?

hirwa145 commented 3 years ago

In the Python implementation, which part of the code outputs those vectors of extracted face features?

shubham-shahh commented 3 years ago

> It works only with the Python implementation. Is there a way to make it work with the C++ implementation?

The logic will remain the same.

shubham-shahh commented 3 years ago

> In the Python implementation, which part of the code outputs those vectors of extracted face features?

this

hirwa145 commented 3 years ago

I know that is the Python code responsible for all the FaceNet actions. I wanted to know which line in that file outputs/produces those vectors (embeddings). That would be very helpful.

shubham-shahh commented 3 years ago

> I know that is the Python code responsible for all the FaceNet actions. I wanted to know which line in that file outputs/produces those vectors (embeddings). That would be very helpful.

Hi, the link mentioned above is the permalink to that line.

hirwa145 commented 3 years ago

How do I compute the avg mean and avg std for the embeddings? For example, I calculated the vector distance between two photos of Obama and got an average of 0.4587.... And when I compare the photo of Obama with photos of Elton John or Ben Affleck, I get an average of 1.562....

How do I calculate the avg mean and avg std from this info?
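One way to read "avg mean and avg std": collect same-person distances and different-person distances into two separate sets, compute the mean and std of each, and pick a decision threshold between the two means. A sketch with made-up illustrative numbers (not measurements from this thread):

```python
import statistics

# Illustrative distance samples; replace with your own measurements.
same_person_dists = [0.41, 0.46, 0.50]   # e.g. Obama vs. Obama pairs
diff_person_dists = [1.48, 1.56, 1.62]   # e.g. Obama vs. other people

same_mean = statistics.mean(same_person_dists)
same_std = statistics.stdev(same_person_dists)
diff_mean = statistics.mean(diff_person_dists)
diff_std = statistics.stdev(diff_person_dists)

# A simple choice of threshold: the midpoint between the two means.
threshold = (same_mean + diff_mean) / 2
```

The stds tell you how much the two distance distributions spread; if `same_mean + a few same_std` stays well below `diff_mean - a few diff_std`, the midpoint threshold will separate them cleanly.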

hirwa145 commented 3 years ago

@shubham-shahh I managed to predict the names of the people in the video correctly. But the names are displayed in the terminal, not around the bbox. How can I achieve that?

shubham-shahh commented 3 years ago

> @shubham-shahh I managed to predict the names of the people in the video correctly. But the names are displayed in the terminal, not around the bbox. How can I achieve that?

Hi, you need to update DeepStream's bbox function.

hirwa145 commented 3 years ago

You mean in nvdsparsebbox_Yolo.cpp?

shubham-shahh commented 3 years ago

> You mean in nvdsparsebbox_Yolo.cpp?

No, if I am not mistaken that is for the bboxes from the pgie, and at the pgie stage we don't yet have the name of the person.

hirwa145 commented 3 years ago

So how do I change the bbox function? I used the Python implementation.

shubham-shahh commented 3 years ago

> So how do I change the bbox function? I used the Python implementation.

One approach I would use is to draw on the stream after the sgie gives you the name; with the help of OpenCV, you can draw the box and the person's name.

hirwa145 commented 3 years ago

Which means I have to write a new code block for this?

shubham-shahh commented 3 years ago

> Which means I have to write a new code block for this?

Depends on the approach.

hirwa145 commented 3 years ago

Okay, now everything is working fine. But one more question: how can I calculate the value of net-scale-factor, please? I want to fine-tune the probability. And the offset value?

shubham-shahh commented 3 years ago

> Okay, now everything is working fine. But one more question: how can I calculate the value of net-scale-factor, please? I want to fine-tune the probability. And the offset value?

I am not sure about that; you can find info on the DeepStream forums.
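For what it's worth, the DeepStream docs describe nvinfer preprocessing as `y = net-scale-factor * (x - offset)`, so both values follow from the input range your network was trained with. A sketch of the arithmetic, assuming the model expects inputs in [-1, 1] (check your own model's expected range):

```python
# nvinfer preprocessing (per the DeepStream docs): y = net_scale_factor * (x - offset)
low, high = 0, 255                   # raw 8-bit pixel range
target_low, target_high = -1.0, 1.0  # assumed network input range

offset = (low + high) / 2                                     # 127.5
net_scale_factor = (target_high - target_low) / (high - low)  # 2/255 ≈ 0.00784

# Sanity check: the extremes of the pixel range map to the target range.
y_min = net_scale_factor * (low - offset)
y_max = net_scale_factor * (high - offset)
```

In the config these become `net-scale-factor=0.00784313725` and `offsets=127.5;127.5;127.5` for this assumed range.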