hirwa145 opened this issue 3 years ago
So after getting the pickle file, will I use test_facenet_trt.py to train the model with the newly created pickle file?
Creating the pickle file means you have all the embeddings of the face. Now all you need to do is match the embedding received from the DeepStream app against the ones in the pickle file and find the closest match.
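A minimal sketch of that matching step might look like this. Everything here is illustrative: the function name `find_match`, the file name `embeddings.pkl`, the assumption that the pickle holds a `{name: [embeddings]}` dict, and the distance threshold are all choices you would adapt to your own setup, not part of the repo's API.

```python
import pickle
import numpy as np

def find_match(query, gallery_path="embeddings.pkl", threshold=1.0):
    """Compare a query embedding against stored embeddings and return
    the best-matching name, or None if nothing is close enough.
    Assumes the pickle holds a dict of {name: list of 1-D embeddings}."""
    with open(gallery_path, "rb") as f:
        gallery = pickle.load(f)
    best_name, best_dist = None, float("inf")
    for name, embeddings in gallery.items():
        for emb in embeddings:
            # Euclidean distance between the live embedding and a stored one
            dist = float(np.linalg.norm(np.asarray(query) - np.asarray(emb)))
            if dist < best_dist:
                best_name, best_dist = name, dist
    return (best_name, best_dist) if best_dist < threshold else (None, best_dist)
```

The threshold separates "same person" from "different person" distances; you would tune it from distances measured on your own data.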
You haven't implemented it yet?
I don't Have a Jetson with me Rn so I'm unable to work on it. sorry.
Understandable, i will try to implement and will consult you if i have any question
Sure, I'll help you with any doubts, and thanks for understanding.
What is the cause of flickering bbox during detection?
I am not aware of any flickering, could you please post a gif or short video demonstrating the same.
Can you see it?
I managed to solve it. The flickering was caused by interval=1 in the DeepStream app, which means the detector was skipping some frames. I set it to interval=0 and there was no more flickering.
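For reference, `interval` is a gst-nvinfer property set in the `[property]` group of the inference config file (the exact file name depends on your app):

```ini
[property]
# interval=1 skips inference on every other frame, so the bbox goes
# stale on skipped frames; interval=0 runs inference on every frame.
interval=0
```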
No, But I am glad you solved it.
Do I have to use TF version > 2?
for what?
For the last part, the face embeddings, do I have to write a new Python script?
It's up to you. I'll give you a stepwise guide.
First, run the app on a video of a person and save all the embeddings it generates in a pickle file.
Please have a look at this tutorial for how he generates the embeddings and compares them.
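The "save all the embeddings in a pickle file" step above could be sketched like this. The function name `save_embeddings`, the default file name, and the `{name: [embeddings]}` layout are all illustrative assumptions, chosen so several people can share one gallery file.

```python
import pickle
import numpy as np

def save_embeddings(name, embeddings, path="embeddings.pkl"):
    """Append a person's embeddings (a list of 1-D vectors, e.g. the
    128-D FaceNet output collected per frame) to a pickle file.
    Loads any existing gallery first so multiple people can share one file."""
    try:
        with open(path, "rb") as f:
            gallery = pickle.load(f)
    except FileNotFoundError:
        gallery = {}
    gallery.setdefault(name, []).extend(np.asarray(e) for e in embeddings)
    with open(path, "wb") as f:
        pickle.dump(gallery, f)
```

You would call this once per person after running the app on a video of them.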
what is the next step after getting embedding
Did you create the pickle file with list of embeddings?
Yes, I finished it.
Do I have to run the test_facenet_trt.py script with the specified location of the pickle file?
now all you have to do is compare the embeddings.
The problem is, how do I do that?
Do I use this?
Not necessary, as it uses MTCNN for the first stage.
This tutorial covers that.
Mhm, it can be applied same way to the Deepstream facenet app?
Yes, the embeddings part.
And what about the comparing part?
It briefly explains the comparing part as well.
It works only with the Python implementation. Is there a way to make it work with the C++ implementation?
In the Python implementation, which part of the code outputs those vectors for the extracted face features?
The logic will remain the same
I know that is the Python code responsible for all the facenet actions. I wanted to know which line in that file outputs/produces those vectors (embeddings). That would be very helpful.
Hi, the link mentioned above is the permalink to that line.
How do I calculate the avg mean and avg std for the embeddings? For example, I calculated the vector distance between 2 photos of Obama and got an average of 0.4587..., and when I compare the photo of Obama with photos of Elton John or Ben Affleck, I get an average of 1.562....
How do I calculate the avg mean and avg std from this info?
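One way to read the question above: collect many pairwise distances, then take their mean and standard deviation with NumPy, and place a threshold between the same-person and different-person averages. The helper name `distance_stats` and the midpoint-threshold rule are illustrative choices, not something from the repo; the 0.4587 and 1.562 figures are the ones quoted above.

```python
import numpy as np

def distance_stats(embeddings_a, embeddings_b):
    """Mean and std of all pairwise Euclidean distances between two
    sets of embeddings (each a list/array of 1-D vectors)."""
    dists = [np.linalg.norm(np.asarray(a) - np.asarray(b))
             for a in embeddings_a for b in embeddings_b]
    return float(np.mean(dists)), float(np.std(dists))

# Illustrative: with same-person distances averaging ~0.46 and
# different-person distances averaging ~1.56, one simple rule is to
# put the decision threshold at the midpoint of the two means:
same_mean, diff_mean = 0.4587, 1.562
threshold = (same_mean + diff_mean) / 2  # ~1.01
```

With more data you would also look at the std of each group to make sure the two distributions do not overlap near the threshold.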
@shubham-shahh I managed to be able to predict names of the people in the video correctly. But names are displayed in the terminal but not around the bbox, how can i achieve that?
Hi, you need to update Deepstream's bbox function.
You mean in nvdsparsebbox_Yolo.cpp?
No, if I am not mistaken that parses the bbox from the pgie, and at the pgie stage we don't have the name of the person yet.
So how do I change the bbox function? I used the Python implementation.
One approach I would use is to draw on the stream after the sgie gives you the name: with the help of OpenCV, you can draw the box and the name of the person.
Which means I have to write a new code block for this?
Depends on the approach
Okay, now everything is working fine. But one more question: how can I calculate the value of net-scale-factor, please? I want to fine-tune the probability. And the offset value.
I am not sure about that, you can find info on deepstream forums.
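For what it's worth, gst-nvinfer documents its per-channel preprocessing as y = net-scale-factor * (x - offset), so the values are derived from how the network was trained rather than tuned for probability. Assuming the model expects inputs scaled to [-1, 1] (common for FaceNet), that gives:

```ini
[property]
# y = net-scale-factor * (x - offset), per channel.
# For inputs normalised to [-1, 1]: offset = 127.5, scale = 1/127.5.
net-scale-factor=0.0078431372
offsets=127.5;127.5;127.5
```

Check how your particular FaceNet checkpoint was preprocessed during training before copying these numbers.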
Is there a way I can deploy this using NVIDIA DeepStream? Or create a DeepStream app from this?