Closed: marzi9696 closed this issue 4 years ago
If you already have a tight face bounding box, you can set it directly on the Tracker object:

```python
tracker.faces = [(x, y, w, h)]
tracker.detected = 1
```
If `max_faces` is set to 1, this should skip the detection model. If the bounding boxes are not tight but a bit looser, it might be good to reduce the factors in the four lines here to zero, as they are intended to leave a bit of space around the face.
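Putting that together, here is a minimal sketch of bypassing the detector by assigning the bounding box directly. The import path, the constructor arguments, and the example box coordinates are assumptions on my part, so check tracker.py for the exact interface:

```python
# Minimal sketch, assuming an OpenSeeFace-style Tracker as described above.
import cv2
from tracker import Tracker  # assumed import path

frame = cv2.imread("face_frame.png")   # hypothetical input frame
height, width = frame.shape[:2]

tracker = Tracker(width, height, max_faces=1)

# Hand the tracker a known face box instead of running the detection model.
x, y, w, h = 100, 80, 220, 220         # hypothetical box in frame coordinates
tracker.faces = [(x, y, w, h)]
tracker.detected = 1

faces = tracker.predict(frame)         # with max_faces=1, detection should be skipped
```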
Looking at your code, you still have those lines, but since you are running the model on just the face, they probably won't do anything, because there is nothing left in the frame to use as margin. Perhaps your face crops are too tight for the model? The model is used to faces with, on average, a 10% margin on all sides around where the landmarks would be. I don't really see anything else that should lower accuracy, though. One more thing: in the line `crop_info.append((crop_x1, crop_y1, scale_x, scale_y, 1))`, the 1 at the end should be 0.1.
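Since the model reportedly expects roughly a 10% margin on all sides, one way to compensate for overly tight crops is to pad the bounding box before handing it to the tracker. This is an illustrative sketch, not code from the repo; the helper name and the 10% default are assumptions:

```python
# Illustrative helper (not from the repo): grow a tight face box by `margin`
# of its width/height on every side, clipped to the frame boundaries.
def pad_face_box(x, y, w, h, frame_w, frame_h, margin=0.1):
    x1 = max(int(x - margin * w), 0)
    y1 = max(int(y - margin * h), 0)
    x2 = min(int(x + w + margin * w), frame_w)
    y2 = min(int(y + h + margin * h), frame_h)
    return x1, y1, x2 - x1, y2 - y1

# Example: a 200x200 box in a 640x480 frame grows to roughly 240x240.
print(pad_face_box(100, 80, 200, 200, 640, 480))  # (80, 60, 240, 240)
```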
Thank you so much. Yeah, I guessed the same thing, that the faces are too tight, because I saved both sets of frames, the ones your model detects and the ones I already had, and mine were tighter than yours. Thank you so much for answering my questions ❤
Hi. Thanks for answering the last issue I posted, and thanks again for this great repo. I have almost figured the code out and tried to customize it for my own use case. The only problem is that the frames I want to use are already frames of faces, and I don't want to use the detection model. I tested this approach and it decreased the model's accuracy. Now my question is: could you please tell me exactly what kind of preprocessing has to be done on the detected face frames before feeding them to the landmark detector? Or what could possibly be the reason for this decrease in accuracy after deleting the detection part?
```python
def predict(self, frame, additional_faces=[]):
    self.frame_count += 1
    start = time.perf_counter()
    im = frame
```
Thank you :)