Hi, I have noticed that in the demo the picture was split into several pieces, and each piece was then fed into the Inference Engine so that all the faces could be recognized. Why is that? Can I just resize the picture to a larger size, feed it to the Inference Engine, and get the same result? Splitting every frame seems impractical when processing video. Or can I change the model so that it accepts larger input dimensions?