BlairLeng closed this issue 4 years ago
+1
Could you plot the results of your trained network?
Actually, they feed the head pose data into the model like this:
self.fc2 = nn.Linear(502, 2)
So when you want to plot the results, you need to feed both the face images and the corresponding head poses into the network.
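To make the 502 concrete: a minimal sketch of this kind of model, assuming a LeNet-style backbone on 36x60 eye patches (as in the MPIIGaze paper) whose 500-dim feature vector is concatenated with the 2-dim head pose (yaw, pitch) right before the final layer, so `fc2` takes 500 + 2 = 502 inputs. The layer sizes here are illustrative, not copied from this repo:

```python
import torch
import torch.nn as nn

class GazeNet(nn.Module):
    """Hypothetical sketch: conv features + head pose -> 2D gaze angles."""

    def __init__(self):
        super().__init__()
        # LeNet-style conv stack; for a 1x36x60 input this yields 50x6x12 maps
        self.conv = nn.Sequential(
            nn.Conv2d(1, 20, kernel_size=5), nn.MaxPool2d(2),
            nn.Conv2d(20, 50, kernel_size=5), nn.MaxPool2d(2),
        )
        self.fc1 = nn.Linear(50 * 6 * 12, 500)  # image features
        self.fc2 = nn.Linear(502, 2)            # 500 features + 2 pose angles

    def forward(self, image, head_pose):
        x = self.conv(image)
        x = x.flatten(1)
        x = torch.relu(self.fc1(x))
        # this is the key step: append the head pose to the feature vector
        x = torch.cat([x, head_pose], dim=1)
        return self.fc2(x)  # predicted gaze (pitch, yaw)

model = GazeNet()
gaze = model(torch.zeros(1, 1, 36, 60), torch.zeros(1, 2))
```

So at inference time you cannot pass the eye image alone; the forward pass needs the head pose tensor as a second input.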
Sorry for the late reply. Now that I've added the demo program, you can use it to visualize the gaze estimation results. See https://github.com/hysts/pytorch_mpiigaze#demo
This is also a stand-alone demo program. https://github.com/hysts/pytorch_mpiigaze_demo
After I've trained the model, how can I use it? Do I just capture video and feed it to the model? I read your paper, which says that both the image and the head pose angles are required. Could you please tell me more about that? What should I do once the model is trained?
Thank you very much!