fengju514 / Face-Pose-Net

Estimate 3D face pose (6DoF) or 11 parameters of 3x4 projection matrix by a Convolutional Neural Network
501 stars · 109 forks

I have some questions about Pose Estimation #11

Closed Ostnie closed 6 years ago

Ostnie commented 6 years ago

Hello! Thanks very much for your paper. I can now solve most of the problems in my database, but a few remain. I think pose estimation will be very useful because I have both 3D point clouds and 2D images. It would be great if I could estimate the angle of the face deflection.

Now I have two questions about your code as below:

  1. I see you have said this code can estimate 6 degrees. Does this mean it can only calculate specific degrees such as 0, 22, 40, 55, and 75?
  2. Where is the outcome of the pose estimation? I guess it should be saved as output_pose.lmdb, but I didn't find it in your project. Please forgive me that I have not run your code yet: the model file is too large and my download speed is not good, so I hoped to find out whether it can solve my problem before running it successfully. Thanks for your great work!
fengju514 commented 6 years ago
  1. The 6 degrees of freedom (DoF) are the rotations (pitch, yaw, roll) and the translations along the x-, y-, and z-axes. The 0, 22, 40, 55, 75 degrees are the yaw views that we can render the input image to, after getting the 6 DoF 3D head pose and running our face renderer.

  2. Yes, it is saved in ./output_pose.lmdb, as shown in line 25 of main_fpn.py, and it is fed into render_fpn.py (line 50 of main_fpn.py) for the rendering.

Hope it answers your questions.

Ostnie commented 6 years ago

@fengju514 Hi, I met some problems running your code. When I run your example `python main_fpn.py input.csv`, there seems to be a problem with the image size.

Yaw value mean: 6.99334032717

Looking at file: ./tmp/subject10_a.jpg with model3Daug-00_00_10.mat
Using pose model in model3Daug-00_00_10.mat
Query image shape: (227, 227, 3)
OpenCV Error: Assertion failed (dst.cols < 32767 && dst.rows < 32767 && src.cols < 32767 && src.rows < 32767) in remap, file /io/opencv/modules/imgproc/src/imgwarp.cpp, line 1749
Traceback (most recent call last):
  File "main_fpn.py", line 50, in <module>
    renderer_fpn.render_fpn(outpu_proc, output_pose_db, output_render)
  File "/dfsdata/niejb1_data/face_pose_net/Face-Pose-Net/renderer_fpn.py", line 86, in render_fpn
    model3D.ref_U, eyemask, model3D.facemask, opts)
  File "face_renderer/renderer.py", line 109, in render
    frontal_raw = warpImg(img, ref_U.shape[0], ref_U.shape[1], prj_jnt, ind_jnt)
  File "face_renderer/renderer.py", line 25, in warpImg
    np.squeeze( np.asarray( prj[1,:] ) ).astype('float32'), cv2.INTER_CUBIC)
cv2.error: /io/opencv/modules/imgproc/src/imgwarp.cpp:1749: error: (-215) dst.cols < 32767 && dst.rows < 32767 && src.cols < 32767 && src.rows < 32767 in function remap

I tried to solve this problem but failed; I am not familiar with OpenCV. Do you know how to solve it? Besides this, I also have another question about the pose estimation. My program has created output_pose.lmdb, and I find there are two files inside (data.mdb and lock.mdb), but my office software can't open them; it tells me the .mdb file is an unrecognized database format. I want to read the pose estimation. What should I do?
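For what it's worth, the assertion in the traceback fires because OpenCV's `remap` hard-limits both source and destination to fewer than 32767 rows/columns. A hypothetical pre-flight check before the warp could catch this earlier; the helper name and arguments below are illustrative only, not from the repo:

```python
import numpy as np

# Guard against OpenCV's remap size limit (32767, per imgwarp.cpp).
# Hypothetical helper; renderer.py does not actually contain it.
def check_remap_sizes(src, dst_rows, dst_cols, limit=32767):
    if src.shape[0] >= limit or src.shape[1] >= limit:
        raise ValueError("source image too large for cv2.remap: %s" % (src.shape,))
    if dst_rows >= limit or dst_cols >= limit:
        raise ValueError("destination too large for cv2.remap: %dx%d"
                         % (dst_rows, dst_cols))

img = np.zeros((227, 227, 3), dtype=np.uint8)  # the query image shape above
check_remap_sizes(img, 320, 320)  # passes for ordinary sizes
```

If the destination size computed from the projection is enormous, the real cause is usually a bad pose or projection matrix rather than OpenCV itself.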

fengju514 commented 6 years ago

Hi, which version of OpenCV did you use? It looks like an OpenCV version problem.

Regarding reading output_pose.lmdb, you can refer to render_fpn.py, where we use the lmdb library to read the 6 DoF head pose stored in it. Specifically:

- line 2: `import lmdb`
- line 35: `pose_env = lmdb.open( output_pose_db, readonly=True )`
- line 36: `pose_cnn_lmdb = pose_env.begin()`
- line 49: `pose_Rt_raw = pose_cnn_lmdb.get( image_key )`

Ostnie commented 6 years ago

My OpenCV version is 3.4; I don't know whether that is the right one, so I will try another version. I have just added a print in getRts.py, line 106, to print the 6 DoF, so I guess I can also get the pose estimation that way. The result is as follows:

Predicted pose for: subject10_axxxx[0.0828 0.1953 1.0369 -19.8086 8.3903 3269.8223]
Predicted pose for: subject10_a_flipxxxx[0.1372 -0.1481 -1.0675 25.0121 4.2864 3291.8276]
Predicted pose for: subject3_axxxx[0.1396 0.4069 -0.2202 -8.1578 12.1156 3403.6431]
Predicted pose for: subject3_a_flipxxxx[0.1313 -0.3446 0.2605 11.6528 12.8602 3375.0522]
Predicted pose for: subject6_axxxx[0.1235 0.4499 0.0177 -13.9022 5.8946 2848.3384]
Predicted pose for: subject6_a_flipxxxx[0.1251 -0.4836 -0.0162 18.4500 5.3253 3027.8032]
Predicted pose for: subject8_axxxx[-0.0572 -0.5159 -0.3805 16.9527 14.2949 2846.6997]
Predicted pose for: subject8_a_flipxxxx[-0.0367 0.6107 0.4362 -10.1132 17.7939 2783.0354]
Predicted pose for: subject4_axxxx[-0.1550 -0.5044 -1.2095 20.0160 -1.1745 3321.9946]
Predicted pose for: subject4_a_flipxxxx[-0.1614 0.4022 1.0657 -15.9736 2.6589 3432.5879]
Predicted pose for: subject7_axxxx[0.1772 -0.0863 1.3731 -18.3121 -2.7105 3707.6379]
Predicted pose for: subject7_a_flipxxxx[0.2081 0.1493 -1.3105 22.2377 2.3526 3658.6067]
Predicted pose for: subject9_axxxx[0.1689 -0.4222 0.4526 8.4203 5.1751 2918.7305]
Predicted pose for: subject9_a_flipxxxx[0.2188 0.5375 -0.4377 -3.8997 8.4659 2856.4653]
Predicted pose for: subject5_axxxx[0.2679 0.5410 -0.2762 -10.4780 7.7077 2848.7217]
Predicted pose for: subject5_a_flipxxxx[0.2686 -0.5379 0.2980 12.6942 8.9830 2642.2935]
Predicted pose for: subject2_axxxx[0.3595 0.4574 -0.7810 2.5750 5.7493 2570.9556]
Predicted pose for: subject2_a_flipxxxx[0.3201 -0.4537 0.7121 4.3236 6.0599 3035.1350]

I have read your paper; it's a great job. Now I have a small question about the 6 DoF, because it seems different from what I understand. From your paper I think the sequence is pitch, yaw, roll, horizontal, vertical, scaling, but the values I get suggest I am wrong, and I'm not sure about the units; I guess the first three are in radians. Many thanks!

Ostnie commented 6 years ago

@fengju514 Hi, I have some questions about the bounding box. I looked at the original image and the image saved in the tmp folder, and the latter does not seem to be directly cropped with the input.csv data. I checked the code, and the input data seems to go through some complicated calculations. If I need to prepare input data, how should I prepare it? Is a normal face detection box enough?

fengju514 commented 6 years ago

The code was tested with OpenCV 2.x, so I would suggest you use it. The first three of the 6 DoF are rotations in radians, and the last three are translations in mm.
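Given that answer (rotations in radians, translations in mm), one of the pose vectors printed earlier can be converted to degrees for easier visual comparison. A small sketch, using the subject5_a values from the thread:

```python
import numpy as np

# First three entries are pitch, yaw, roll in radians; convert to degrees.
pose = np.array([0.2679, 0.5410, -0.2762, -10.4780, 7.7077, 2848.7217])
pitch_deg, yaw_deg, roll_deg = np.degrees(pose[:3])

# 0.5410 rad of yaw is about 31 degrees.
print("pitch %.1f, yaw %.1f, roll %.1f degrees" % (pitch_deg, yaw_deg, roll_deg))
```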

I'm not sure I understand your question, "which does not seem to be directly cut with input.csv data". Could you clarify it?

Yes, you need to prepare the input images and face box coordinates, obtained from face detection or manual annotation.
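A hypothetical sketch of preparing such an input CSV: one row per image with an ID, the file path, and a face box (x, y, width, height) from a detector or manual annotation. The exact column order expected by main_fpn.py is an assumption here; compare with the sample input.csv shipped in the repo before using it:

```python
import csv

# One row per query image; the box values here are made-up placeholders.
rows = [("subject10_a", "images/subject10_a.jpg", 120, 80, 150, 150)]

with open("input_demo.csv", "w", newline="") as f:
    writer = csv.writer(f)
    # Assumed header; verify against the repo's sample input.csv.
    writer.writerow(["ID", "FILE", "FACE_X", "FACE_Y", "FACE_W", "FACE_H"])
    writer.writerows(rows)
```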

Ostnie commented 6 years ago

@fengju514 Em... I see. If the images I use have already been cropped to the bounding box, how should I prepare the input data? By "which does not seem to be directly cut with input.csv data" I mean: I tried to crop the image directly with your data in input.csv, but the result is different from the images in the tmp folder. The input data is calculated in a complicated way in your code and I don't know why; maybe the reason is not very important for me now (when I prepare my paper I should study it carefully). I just want to know whether this means a bounding box found by my own method would not be appropriate. Many thanks!

Ostnie commented 6 years ago

@fengju514 Hi, I compared several different works on pose estimation. To be honest, I hoped your library would be the best, because I really agree with the opinions in your paper, but in fact your program always seems to produce a smaller angle value than the true one. I remember you said this library can't be accurate when the angle is relatively large, but in my experiment the range of angles is 0-45 degrees and the result is still not good. Although your performance is much better than the others in the presence of image rotation, it would be wonderful if it also performed better on normal pose estimation. For now I don't know how to solve the problem of the underestimated angles.

Ostnie commented 6 years ago

I plan to show some examples. For photo 3 I get the result:

Predicted pose for: 3xxxx[0.2263 -0.2561 0.1469 0.1876 6.7562 3237.9316]
Predicted pose for: 3_flipxxxx[0.2249 0.2906 -0.1260 3.6674 6.5916 3371.6890]

When the angle is small the estimate seems very good, doesn't it?

I don't know why I can't show two pictures in one comment; the other example follows.

Ostnie commented 6 years ago

(photo 7)

Predicted pose for: 7xxxx[0.0650 -0.3533 0.1215 7.8251 1.9137 3312.7502]
Predicted pose for: 7_flipxxxx[0.1051 0.3844 -0.0813 -3.2848 1.2154 3349.7358]

Here the difference between the measured value and the estimated value is relatively large.

Ostnie commented 6 years ago

(photo input5)

Even for your own sample picture the estimation does not seem very good:

Predicted pose for: subject5_axxxx[0.2679 0.5410 -0.2762 -10.4780 7.7077 2848.7217]
Predicted pose for: subject5_a_flipxxxx[0.2686 -0.5379 0.2980 12.6942 8.9830 2642.2935]

I guess the true value of yaw is about 0.8, but the result is 0.5410.

fengju514 commented 6 years ago

@Ostnie

The predicted yaw values depend on the yaw range of the training data, so the predicted values and the visually estimated ones may differ somewhat.

Ostnie commented 6 years ago

@fengju514 Thank you for your answer. You said that the predicted yaw values depend on the yaw range of the training data. So how can I train a model with my own data?

fengju514 commented 6 years ago

@Ostnie You can use my trained weights as initialization and fine-tune the network on your own data (assuming you use the same model structure as mine).