czarrar opened this issue 7 years ago
Sure, though you would need to dig into the code a bit. It shouldn't be hard: instead of projecting the original texture, you can simply project the LM points (say, the blue points marking the detected LM positions in the input image) to the output.
@TalHassner
Can you elaborate in more detail?
See our FAME journal paper or our earlier ExpNet paper and the code associated with those projects. We project landmarks there using the same data structures and the same technique as in the frontalization paper.
@TalHassner
As suggested in https://github.com/dougsouza/face-frontalization/issues/16, I tried to project the 3D landmarks using the same projection matrix:
```python
# Arrange the 3D landmarks as a 3 x N matrix, append a homogeneous row,
# apply the projection matrix, then divide by the third row to get 2D points.
threedee2 = np.reshape(lmk, (-1, 3), order='F').transpose()
temp_proj2 = proj_matrix * np.vstack((threedee2, np.ones((1, threedee2.shape[1]))))
project2 = np.divide(temp_proj2[0:2, :], np.tile(temp_proj2[2, :], (2, 1)))
```
where `lmk` is `model_TD` (https://github.com/iacopomasi/face_specific_augm/blob/master/ThreeD_Model.py#L21).
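For reference, here is a self-contained version of that projection step (a sketch only, assuming `model_TD` is the flattened 3D landmark array from `ThreeD_Model.py` and `proj_matrix` is the 3x4 camera matrix estimated by the frontalization code; `np.dot` is used so it also works when `proj_matrix` is a plain `ndarray` rather than an `np.matrix`):

```python
import numpy as np

def project_landmarks(proj_matrix, model_TD):
    """Project the model's 3D landmarks into the image with a 3x4 camera matrix.

    proj_matrix: 3x4 projection matrix (as estimated by the frontalization code).
    model_TD:    3D landmark coordinates, reshaped here exactly as in the
                 snippet above (column-major), giving a 3 x N matrix.
    Returns a 2 x N array of image coordinates.
    """
    threedee = np.reshape(model_TD, (-1, 3), order='F').T             # 3 x N
    homog = np.vstack((threedee, np.ones((1, threedee.shape[1]))))    # 4 x N homogeneous points
    proj = np.dot(proj_matrix, homog)                                 # 3 x N
    return proj[0:2, :] / np.tile(proj[2, :], (2, 1))                 # perspective divide -> 2 x N
```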
But this does not seem right. One of the output images shows the landmarks projected back onto the input image rather than onto the frontalized output.
I think we need the landmark positions in `ref_U` if we want to map the landmarks to the output image. Any advice on how to correct this?
@twmht It seems from your last image that the landmarks are falling in the correct place. The difference between the two is a crop delta (dx, dy) which was not added to the 2D landmarks (after projection) in the bigger image. You get this 2D delta from the top-left corner of the crop window.
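A minimal sketch of that adjustment (assuming `lmk_2d` is the 2 x N array of projected landmarks in crop coordinates and `(crop_x0, crop_y0)` is the top-left corner of the crop window; both names are hypothetical):

```python
import numpy as np

def add_crop_offset(lmk_2d, crop_x0, crop_y0):
    """Shift projected 2D landmarks from crop coordinates to full-image
    coordinates by adding the crop window's top-left corner (dx, dy)."""
    shifted = np.asarray(lmk_2d, dtype=float).copy()
    shifted[0, :] += crop_x0   # dx: left offset of the crop window
    shifted[1, :] += crop_y0   # dy: top offset of the crop window
    return shifted
```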
Importantly, without fitting a 3D face shape and expression, the 3D landmarks are always those of a fixed 3D face shape and so will not accurately match the features in the image. I again refer you to our FAME paper for more details.
@TalHassner
Yup, I read the paper last week, but I still can't figure out how to do the mapping without the landmark indices in `ref_U`.
By the way, I found a possible way in the code (https://github.com/iacopomasi/face_specific_augm/blob/master/ThreeD_Model.py#L35), which computes, for each 3D landmark, the index of the closest point in the point cloud.
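Written out as a self-contained sketch (the column-major flattening is an assumption, chosen to match how the rest of this code indexes `ref_U`):

```python
import numpy as np

def nearest_ref_U_indices(ref_U, landmarks_3d):
    """For each 3D landmark, return the flat index of the closest point in the
    ref_U point cloud (H x W x 3), using column-major (order='F') flattening
    so the indices are compatible with facemask-style indexing."""
    cloud = np.reshape(ref_U, (-1, 3), order='F')         # (H*W) x 3 vertices
    indices = np.empty(len(landmarks_3d), dtype=np.int64)
    for i, point in enumerate(np.asarray(landmarks_3d)):  # landmarks_3d: N x 3
        d2 = np.sum((cloud - point) ** 2, axis=1)         # squared distance to every vertex
        indices[i] = np.argmin(d2)                        # index of the nearest vertex
    return indices
```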
So I modified the code:
```python
# Background mask: all ref_U pixel indices that are not part of the face
bg_mask = np.setdiff1d(np.arange(0, ref_U.shape[0] * ref_U.shape[1]), facemask[:, 0])
# Projected positions of the face pixels and of the landmark pixels
face_proj = project[:, facemask[:, 0]]
lmk_proj = project[:, lmk]
```
where `lmk` holds the indices of the landmarks in `ref_U`.
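If the goal is to place the landmarks on the frontalized output itself, one option (a sketch, assuming the output image shares `ref_U`'s H x W grid and that `lmk` holds column-major flat indices, consistent with the `bg_mask` computation above):

```python
import numpy as np

def ref_U_indices_to_output_xy(lmk, ref_U_shape):
    """Convert flat ref_U indices (column-major, as assumed for facemask/bg_mask)
    into (x, y) pixel coordinates on the frontalized output grid."""
    rows, cols = np.unravel_index(np.asarray(lmk), ref_U_shape[:2], order='F')
    return np.vstack((cols, rows))   # 2 x N: x = column, y = row
```

These (x, y) points can then be drawn on the output image, e.g. with `cv2.circle`.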
Here is the expected output:
@twmht do you have a fork with the above code?
Is it possible to project the input landmark points to the frontalized face? Thanks!