patrikhuber / eos

A lightweight 3D Morphable Face Model library in modern C++
Apache License 2.0

3D model picks up some background image? #100

Closed manish-grafty closed 7 years ago

manish-grafty commented 7 years ago

Hi Patrik, in the attached example I don't understand why a portion of the green background is overlaid on the model. I am attaching the dlib output, the input image, and the output image. Any ideas?

[attached images: i2-face-fit, side-face]

patrikhuber commented 7 years ago

You can display the fitted mesh with the draw_wireframe function, this will give you an idea of the final fitting and where the texture will be extracted from. There's only very little background extracted in your example, but in general this can happen because of imprecise landmarks (for example your landmarks are not very precise around the chin). We're doing a least-squares fit, fitting to all landmarks, so on some landmarks it'll usually be a couple of pixels off. The wireframe will show you.
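To illustrate what a wireframe overlay does, here is a minimal plain-Python sketch (an editor's illustration, not eos's actual draw_wireframe): each mesh vertex is projected into image coordinates, and each triangle edge becomes a 2D line segment you could draw on top of the input image. A simple pinhole camera model is assumed here.

```python
def project(vertex, focal, cx, cy):
    """Project a 3D vertex to 2D image coordinates with a simple
    pinhole camera (focal length `focal`, principal point (cx, cy))."""
    x, y, z = vertex
    return (cx + focal * x / z, cy + focal * y / z)

# Toy triangle in front of the camera (z > 0):
vertices = [(0.0, 0.0, 2.0), (1.0, 0.0, 2.0), (0.0, 1.0, 4.0)]
edges = [(0, 1), (1, 2), (2, 0)]

points_2d = [project(v, focal=500.0, cx=320.0, cy=240.0) for v in vertices]
for a, b in edges:
    # In a real overlay you would draw an image line between these two points;
    # inspecting them shows exactly where texture would be sampled from.
    print(points_2d[a], "->", points_2d[b])
```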

You can also avoid this somewhat by rejecting regions of the texture that are close to being occluded (i.e. facing away from the camera by more than 60° or so). extract_texture has a parameter compute_view_angle for that.
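The rejection criterion itself is simple; here is a plain-Python sketch of the idea (an editor's illustration, not the eos implementation): compute the angle between a vertex normal and the direction towards the camera, and discard the region when that angle exceeds the threshold (60° in the suggestion above).

```python
import math

def facing_angle_deg(normal, view_dir=(0.0, 0.0, 1.0)):
    """Angle in degrees between a vertex normal and the direction
    towards the camera (assumed here to be the +z axis)."""
    dot = sum(n * v for n, v in zip(normal, view_dir))
    norm = math.sqrt(sum(n * n for n in normal))
    # Clamp to guard against floating-point drift outside [-1, 1].
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def keep_vertex(normal, max_angle_deg=60.0):
    """Reject texture regions facing away from the camera by more
    than `max_angle_deg`."""
    return facing_angle_deg(normal) <= max_angle_deg

print(keep_vertex((0.0, 0.0, 1.0)))  # frontal-facing vertex
print(keep_vertex((1.0, 0.0, 0.1)))  # nearly side-on vertex
```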

patrikhuber commented 7 years ago

Can this be closed? Did you make any progress? As mentioned above, it's not any software bug or issue.

manish-grafty commented 7 years ago

Hi Patrick, Yes - this can be closed. I managed to reduce it, but could not eliminate it as a generic solution.

Is there a way to mirror one side of the face onto the other, in case one side has fewer holes than the other?

thanks,


patrikhuber commented 7 years ago

Ok, cool! I'll close this then.

Yes, of course you can mirror, for example you can mirror the texture map. You probably want to do some blending afterwards, as usually the two sides are lit quite differently and this creates visible artifacts.
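As a concrete illustration of "mirror then blend" (an editor's sketch on a single 1-D texture row, not eos code): copy the mirror of the good half onto the other half, then linearly cross-fade over a few pixels around the seam so the lighting difference between the two sides does not produce a hard edge.

```python
def mirror_and_blend(row, blend_width=2):
    """Fill the right half of a 1-D texture row with the mirror of the
    left half, linearly cross-fading over `blend_width` pixels around
    the seam to hide lighting differences between the two sides."""
    n = len(row)
    # Left half kept as-is, right half replaced by the left half reversed.
    mirrored = row[: n // 2] + row[: (n + 1) // 2][::-1]
    out = []
    for i in range(n):
        # Weight ramps from 1 (keep original) down to 0 (use mirror)
        # across the blend zone centred on the seam.
        if i < n // 2 - blend_width:
            w = 1.0
        elif i >= n // 2 + blend_width:
            w = 0.0
        else:
            w = (n // 2 + blend_width - i) / (2.0 * blend_width)
        out.append(w * row[i] + (1.0 - w) * mirrored[i])
    return out
```

With blend_width=0 this is a hard mirror copy; widening the blend zone trades sharpness for a smoother transition, which is the usual remedy for the visible artifacts mentioned above.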

jjjjohnson commented 6 years ago

Hi @manish-grafty, I am curious what software/library you used to visualize the 3D model you posted above? Thanks, Junjie

patrikhuber commented 6 years ago

@jjjjohnson You can use MeshLab or Blender - I'd recommend MeshLab for a beginner. (It looks like in the screenshot above, something else was used though).

jjjjohnson commented 6 years ago

Thanks a lot, @patrikhuber! I am wondering how to save the isomap and mesh into a file so that MeshLab can open it.

(mesh, pose, shape_coeffs, blendshape_coeffs) = eos.fitting.fit_shape_and_pose(
    model, blendshapes, landmarks, landmark_ids, landmark_mapper,
    w, h, edge_topology, contour_landmarks, model_contour)

isomap = eos.render.extract_texture(mesh, pose, img)
patrikhuber commented 6 years ago

eos::core::write_textured_obj(...) is your friend. I think there's an example of it in fit-model.cpp.
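For readers unfamiliar with the file format being written here: a textured OBJ is just a text file of vertex positions (`v`), texture coordinates (`vt`), and faces (`f`) that index both, plus a material file pointing at the texture image. Below is a minimal editor's sketch of such a writer in plain Python (illustrative only; the material and texture file names are hypothetical, and in practice you would simply call eos's writer as suggested above).

```python
def make_obj(vertices, texcoords, faces):
    """Build a minimal Wavefront OBJ string with texture coordinates.
    `faces` are triangles of 0-based indices; OBJ indices are 1-based,
    hence the `+ 1`. A companion .mtl file (not generated here) would
    point the material at the isomap image."""
    lines = ["mtllib model.mtl", "usemtl textured"]  # hypothetical names
    lines += ["v {} {} {}".format(*v) for v in vertices]
    lines += ["vt {} {}".format(*t) for t in texcoords]
    # "f 1/1 2/2 3/3" means: vertex index / texture-coordinate index.
    lines += ["f {0}/{0} {1}/{1} {2}/{2}".format(*(i + 1 for i in f))
              for f in faces]
    return "\n".join(lines) + "\n"

obj = make_obj(
    vertices=[(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)],
    texcoords=[(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)],
    faces=[(0, 1, 2)])
print(obj)
```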

jjjjohnson commented 6 years ago

Hi @patrikhuber, thanks for the prompt reply. However, I think the mesh only represents the 3D shape (3448 vertices) and does not contain the color information shown in the picture above. So I assume I also need an isomap file to reconstruct the 3D model.

patrikhuber commented 6 years ago

@jjjjohnson Just look at the example... it stores the texture as well.

jjjjohnson commented 6 years ago

Hi @patrikhuber, I managed to use MeshLab to visualize the 3D face with the merged_isomap. Thanks a lot!!! However, I am currently using the mesh from only one picture together with the merged_isomap. Is it possible to merge meshes in Python? I have reviewed the C++ code but could not figure out what to do in Python:

const Eigen::VectorXf merged_shape =
    morphable_model.get_shape_model().draw_sample(shape_coefficients) +
    to_matrix(blendshapes) *
    Eigen::Map<const Eigen::VectorXf>(blendshape_coefficients.data(),
                                      blendshape_coefficients.size());

const core::Mesh merged_mesh = morphablemodel::sample_to_mesh(
    merged_shape,
    morphable_model.get_color_model().get_mean(),
    morphable_model.get_shape_model().get_triangle_list(),
    morphable_model.get_color_model().get_triangle_list(),
    morphable_model.get_texture_coordinates());
patrikhuber commented 6 years ago

@jjjjohnson Sure - you should be able to do that with the Python bindings out of the box. How you merge the shapes is up to you. But the line you highlighted just "assembles" the shape instance (also called sample sometimes). It's just the standard $ S = \mu + ShapeBasis \times \alpha + Blendshapes \times BlendshapeCoefficients $ formula you can find and read more about in 3DMM papers (for example our VISAPP paper).
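To make that formula concrete, here is a minimal plain-Python sketch of the assembly step (an editor's illustration with toy data, not eos code; a real implementation would use NumPy or the eos bindings): the shape instance is the model mean plus a linear combination of shape-basis columns weighted by alpha, plus the blendshapes weighted by their coefficients.

```python
def assemble_shape(mean, shape_basis, alpha, blendshapes, beta):
    """S = mean + ShapeBasis @ alpha + Blendshapes @ beta, written out
    with plain lists. `shape_basis` and `blendshapes` are lists of
    column vectors, one column per coefficient."""
    n = len(mean)
    shape = list(mean)
    for a, column in zip(alpha, shape_basis):
        for i in range(n):
            shape[i] += a * column[i]
    for b, column in zip(beta, blendshapes):
        for i in range(n):
            shape[i] += b * column[i]
    return shape

# Toy 2-D example: mean [1, 1], one shape-basis column, one blendshape.
merged = assemble_shape(
    mean=[1.0, 1.0],
    shape_basis=[[1.0, 0.0]], alpha=[2.0],
    blendshapes=[[0.0, 1.0]], beta=[3.0])
print(merged)
```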

If you are interested in that then we also offer consulting work for eos on an hourly or project basis - feel free to drop me an email if that is of interest to you.

jjjjohnson commented 6 years ago

Thanks @patrikhuber, I can calculate merged_shape by just porting that snippet to Python. But it looks like there is no morphablemodel::sample_to_mesh in the Python bindings, i.e. no eos.morphablemodel.sample_to_mesh. So I have to define it myself?

patrikhuber commented 6 years ago

@jjjjohnson There's a draw_sample overload in the Python bindings that you can use to get a Mesh from coefficients.

jjjjohnson commented 6 years ago

Hi @patrikhuber, it looks like model.get_shape_model().draw_sample(merged_shape_coefficients) returns a NumPy array rather than a Mesh object. I cannot save it with eos.core.write_textured_obj(combined_mesh, 'mesh.obj'). Could you help me with that?

Thanks a lot! Junjie

patrikhuber commented 6 years ago

@jjjjohnson Please keep this issue on topic. And the GitHub issues are for library issues - please read the code and documentation. MorphableModel::draw_sample(...) returns a Mesh.