soubhiksanyal / now_evaluation

This is the official repository for evaluation on the NoW Benchmark Dataset. The goal of the NoW benchmark is to introduce a standard evaluation metric to measure the accuracy and robustness of 3D face reconstruction methods from a single image under variations in viewing angle, lighting, and common occlusions.
https://now.is.tue.mpg.de/

The evaluation speed is slow #2

Closed ChunLLee closed 3 years ago

ChunLLee commented 3 years ago

Dear authors, thank you for providing the code. My issue is that running `compute_error.py` seems quite time-consuming: it took me about 1 to 2 hours to process one image. Is that normal?

Thank you in advance for your help.

TimoBolkart commented 3 years ago

No, that is certainly not normal; it should take less than a minute to compute the error for one image. There might either be a problem reading your mesh input, or a problem computing the scan-to-mesh error (which can be caused by degenerate triangles, etc.).

As a simple test, please try to load the problematic mesh file in an IPython session:

```python
from psbody.mesh import Mesh

mesh = Mesh(filename=mesh_filename)
print(mesh.v)  # vertex coordinates
print(mesh.f)  # face indices
```

Does that print the vertices and faces of the mesh?

ChunLLee commented 3 years ago

Thank you for your prompt reply. I have tried the provided code; it operates normally and finishes within a second. I think it is the error computation that is time-consuming. Could it be caused by the different triangles we use?

TimoBolkart commented 3 years ago

The scan-to-mesh distance we use is differentiable because we use it for a rigid alignment step (to mitigate effects of differences in the landmark embedding). We have seen that this distance has difficulties with degenerate triangles (i.e., two vertices of one triangle sharing an identical location). If you provide me with the mesh that you have trouble with, I can look into the problem.
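A quick way to screen a mesh for the degenerate triangles mentioned above is to look for faces with (near) zero area. This is a hypothetical helper, not part of the now_evaluation code; it only assumes a NumPy-style vertex array and face-index array:

```python
import numpy as np

def find_degenerate_faces(vertices, faces, tol=1e-12):
    """Return indices of faces with (near) zero area, e.g. because two
    of a triangle's vertices share an identical location."""
    v = np.asarray(vertices, dtype=float)
    f = np.asarray(faces, dtype=int)
    e1 = v[f[:, 1]] - v[f[:, 0]]  # first edge of each triangle
    e2 = v[f[:, 2]] - v[f[:, 0]]  # second edge of each triangle
    areas = 0.5 * np.linalg.norm(np.cross(e1, e2), axis=1)
    return np.where(areas < tol)[0]
```

Running this on `mesh.v` and `mesh.f` before the evaluation should flag any triangles the distance computation may struggle with.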

ChunLLee commented 3 years ago

Sure, thank you.

ChunLLee commented 3 years ago

Solved by @TimoBolkart: one should not add color to the .obj file. Thank you very much!
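For reference, a minimal sketch of writing an .obj without any per-vertex color (only `v x y z` and 1-based `f` lines). `write_plain_obj` is a hypothetical helper, not part of the repo:

```python
def write_plain_obj(path, vertices, faces):
    """Write an .obj containing only vertex positions and 1-based face
    indices -- no color values appended to the 'v' lines."""
    with open(path, "w") as fh:
        for x, y, z in vertices:
            fh.write("v %.6f %.6f %.6f\n" % (x, y, z))
        for a, b, c in faces:
            # .obj face indices are 1-based
            fh.write("f %d %d %d\n" % (a + 1, b + 1, c + 1))
```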

AyushP123 commented 2 years ago

Hi @TimoBolkart, I am also facing this issue; it takes 2 minutes and 4 seconds per image for me. I am trying to establish a baseline for https://github.com/sicxu/Deep3DFaceRecon_pytorch/issues, have removed the color, and am saving the mesh as follows:

```python
mesh = trimesh.Trimesh(vertices=recon_shape, faces=tri)
```

Do let me know if you want to have a look at the obj file.
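Given the earlier fix in this thread, one quick sanity check on an exported file is whether any `v` line carries more than three coordinates (`v x y z r g b` would indicate per-vertex color). `obj_has_vertex_colors` is a hypothetical helper for illustration:

```python
def obj_has_vertex_colors(path):
    """True if any 'v' line of the .obj has more than 3 values,
    i.e. per-vertex color is appended (v x y z r g b)."""
    with open(path) as fh:
        for line in fh:
            parts = line.split()
            if parts and parts[0] == "v" and len(parts) > 4:
                return True
    return False
```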

TimoBolkart commented 2 years ago

Hi @AyushP123, most commonly, if the rigid alignment is this slow, either the mesh is not properly loaded (e.g., because vertex colors are part of the file, which you mentioned you removed) or the landmarks are wrong. If the landmarks are not right, the initial rigid alignment fails, and so the subsequent rigid alignment based on the scan-to-mesh distance does not converge.

Checklist:

  1. After loading the predicted mesh in `compute_error.py` (https://github.com/soubhiksanyal/now_evaluation/blob/a5c348f004edbcbdf7b61cb51e83c707961414ba/compute_error.py#L157), output the loaded mesh with `predicted_mesh.write_obj('YOUR_OUTPUT_FILENAME.obj')` and try to load this output mesh with MeshLab. If it loads and looks like your input mesh, the loader is not causing the problem.
  2. Check the initial rigid alignment in `scan2mesh_computations.py` (https://github.com/soubhiksanyal/now_evaluation/blob/a5c348f004edbcbdf7b61cb51e83c707961414ba/scan2mesh_computations.py#L111) with `Mesh(predicted_mesh_vertices_aligned, predicted_mesh_faces).write_obj('YOUR_OUTPUT_FILENAME.obj')`. Is it rigidly aligned to the input scan, which you can output here with `masked_gt_scan.write_obj('gt_scan_val.obj')`? If not, your landmarks are wrong.
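As a crude programmatic complement to step 2 (without opening MeshLab), one can compare the bounding-box centers of the aligned prediction and the masked scan. `roughly_aligned` and its tolerance are hypothetical and only catch grossly failed alignments:

```python
import numpy as np

def roughly_aligned(pred_vertices, scan_vertices, rel_tol=0.25):
    """Crude check: are the bounding-box centers of the aligned
    prediction and the scan close, relative to the scan's diagonal?"""
    p = np.asarray(pred_vertices, dtype=float)
    s = np.asarray(scan_vertices, dtype=float)
    cp = 0.5 * (p.min(axis=0) + p.max(axis=0))  # prediction bbox center
    cs = 0.5 * (s.min(axis=0) + s.max(axis=0))  # scan bbox center
    diag = np.linalg.norm(s.max(axis=0) - s.min(axis=0))
    return np.linalg.norm(cp - cs) < rel_tol * diag
```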
AyushP123 commented 2 years ago

Hi @TimoBolkart,

Thank you for your response. The initial rigid alignment in scan2mesh_computations.py is wrong. I have visualized the landmarks; they seem correct and as expected by the NoW benchmark, so I am unable to understand what is going wrong here. I am attaching a sample for your reference; it corresponds to the individual FaMoS_180424_03335_TA with the IMG_0054.jpg file in multiview_expressions:

https://drive.google.com/drive/folders/1QKb77QQuoKMhMyRMGuvfu2U73E67nM_Y?usp=sharing

TimoBolkart commented 2 years ago

Hi @AyushP123,

The landmarks you provided are correct, that is true. If I run the now_evaluation code with your example, the rigid alignment looks like this, which seems correct. [image: screenshot of the rigid alignment result] Do you have any changes in the now_evaluation code? It seems the code "as is" would not load your landmarks, since the naming convention of your landmark files is not correct (each is supposed to be named like the mesh, but with a .txt or .npy extension instead).
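The naming convention described above can be sketched as a small lookup: given a mesh path, the matching landmark file shares its stem and carries a `.npy` or `.txt` extension. `landmark_path_for` is a hypothetical illustration of that convention, not code from the repo:

```python
import os

def landmark_path_for(mesh_path):
    """Per the thread, the landmark file must be named like the mesh,
    with a .npy or .txt extension instead of the mesh extension."""
    stem, _ = os.path.splitext(mesh_path)
    for ext in (".npy", ".txt"):
        candidate = stem + ext
        if os.path.exists(candidate):
            return candidate
    return None  # no matching landmark file -> landmarks won't load
```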

AyushP123 commented 2 years ago

Thank you so much for your response @TimoBolkart; that was the bug in my code for generating the landmarks. I just have one more doubt: after running compute_error.py, I get a list of per-vertex distances for all the predicted meshes. How do we compute the mean, std, and median of the errors that are finally reported for a model?

Do we

  1. Compute a mean error per predicted mesh (i.e., average the per-vertex errors of each mesh to get a single value per mesh) and then compute the mean, median, and std over all the predicted meshes.
  2. Pool the errors across all vertices of all meshes into one collection and compute the mean, median, and std over this whole collection.

Once again thank you for your responses they were super helpful.

TimoBolkart commented 2 years ago

Great, thank you for the feedback.

We compute the mean, median, and std across all distances (all scans have a similar number of vertices). Please have a look at the provided cumulative_errors.py script, which generates the cumulative error curves and also computes the mean, median, and std errors.
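In other words, it is option 2 from the question above: pool all per-vertex distances before aggregating. A minimal NumPy sketch of that pooling (the function name is illustrative; the authoritative implementation is cumulative_errors.py):

```python
import numpy as np

def pooled_error_stats(per_mesh_distances):
    """Concatenate the per-vertex distances of all meshes into one
    collection and report mean, median, and std over the pooled set."""
    all_d = np.concatenate([np.asarray(d, dtype=float).ravel()
                            for d in per_mesh_distances])
    return float(all_d.mean()), float(np.median(all_d)), float(all_d.std())
```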