Open rezalahmi opened 1 year ago
Hi, could you solve this issue? I have the same problem.
Hi, unfortunately no.
Hi!
@rezalahmi, is it the demo.py script you're running, but with different type_of_output values? Are you sure that the input to both inference runs is the same?
I think the problem is that the model expects a loose head crop (see Section 4, "Method", of the paper), and the image you're using has much more space around the head than required. The demo script uses different branches of the network for the "68 points" output type and for all the other subsets of points (the 2D branch and reprojections of the 3D branch, respectively). While the 2D branch is quite robust to different inputs, the 3D branch is quite sensitive to the head crop provided. I hope this helps explain the discrepancy in the results. For better results, please use either an off-the-shelf or a custom head detector, with an offset of ~10% (I refer you to https://github.com/PinataFarms/DAD-3DHeads/blob/3acc5c2a1177d354a1247c49e44a83ad682ea6a1/model_training/data/flame_dataset.py#L94 for the context, L96-99).
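For anyone hitting the same issue, here is a minimal sketch of what "detector box + ~10% offset" could look like before cropping. This is an illustration only, not the repo's actual implementation: the box format (x_min, y_min, x_max, y_max), the function name, and expanding by a fraction of the box size on each side are all my assumptions; check flame_dataset.py (linked above) for the exact logic used in training.

```python
def expand_bbox(x_min, y_min, x_max, y_max, offset=0.1):
    """Hypothetical helper: grow a head bounding box by `offset` (fraction
    of box width/height) on each side, to mimic the loose head crop the
    3D branch expects. Not the repo's actual code."""
    w = x_max - x_min
    h = y_max - y_min
    return (
        x_min - offset * w,
        y_min - offset * h,
        x_max + offset * w,
        y_max + offset * h,
    )

# Example: a 100x100 detector box grows to 120x120 with a 10% offset.
print(expand_bbox(50, 50, 150, 150))  # (40.0, 40.0, 160.0, 160.0)
```

Clamp the expanded box to the image bounds before cropping, otherwise the crop will fail near image edges.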
I think something is wrong! The model can detect the 68 landmarks correctly, but for the denser landmark sets and the mesh it generates wrong landmarks.