The code (PyTorch for testing & MATLAB for 3D plotting and evaluation) for our project: Joint 3D Face Reconstruction and Dense Face Alignment from a Single Image with 2D-Assisted Self-Supervised Learning (2DASL)
Hi, I recently read your paper, but I have a little confusion about the backward pass. In the paper, you show that the backward pass feeds the predicted 2D landmarks back in as the output x^2d. Do you mean that you replace x2d with x^2d to generate the 2D FLMs, keep the input image unchanged, and restart the forward training? If I am understanding it in a wrong way, could you please describe the backward pass in more detail?
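To make my understanding concrete, here is a minimal sketch of the two-pass cycle I am describing. All function names and shapes here are illustrative placeholders for the question, not your actual 2DASL code: pass 1 runs the model on the image plus FLM maps built from x2d, then pass 2 rebuilds the FLM maps from the predicted x^2d and runs the model again on the same, unchanged image.

```python
# Illustrative sketch of the cycle described in the question.
# make_flm_maps and model are hypothetical stand-ins, NOT the 2DASL API.

def make_flm_maps(landmarks):
    # Stand-in for converting 2D landmarks into the FLM input maps.
    return [("flm", pt) for pt in landmarks]

def model(image, flm_maps):
    # Stand-in network: "predicts" landmarks from image + FLM maps.
    return [(x + 1, y + 1) for (_, (x, y)) in flm_maps]

image = "input image (kept unchanged across both passes)"
x2d = [(10, 20), (30, 40)]  # detected/ground-truth 2D landmarks

# Pass 1: forward with FLM maps built from the original x2d.
x2d_hat = model(image, make_flm_maps(x2d))

# Pass 2 (the "backward pass" as I understand it): replace x2d with the
# predicted x^2d, regenerate the FLM maps, keep the image fixed, and
# run the forward pass again.
x2d_hat2 = model(image, make_flm_maps(x2d_hat))
```

Is this replace-and-rerun loop the procedure the paper intends, or does the backward pass involve something else (e.g. a separate loss on x^2d)?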