Closed zhangtianjia closed 2 years ago
You need a face tracker; you can try some open-source ones, e.g. [](url)
@gafniguy Hi, I used this tracker for dave, but the network does not converge. I think I made some mistakes adapting their tracked parameters to your NerFace framework (e.g., their focal length is normalized and they only provide axis-angle rotations). Could you please give more details about how to adapt those parameters to your NerFace code?
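For the axis-angle issue specifically: NerFace expects a 4x4 camera-to-world transform per frame, so the tracker's axis-angle vector has to be converted to a rotation matrix first. A minimal sketch using Rodrigues' formula (function names are my own, not from either repo):

```python
import numpy as np

def axis_angle_to_matrix(rotvec):
    """Convert an axis-angle vector to a 3x3 rotation matrix (Rodrigues' formula)."""
    theta = np.linalg.norm(rotvec)
    if theta < 1e-8:
        return np.eye(3)  # near-zero rotation: identity
    k = rotvec / theta  # unit rotation axis
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])  # cross-product (skew-symmetric) matrix
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def pose_to_transform(rotvec, translation):
    """Assemble a 4x4 rigid transform from an axis-angle rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = axis_angle_to_matrix(np.asarray(rotvec, dtype=np.float64))
    T[:3, 3] = translation
    return T
```

Note that whether the tracker's pose is world-to-camera or camera-to-world is a separate question; if the overlays come out wrong, inverting the assembled matrix is the first thing to try.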
I haven't actually tried using it; I just linked it as an example of a repo that should do what we need. I'm not versed in the details of their tracker, but if it uses a fixed focal length, then in theory you should be able to plug it into my pipeline as intrinsics (with e.g. cx = cy = 0.5).
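If the tracker reports normalized intrinsics (focal and principal point as fractions of the image size), they need to be scaled back to pixel units before use. A hypothetical helper, assuming the focal length is normalized by image width (check the tracker's docs; some normalize by height or the larger side):

```python
def denormalize_intrinsics(focal_norm, cx_norm, cy_norm, width, height):
    """Convert normalized pinhole intrinsics to pixel units.

    Assumption: focal_norm is a fraction of image width; cx_norm/cy_norm
    are fractions of width/height respectively (e.g. 0.5 = image center).
    """
    focal_px = focal_norm * width
    cx_px = cx_norm * width
    cy_px = cy_norm * height
    return focal_px, cx_px, cy_px
```

For a 512x512 frame with cx_norm = cy_norm = 0.5, this puts the principal point at the image center (256, 256), which is what the pipeline expects.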
If you manage to render correct overlays with my function render_debug_matrix, then you should be good to train the nerf.
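The overlay check boils down to projecting known 3D points (e.g. tracked landmarks) through the converted pose and intrinsics and seeing whether they land on the face in the image. render_debug_matrix itself isn't reproduced here; a minimal stand-in projection, assuming a NeRF-style camera that looks down -z (flip signs if the overlay comes out mirrored), might look like:

```python
import numpy as np

def project_points(points_world, c2w, focal, cx, cy):
    """Project Nx3 world points to pixel coordinates with a pinhole model.

    c2w is a 4x4 camera-to-world matrix; it is inverted to get world-to-camera.
    Assumption: camera looks down the -z axis (common NeRF convention).
    """
    w2c = np.linalg.inv(c2w)
    pts_h = np.concatenate([points_world, np.ones((len(points_world), 1))], axis=1)
    pts_cam = (w2c @ pts_h.T).T[:, :3]
    # Perspective divide by -z, then scale by focal and shift to the principal point.
    u = focal * pts_cam[:, 0] / -pts_cam[:, 2] + cx
    v = focal * pts_cam[:, 1] / -pts_cam[:, 2] + cy
    return np.stack([u, v], axis=1)
```

If a point on the optical axis does not project to (cx, cy), or the landmarks drift off the face, the pose convention (world-to-camera vs. camera-to-world, or the axis signs) is the likely culprit.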
Hi, have you solved this problem when using video-head-tracker? If yes, could you please give some hints for generating the required txt files?
@yaoyuan13 Hey, I've run into the same issue you had before: how to generate the required txt files. Did you ever find an answer?
Hello, thanks for your work. I would like to apply your approach to my own dataset, and I have captured a portrait video. However, I do not know how to obtain the expression values and the transformation matrices. Could you provide more details about how to estimate those parameters?