PyTorch implementation of Head2Head and Head2Head++. It can be used to fully transfer the head pose, facial expressions, and eye movements from a source video to a target identity.
The LFSM dataset registration page now returns a 404. In short, what types of inference can still be done with this library if the LFSM dataset is unavailable?
Can new faces be trained, and how does LFSM factor into that process, according to the README?
Can you describe the format of the LFSM data used, so that alternative datasets can be formatted to meet the same requirements?