emlcpfx opened 1 week ago
Hi there, the output is simply the trained weights for the three encoders, which are saved in the log_path. The training data are the same datasets used for the SMIRK pipeline, but only the predicted landmarks and the predicted MICA shape parameters are used as targets. The existing code fully supports this.
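For anyone else wondering what to do with the saved weights: below is a minimal sketch of loading them back, assuming they are standard PyTorch state dicts written to log_path. The encoder names and checkpoint file names here are hypothetical, not the repo's actual layout; check your own log_path for the real file names.

```python
# Hypothetical sketch: load the three trained encoder checkpoints from log_path.
# The encoder names and ".pt" file names are assumptions for illustration only.
import os
import torch


def load_encoder_weights(log_path):
    """Return a dict of state dicts for whichever encoder checkpoints exist."""
    weights = {}
    for name in ("pose_encoder", "shape_encoder", "expression_encoder"):
        ckpt = os.path.join(log_path, f"{name}.pt")
        if os.path.exists(ckpt):
            # map_location="cpu" so this works without a GPU
            weights[name] = torch.load(ckpt, map_location="cpu")
    return weights
```

You would then load each state dict into the matching encoder module with `encoder.load_state_dict(...)`.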
Does that mean it's virtually impossible to train this to handle extreme profile views, or even a head turned almost fully backwards?
Because if MICA can't place the head in those poses, then you can't generate the training data?
What file format does this output? Do you have an example image-and-data pair you could share?