AaronJackson / vrn

:man: Code for "Large Pose 3D Face Reconstruction from a Single Image via Direct Volumetric CNN Regression"
http://aaronsplace.co.uk/papers/jackson2017recon/
MIT License
4.52k stars 747 forks

Is it harder to train if the neck & ears are added? #137

Closed wangxinjun56 closed 2 years ago

AaronJackson commented 4 years ago

Probably depends on the quality of the dataset. I'm not aware of any large face scan datasets that work well on in-the-wild images and also have properly aligned necks and ears.

wangxinjun56 commented 4 years ago

I made samples like this. Will it be harder to train? (attached screenshots: QQ图片20191015144224, QQ图片20191015144235)

AaronJackson commented 4 years ago

It looks ok, but I can only see one sample. The most important thing is for the data to be consistent throughout the whole dataset.

wangxinjun56 commented 4 years ago

Being fully consistent is impossible, since the pose, size, etc. are not the same; but they do follow the same rules of variation, so can it still be trained well?

AaronJackson commented 4 years ago

Well, you'd probably get better performance by fixing the size. It's common to normalise the size of the face in some way regardless of what you are doing. Pose will of course need to vary if you want it to work on different poses.

If you do what I did in the paper, it should train fine.
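For illustration, a minimal sketch of one way to normalise face size before training (this is not code from this repository; the bounding-box input, crop margin, and target size are assumptions):

```python
import cv2

def normalise_face(image, bbox, target_size=192):
    """Crop around a face bounding box (x, y, w, h) and rescale so every
    training sample has roughly the same face scale."""
    x, y, w, h = bbox
    cx, cy = x + w // 2, y + h // 2
    side = int(max(w, h) * 1.3)           # margin so the whole head fits
    x0 = max(cx - side // 2, 0)
    y0 = max(cy - side // 2, 0)
    x1 = min(x0 + side, image.shape[1])
    y1 = min(y0 + side, image.shape[0])
    crop = image[y0:y1, x0:x1]
    scale = target_size / max(x1 - x0, y1 - y0)
    crop = cv2.resize(crop, (target_size, target_size))
    # The ground-truth volume must be translated/scaled with the same
    # parameters so it stays aligned with the cropped image.
    return crop, scale
```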

wangxinjun56 commented 4 years ago

Thanks, I will normalise the size. I just tried a single hourglass; it doesn't seem to converge. Have you tried a single hourglass?

wangxinjun56 commented 4 years ago

I used 100 samples, two hourglasses, loss = sigmoid_cross_entropy_with_logits, volume size 256×256×200. The results look like this: (attached screenshots: QQ截图20191017083626, QQ截图20191017083644, QQ截图20191017083656, QQ截图20191017083715)
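For reference, a minimal sketch of that per-voxel loss in TensorFlow, assuming the shapes described above (this is not the original training code):

```python
import tensorflow as tf

def volume_loss(logits, targets):
    """logits: raw network output, targets: binary occupancy volume,
    both of shape (batch, 256, 256, 200) as described above."""
    # Element-wise sigmoid cross-entropy, one value per voxel.
    per_voxel = tf.nn.sigmoid_cross_entropy_with_logits(
        labels=targets, logits=logits)
    # Summing over all voxels gives a single scalar to backpropagate;
    # the same loss is applied after each hourglass when stacking.
    return tf.reduce_sum(per_voxel)
```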

wangxinjun56 commented 4 years ago

(attached screenshot: QQ截图20191017084818)

The total loss is the sum over all 256×256×200 per-voxel losses, so even if the total loss is trained down to something very small, there is no guarantee that every individual voxel loss is equally small.

For example, with s = (x + y + z)/3 we can make s small, but x may still be large while y is very small; nothing guarantees that x and y are both equally small.

If s = 1, it could be that x = 1, y = 1, z = 1, but it could also be x = 2, y = 0.5, z = 0.5; then x is not a small loss, so that part of the volume can't be trained well.

This is just a small thought of mine; maybe I'm wrong.
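A small illustration of that point (hypothetical numbers, just to show that the mean or sum can hide large individual errors):

```python
import numpy as np

# Two sets of per-voxel losses with the same mean.
a = np.array([1.0, 1.0, 1.0])
b = np.array([2.0, 0.5, 0.5])
print(a.mean(), b.mean())   # both 1.0
print(a.max(), b.max())     # 1.0 vs 2.0 -- the mean hides the outlier
# Tracking the max (or a high percentile) of the per-voxel loss alongside
# the mean makes it visible when some voxels are still badly fit.
```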

AaronJackson commented 4 years ago

100 training samples? No chance

wangxinjun56 commented 4 years ago

So, how many samples are needed at least?

huyanfei-cqupt commented 4 years ago

so ,how many samples at least?

@wangxinjun56 Hi, after reading your comments I have some questions I would like to ask you, and I hope you can help me answer them. Would it be convenient to leave your email address? Looking forward to your reply. Thanks.

gernhard1337 commented 3 years ago

Hello @wangxinjun56, could you be so kind as to share your training code? I am kinda new to this topic and not sure how to train it myself. I have a very detailed database and wanted to see whether a model trained on high-resolution data can improve the output.

AaronJackson commented 2 years ago

Just closing off some old issues.