For training, you have to prepare 2D caricatures, ground-truth landmarks, and vertices of ground-truth 3D meshes. The ground-truth meshes must have the same topology as 'data/sub_mesh.obj', whose vertex array has size (3, 6144); they are created by the work https://arxiv.org/abs/1803.06802. You can download a dataset containing 3D meshes recovered by our method from the link in the "Comparison with us" part and use OpenMesh to read their vertices. You can use that dataset as your training set to have a look. Lastly, remember to set the 'train_path' for your training set as shown in "train_options.py". If you have a new ground-truth 3D mesh set or a different number of landmarks, you can change the settings of '--vertex_num' and '--landmark_num' in "train_options.py". Good luck!
However, we cannot make the ground-truth 3D meshes public, because they were created by that other method.
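For reference, a minimal sketch of reading mesh vertices with OpenMesh's Python bindings, as suggested above (the input path is illustrative; the final transpose is only needed if your pipeline expects the (3, 6144) layout):

```python
import openmesh as om

# Read a mesh that shares the topology of 'data/sub_mesh.obj'.
mesh = om.read_trimesh("data/sub_mesh.obj")

# points() returns an (N, 3) float array of vertex coordinates.
vertices = mesh.points()
print(vertices.shape)  # expected: (6144, 3) for sub_mesh.obj

# Transpose if your code expects the (3, 6144) layout mentioned above.
vertices_3xN = vertices.T
```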
@RainbowRui Thank you for your quick and effective response; I will try making a training set out of the test set in the "Comparison with us" part later.
@RainbowRui Thank you for your kind and useful advice. I have constructed a training set out of the dataset you provided and finished the training step, obtaining two models at epoch 1000, named models/resnet34_adam_1000.pth and models/mynet_adam_1000.pth.
As the next step, I want to generate some example meshes and landmarks with the two trained models to check their performance, but I did not find the generation code in this repo. How can I generate the resulting meshes and landmarks from the trained models?
Looking forward to your reply, thank you.
Firstly, you could set the '--test_*_path' options as shown in "train_options.py", e.g. '--test_lrecord_path' contains the paths for saving estimated landmarks. For this step, you can refer to the data provided by "Prepare some examples" in Part "Advanced Work". The images for testing must be preprocessed: crop the whole face roughly and resize to size (224, 224) (a small preprocessing sketch follows at the end of this comment).
Secondly, if you set the paths to the data provided by "Prepare some examples" in Part "Advanced Work", directly run:
```sh
mkdir record && mkdir record/lrecord && mkdir record/vrecord && python train.py --no_train --model1_path "model/resnet34_adam_1000.pth" --model2_path "model/mynet_adam_1000.pth"
```

as in "test.sh".
Thirdly, when you obtain the 3D vertices in '--test_vrecord_path', you can use "data/sub_mesh.obj" or your own mesh to recover the shape. The 2D landmarks detected by our method and by your pretrained models are all in '--test_lrecord_path'.
Our code contains both training and testing parts; I suggest you first run a test on the examples provided by "Prepare some examples" in Part "Advanced Work", and crop your testing images like the images in those examples.
Thanks for your attention.
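As a small illustration of the preprocessing mentioned in the first step above (a minimal sketch using OpenCV; the crop box is a placeholder, in practice you would take it from a face detector):

```python
import cv2

# Load a test image and crop the whole face roughly.
img = cv2.imread("example.jpg")  # illustrative input path
x, y, w, h = 40, 30, 300, 320    # placeholder face box; use a face detector in practice
face = img[y:y + h, x:x + w]

# Resize to the (224, 224) input size expected by the networks.
face = cv2.resize(face, (224, 224), interpolation=cv2.INTER_LINEAR)
cv2.imwrite("example_224.jpg", face)
```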
Hi @RainbowRui, I was trying to recover the estimated 3D face shape from the npy files in record/vrecord and the basic mesh data/sub_mesh.obj, but the generated result is not as expected, as shown in Attachment 1.
My idea to recover the 3D face shape is as follows:

1. Load the estimated vertex data (record/vrecord/2a7a46104330addb2d3e6777f78856b5_v.npy) into an np.array;
2. Load the face connectivity from data/sub_mesh.obj, i.e. extract the last 12004 lines of face data in data/sub_mesh.obj, and put them into an np.array.

The code implementing this idea is in Attachment 2. I think I must have made a mistake somewhere that leads to the wrong generated 3D face, especially in the connection topology of the vertices. I also wonder how you generated the resulting meshes shown in the paper.
Looking forward to your thoughts and guidance on this problem, thank you very much.
Attachment 1:
obj_gen_results.zip
Attachment 2:
npy2obj.py.zip
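In case the attachment is unavailable, a minimal sketch of such a conversion script (the npy2obj idea described above) might look like the following. It assumes the .npy array reshapes row-major to (N, 3) and that the 'f i j k' face lines in data/sub_mesh.obj can be reused verbatim; the output file name is illustrative:

```python
import numpy as np

# Load the estimated vertices; assumption: the array reshapes to (N, 3).
verts = np.load("record/vrecord/2a7a46104330addb2d3e6777f78856b5_v.npy").reshape(-1, 3)

# Reuse the face connectivity ('f i j k' lines) from the template mesh.
with open("data/sub_mesh.obj") as f:
    faces = [line for line in f if line.startswith("f ")]

# Write the vertices and the template faces out as a new OBJ file.
with open("recovered.obj", "w") as out:
    for x, y, z in verts:
        out.write(f"v {x} {y} {z}\n")
    out.writelines(faces)
```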
Hi @YokkaBear, I have just realized that you cannot directly use your own ground-truth meshes, since the related data in Folder "./data" are based on the mean face in Folder "./toy_example". If you need to infer the logic of each related file, you can refer to the "README.txt" of "./data" in the cloud drive and the code we supply, then replace all the data according to your meshes. This repository is mainly for doing comparisons with us conveniently, so I didn't put the whole pipeline in it. But in the future, I will try to make the whole dataset public. Please understand the inconvenience! Best wishes!
Thank you very much for your updates and guidance. I have generated reliable 3D face meshes from both your pretrained model and the model trained for 1000 epochs on the test set you provided.
To make it clear, I'd like to say more about my current project. It requires 3D reconstruction of normal human faces rather than the caricature faces studied in the paper, which is a little different.
For now, I am going to build a training dataset from another 3D face dataset to train your model. My dataset contains both face images (.jpg) and 3D face meshes (.obj), but lacks the vertex data in .npy form. My idea is to obtain the vertex data with the help of OpenCV and dlib, which can also detect the 68 landmarks of human faces. I wonder if it is reasonable to generate the vertex data this way for my project, thank you~
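If you try the dlib route, a minimal sketch of 68-landmark extraction might look like this (note these are 2D landmarks, not 3D vertices, which matches the clarification in the next reply; it assumes you have downloaded dlib's shape_predictor_68_face_landmarks.dat model, and the other file names are illustrative):

```python
import cv2
import dlib
import numpy as np

# Assumption: the 68-point model file has been downloaded separately.
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

img = cv2.imread("face.jpg")  # illustrative input path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

rects = detector(gray)
if rects:
    # Predict 68 (x, y) landmarks for the first detected face.
    shape = predictor(gray, rects[0])
    landmarks = np.array([[p.x, p.y] for p in shape.parts()])  # shape (68, 2)
    np.save("face_landmarks.npy", landmarks)
```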
@YokkaBear Excuse me, I don't quite understand what you mean. You say "my dataset contains both face images (.jpg) and face 3D meshes (.obj)", but I think the vertex data should be inferred from the 3D face meshes (.obj). Then you say "My idea is to obtain the vertex data with the help of opencv and dlib", so I think "the vertex data" may actually mean the 68 landmarks. If I were you, I would look for open-source landmark-detection papers from CVPR/ICCV/ECCV in the past several years, because some detection methods are designed for normal human faces and are trained on very large datasets. Good luck!
Yeah, "the vertex data" actually means the 68 landmarks; sorry for the misleading expression. Thank you for your advice on landmark detection methods, I will try it later.
Sorry to bother you, I have another question: is the proposed 3D-CaricatureFace model feasible for unsupervised learning, i.e. with unpaired 2D images and 3D face shapes? Thank you.
Excuse me. I have no answer to this question. Maybe you can try it.
@YokkaBear Can I ask how you constructed your own training dataset? I want to make a training dataset but have no idea how, thanks.
https://github.com/Juyong/CaricatureFace/issues/3#issuecomment-624435737 To construct your own dataset, I mainly referred to the comment above provided by the author; hope it can help you.
Hi @YokkaBear, did you train with your own meshes & caricatures? If I want to train, is it OK to use a mesh whose topology differs from the given mesh's?
Hi, thank you for your brilliant paper and work. I have run the test code with the pre-trained model successfully; however, I ran into some trouble when running the training code with the following command from the README:
The error printed is as below:
At first, I wondered whether some file needed for training was forgotten by the developer and not put into this repository. However, I noticed that the repo's README says "Firstly, prepare a training set. ...". So I also wonder: if no file is missing from the repo, how can I build up a training set from the data files provided (those in the form of links) in the README?
I would be very grateful for any help.