Hi @haofanwang!

Many thanks for sharing your work! Right now, however, the code is impossible to run due to missing files: all of the "multi" datasets require labels in .txt format, which are missing. Can you provide them, please?

Also, if you could make the files available somewhere other than Baidu (which is inaccessible from outside China), that would be much appreciated :)
Hi @SnowRipple, thanks for following our work. I have already put all the label files in ./tools, please check there. I'm very busy at the moment, but I will try to put the pretrained models on Google Drive soon.

You could also use train.py to train your own model. If you have any other questions about the training code, feel free to ask.
Many thanks @haofanwang for replying so fast!
The text files in ./tools contain the list of files for each dataset (file1, file2, etc.). However, your "multi" dataset definitions expect the pose labels to be .txt, even though they are .mat in the original datasets, e.g. datasets.py line 38:

pose_path = os.path.join(self.data_dir, self.y_train[index] + '_pose' + self.annot_ext)

where self.annot_ext is "txt". That's why I am confused: why .txt and not .mat?
The files in ./tools only store the relative path of each image, not the pose labels. Once you download the datasets themselves, such as AFLW or BIWI, you will find a corresponding .txt or .mat file for each image. @SnowRipple
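In case it helps, below is a minimal conversion sketch for generating the .txt files that datasets.py expects from the .mat annotations. It assumes AFLW2000/300W-LP-style .mat files whose 'Pose_Para' array starts with pitch, yaw, roll in radians, and it reuses the '_pose' suffix from the line quoted above; both are assumptions to verify against your copy of the data.

```python
# Hedged sketch: turn AFLW2000/300W-LP-style .mat pose annotations into
# the '<name>_pose.txt' files that the quoted datasets.py line looks for.
# Assumption: each .mat file has a 'Pose_Para' array whose first three
# entries are pitch, yaw, roll in radians -- check your dataset first.
import glob
import os
import scipy.io as sio

data_dir = '/path/to/AFLW2000'  # placeholder: your dataset location

for mat_path in glob.glob(os.path.join(data_dir, '*.mat')):
    pose = sio.loadmat(mat_path)['Pose_Para'][0][:3]  # pitch, yaw, roll
    txt_path = os.path.splitext(mat_path)[0] + '_pose.txt'
    with open(txt_path, 'w') as f:
        f.write(' '.join('%f' % float(a) for a in pose) + '\n')
```

After the conversion, each image should sit next to a matching <name>_pose.txt file, which is what the annot_ext lookup then resolves.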
Hi @haofanwang, I fixed the datasets problem, but now I'm getting an error when trying to backpropagate all the losses with torch.autograd.backward(loss_seq, grad_seq):
/opt/conda/conda-bld/pytorch_1532582123400/work/aten/src/THCUNN/ClassNLLCriterion.cu:105: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [6,0,0] Assertion `t >= 0 && t < n_classes` failed.
Traceback (most recent call last):
File "/home/snow_ripple/workspace/accurate-head-pose/train_hopenet.py", line 227, in <module>
Any suggestions?
You can set num_workers to 0 when you load your dataset. Please let me know if you have further questions. @SnowRipple
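For context, that ClassNLLCriterion assertion (t >= 0 && t < n_classes) fires when a target class index falls outside the valid range, so the binned labels are worth checking too. Below is a rough debugging sketch; names like train_dataset and num_bins are placeholders rather than the repo's actual variables, and the four-value unpacking follows Hopenet's dataset convention, so adjust both to your code.

```python
# Debugging sketch with placeholder names (train_dataset, num_bins).
# num_workers=0 keeps loading in the main process so CUDA/loader errors
# come with a readable traceback; the assert catches out-of-range bin
# indices before they reach the NLL kernel that raised the assertion.
import torch

train_loader = torch.utils.data.DataLoader(
    dataset=train_dataset,
    batch_size=16,
    shuffle=True,
    num_workers=0)  # single-process loading while debugging

num_bins = 66  # Hopenet-style number of angle bins; match your config
for i, (images, labels, cont_labels, _) in enumerate(train_loader):
    # Every binned label must be a valid class index in [0, num_bins).
    assert int(labels.min()) >= 0 and int(labels.max()) < num_bins, \
        'out-of-range bin label in batch %d: [%d, %d]' % (
            i, int(labels.min()), int(labels.max()))
```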
It seems that you have figured your problem out. Closed.