haofanwang / accurate-head-pose

Pytorch code for Hybrid Coarse-fine Classification for Head Pose Estimation
98 stars 22 forks

Txt Annotations for multi datasets #5

Closed SnowRipple closed 5 years ago

SnowRipple commented 5 years ago

Hi @haofanwang !

Many thanks for sharing your work! Right now, however, the code is impossible to run due to missing files: all the "multi" datasets require labels in txt format, and those labels are missing. Could you provide them, please?

Also if you could make the files available on some other location than baidu (which is inaccessible from outside of China) that would be much appreciated :)

haofanwang commented 5 years ago

Hi @SnowRipple, thanks for following our work. I have already put all label files in ./tools, please check it. I'm very busy now, but I will try to put the pretrained models on Google Drive soon.

You could also use train.py to train your own model. If you have any other questions about the training code, feel free to ask.

SnowRipple commented 5 years ago

Many thanks @haofanwang for replying so fast!

The text files in ./tools contain the list of files for each dataset (file1, file2, etc.). However, your "multi" dataset definitions expect the pose labels in txt format, even though they are .mat in the original datasets, e.g. datasets.py line 38:

    pose_path = os.path.join(self.data_dir, self.y_train[index] + '_pose' + self.annot_ext)

where self.annot_ext is "txt". That's what confuses me: why txt and not mat?

haofanwang commented 5 years ago

The files in ./tools only save the relative path of each image, not the pose labels. If you download a dataset such as AFLW or BIWI, you will find a corresponding .txt or .mat file for each image. @SnowRipple
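The layout described above can be sketched as follows (the helper names `load_relative_paths` and `build_label_path` are illustrative, not from the repo; the `'.txt'` default mirrors the `self.annot_ext` value quoted above):

```python
import os

def load_relative_paths(list_file):
    """Read one relative image path per line from a ./tools list file,
    skipping blank lines. These entries carry no pose information."""
    with open(list_file) as f:
        return [line.strip() for line in f if line.strip()]

def build_label_path(data_dir, rel_name, annot_ext=".txt"):
    """Mirror the construction on datasets.py line 38: the pose label is
    expected next to the image, named <rel_name>_pose<annot_ext>."""
    return os.path.join(data_dir, rel_name + "_pose" + annot_ext)
```

So for a list entry `sub/img001` and `data_dir="data"`, the loader looks for `data/sub/img001_pose.txt`, which is why the .mat annotations from the original datasets need to be converted (or `annot_ext` changed) before training.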

SnowRipple commented 5 years ago

Hi @haofanwang, I fixed the datasets problem, but now I'm getting an error when trying to backpropagate all losses with torch.autograd.backward(loss_seq, grad_seq):

```
/opt/conda/conda-bld/pytorch_1532582123400/work/aten/src/THCUNN/ClassNLLCriterion.cu:105: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [6,0,0] Assertion `t >= 0 && t < n_classes` failed.
Traceback (most recent call last):
  File "/home/snow_ripple/workspace/accurate-head-pose/train_hopenet.py", line 227, in <module>
    loss_reg_yaw = reg_criterion(yaw_predicted, label_yaw_cont)
  File "/home/snow_ripple/anaconda3/envs/playground/lib/python3.7/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/snow_ripple/anaconda3/envs/playground/lib/python3.7/site-packages/torch/nn/modules/loss.py", line 421, in forward
    return F.mse_loss(input, target, reduction=self.reduction)
  File "/home/snow_ripple/anaconda3/envs/playground/lib/python3.7/site-packages/torch/nn/functional.py", line 1716, in mse_loss
    return _pointwise_loss(lambda a, b: (a - b) ** 2, torch._C._nn.mse_loss, input, target, reduction)
  File "/home/snow_ripple/anaconda3/envs/playground/lib/python3.7/site-packages/torch/nn/functional.py", line 1674, in _pointwise_loss
    return lambd_optimized(input, target, reduction)
RuntimeError: reduce failed to synchronize: device-side assert triggered
```

Any suggestions?
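The assertion `t >= 0 && t < n_classes` from ClassNLLCriterion usually means a classification target fell outside `[0, n_classes)`, e.g. an angle outside the expected range produced a bin index past the last class, and the failure only surfaces later as a device-side assert. A CPU-side sanity check is one way to catch this before the CUDA kernel runs. The angle range and bin count below are illustrative assumptions, not necessarily the repo's exact values:

```python
def angle_to_bin(angle_deg, low=-99.0, high=99.0, n_bins=66):
    """Map a continuous angle to a class bin, clamping into range so the
    NLL target always satisfies 0 <= t < n_bins."""
    width = (high - low) / n_bins  # 3 degrees per bin under these assumptions
    idx = int((angle_deg - low) // width)
    return max(0, min(n_bins - 1, idx))

def validate_targets(angles, n_bins=66):
    """Bin a batch of angles and report any indices that would trip the
    CUDA-side assert if passed to an NLL-style loss."""
    bins = [angle_to_bin(a, n_bins=n_bins) for a in angles]
    bad = [b for b in bins if not (0 <= b < n_bins)]
    return bins, bad
```

Without the clamp, a sample annotated at, say, 150 degrees would map to bin 83 and trigger exactly this assert; the alternative to clamping is filtering such samples out during dataset preparation.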

haofanwang commented 5 years ago

You can set num_workers to 0 when you load your dataset. Please let me know if you have further questions. @SnowRipple

haofanwang commented 5 years ago

It seems you have figured your problem out. Closing.