Haoru opened this issue 5 years ago
I used a dataset of 512×512 images; the pose feature vector and the noise vector are both 512-dimensional. Is this the problem?
Hi, your work is really great. I want to build on the part that embeds the pose map, but I don't see how the pose data is organized, only how it is loaded. Is the pose data also placed in the dataroot as an independent dataset? I ask because I also want to do feature-embedding work.
Hey @hqulxw123, the pose files are included here: https://github.com/yxgeee/FD-GAN#datasets. You can download them directly via the provided Google/Baidu links.
Sorry, that's not quite what I meant. I want to use a different auxiliary image instead of a pose map, so I need the embedding model from your article to concatenate the features. Since I started studying your code, it has not been clear to me how to prepare my auxiliary dataset: do you generate the pose graph before it is fed in, or build it inside the model after input? Your suggestions would be very helpful to me.
Hi @hqulxw123, I generate the pose graph after loading the pose files as input; the files store human keypoint locations. You can prepare your auxiliary dataset following https://github.com/yxgeee/FD-GAN/tree/master/reid/datasets.
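For anyone preparing a similar auxiliary dataset: a common way to turn stored keypoint locations into a pose map is to render one Gaussian heatmap per keypoint. The sketch below is only illustrative, not the repository's exact code; the keypoint format, the (-1, -1) missing-joint convention, and the sigma value are assumptions.

```python
import numpy as np

def keypoints_to_heatmaps(keypoints, height, width, sigma=6.0):
    """Render one Gaussian heatmap per (x, y) keypoint.

    keypoints: list of (x, y) pixel coordinates; (-1, -1) marks a
    missing joint (assumed convention) and yields an all-zero map.
    Returns an array of shape (num_keypoints, height, width).
    """
    ys, xs = np.mgrid[0:height, 0:width]
    maps = np.zeros((len(keypoints), height, width), dtype=np.float32)
    for i, (x, y) in enumerate(keypoints):
        if x < 0 or y < 0:
            continue  # joint not detected: leave the map all zeros
        maps[i] = np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
    return maps

# Example: two joints, one missing, rendered onto a 64x32 grid.
pose_map = keypoints_to_heatmaps([(10, 20), (-1, -1)], height=64, width=32)
```

The stacked heatmaps can then be concatenated channel-wise with the image (or any other auxiliary signal) before feeding the network.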
Your work is so great, thank you for your kind reply.
Traceback (most recent call last):
  File "train.py", line 118, in <module>
    main()
  File "train.py", line 78, in main
    model.optimize_parameters()
  File "/media/ouc/4T_B/zhr/FD-GAN/fdgan/model.py", line 218, in optimize_parameters
    self.forward()
  File "/media/ouc/4T_B/zhr/FD-GAN/fdgan/model.py", line 158, in forward
    self.fake = self.net_G(B_map, A_id.view(A_id.size(0), A_id.size(1), 1, 1), z.view(z.size(0), z.size(1), 1, 1))
  File "/home/ouc/miniconda3/envs/gogo/lib/python3.6/site-packages/torch/nn/modules/module.py", line 357, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ouc/miniconda3/envs/gogo/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 71, in forward
    return self.module(*inputs[0], **kwargs[0])
  File "/home/ouc/miniconda3/envs/gogo/lib/python3.6/site-packages/torch/nn/modules/module.py", line 357, in __call__
    result = self.forward(*input, **kwargs)
  File "/media/ouc/4T_B/zhr/FD-GAN/fdgan/networks.py", line 175, in forward
    feature = torch.cat((reid_feature, pose_feature, noise), dim=1)
RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 1. Got 1 and 9 in dimension 2 at /pytorch/torch/lib/THC/generic/THCTensorMath.cu:111
I encountered this error in stage II. Can you help me?
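For reference, the error means the three tensors passed to torch.cat have different spatial sizes: torch.cat requires every dimension except the concatenation dimension (here dim=1, the channels) to match, and here dimension 2 is 1 for one tensor but 9 for another. With 512×512 inputs the pose encoder likely produces a larger spatial map than the 1×1 identity and noise vectors. The sketch below reproduces the mismatch and shows one way to make the shapes compatible by expanding the 1×1 vectors over the pose feature's spatial grid; whether that matches the intended architecture depends on the model, so treat it as a debugging aid, not the official fix.

```python
import torch

# Shapes chosen to mirror the error message: dimension 2 is 1 vs. 9.
reid_feature = torch.randn(4, 512, 1, 1)  # identity vector, 1x1 spatial
noise        = torch.randn(4, 512, 1, 1)  # noise vector, 1x1 spatial
pose_feature = torch.randn(4, 512, 9, 9)  # pose map with 9x9 spatial size

# torch.cat((reid_feature, pose_feature, noise), dim=1) raises:
#   RuntimeError: Sizes of tensors must match except in dimension 1.
#   Got 1 and 9 in dimension 2 ...

# One workaround: broadcast the 1x1 vectors over the pose grid so all
# three tensors share the same spatial size before concatenating.
h, w = pose_feature.shape[2:]
feature = torch.cat((reid_feature.expand(-1, -1, h, w),
                     pose_feature,
                     noise.expand(-1, -1, h, w)), dim=1)
print(feature.shape)  # torch.Size([4, 1536, 9, 9])
```

Printing the `.shape` of each tensor just before the `torch.cat` call in networks.py is the quickest way to see which input size assumption is being violated.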