JDAI-CV / Down-to-the-Last-Detail-Virtual-Try-on-with-Detail-Carving

Virtual try-on under arbitrary poses

I have some problems when running demo.sh #6

Closed: lwhkop closed this issue 4 years ago

lwhkop commented 4 years ago
1. I have downloaded the dataset and split it into train/test folders. However, many folders in all.rar have no label, and the txt file contains many duplicate samples, so I ended up with only 1417 folders in the test folder and 6443 folders in the train folder. I don't know whether this is the right way to process the data. Would you mind explaining how the original data should be processed, and giving the expected numbers of train and test samples?

2. When I use the data that I split to run demo.sh, I get the following error (a possible path-separator fix is sketched at the end of this comment):

   Traceback (most recent call last):
     File "demo.py", line 185, in <module>
       forward(opt, paths, 4, opt.forward_save_path)
     File "demo.py", line 107, in forward
       for i, result in enumerate(val_dataloader):
     File "D:\anaconda\lib\site-packages\torch\utils\data\dataloader.py", line 346, in __next__
       data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
     File "D:\anaconda\lib\site-packages\torch\utils\data\_utils\fetch.py", line 44, in fetch
       data = [self.dataset[idx] for idx in possibly_batched_index]
     File "D:\anaconda\lib\site-packages\torch\utils\data\_utils\fetch.py", line 44, in <listcomp>
       data = [self.dataset[idx] for idx in possibly_batched_index]
     File "E:\MPV_Dataset\Detailed-virtual-try-on\data\demo_dataset.py", line 74, in __getitem__
       source_splitext = os.path.join(source_splitext.split('/')[0], source_splitext.split('/')[2])
   IndexError: list index out of range

Then, after setting opt.warp_cloth to False, I get another error:

   Traceback (most recent call last):
     File "demo.py", line 185, in <module>
       forward(opt, paths, 4, opt.forward_save_path)
     File "demo.py", line 107, in forward
       for i, result in enumerate(val_dataloader):
     File "D:\anaconda\lib\site-packages\torch\utils\data\dataloader.py", line 346, in __next__
       data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
     File "D:\anaconda\lib\site-packages\torch\utils\data\_utils\fetch.py", line 44, in fetch
       data = [self.dataset[idx] for idx in possibly_batched_index]
     File "D:\anaconda\lib\site-packages\torch\utils\data\_utils\fetch.py", line 44, in <listcomp>
       data = [self.dataset[idx] for idx in possibly_batched_index]
     File "E:\MPV_Dataset\Detailed-virtual-try-on\data\demo_dataset.py", line 86, in __getitem__
       source_parse_shape = self.transforms['1'](source_parse_shape)  # [-1,1]
     File "D:\anaconda\lib\site-packages\torchvision\transforms\transforms.py", line 61, in __call__
       img = t(img)
     File "D:\anaconda\lib\site-packages\torchvision\transforms\transforms.py", line 166, in __call__
       return F.normalize(tensor, self.mean, self.std, self.inplace)
     File "D:\anaconda\lib\site-packages\torchvision\transforms\functional.py", line 217, in normalize
       tensor.sub_(mean[:, None, None]).div_(std[:, None, None])
   IndexError: too many indices for tensor of dimension 0

Would you mind giving me some suggestions for solving these? Thanks.
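One possible cause of the first traceback, given the Windows-style paths in the log: the image-list entry reaches `__getitem__` with backslash separators, so `split('/')` returns a single element and index `[2]` is out of range. Below is a minimal sketch under that assumption; `normalize_relpath` is a hypothetical helper and the example path is a placeholder, only the failing line from the traceback is reproduced, not the repository's actual surrounding code.

```python
import os

def normalize_relpath(source_splitext):
    """Hypothetical helper: make the '/'-based split robust to Windows paths.

    Line 74 of demo_dataset.py (per the traceback) does
    os.path.join(source_splitext.split('/')[0], source_splitext.split('/')[2]);
    if the entry arrives with backslashes, split('/') yields one element and
    indexing [2] raises IndexError.
    """
    source_splitext = source_splitext.replace('\\', '/')
    parts = source_splitext.split('/')
    if len(parts) < 3:
        raise ValueError('unexpected path layout: %r' % source_splitext)
    # Same join as the original line, applied only after normalizing separators.
    return os.path.join(parts[0], parts[2])

# Placeholder example: a backslash-separated entry now splits correctly.
print(normalize_relpath('some_dir\\sub_dir\\image.jpg'))
```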

thanhtin1997 commented 4 years ago

In demo.py, edit line 93: change "transforms.Normalize((0.5), (0.5))" to "transforms.Normalize((0.5,), (0.5,))". I'm using PyTorch from https://download.pytorch.org/whl/cu100/torch-1.2.0-cp36-cp36m-win_amd64.whl with torchvision==0.2.2.post3.
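For context: `(0.5)` is just the float 0.5, which torchvision converts to a 0-dim tensor, so `mean[:, None, None]` in `F.normalize` fails with "too many indices for tensor of dimension 0"; the one-element tuple `(0.5,)` keeps the mean/std 1-D. A minimal sketch of the two variants (the variable names are illustrative, not taken from demo.py):

```python
from torchvision import transforms

# One value per channel: a 1-element tuple for single-channel inputs,
# a 3-element tuple for RGB images.
normalize_1ch = transforms.Normalize((0.5,), (0.5,))
normalize_3ch = transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))

# transforms.Normalize((0.5), (0.5)) would pass plain floats instead,
# which is what triggers the IndexError in the traceback above.
```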

thanhtin1997 commented 4 years ago

If you use PyTorch 0.4.1, add:

augment['3'] = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
augment['1'] = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,), (0.5,))])
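A self-contained sketch of that augment dictionary with a quick check on dummy inputs; the image size and the keys '1'/'3' simply follow the snippet above and are not taken from the repository's demo.py:

```python
import numpy as np
from PIL import Image
from torchvision import transforms

augment = {
    # 3-channel normalization for RGB images
    '3': transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
    ]),
    # 1-channel normalization for grayscale inputs such as parsing/shape maps
    '1': transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.5,), (0.5,)),
    ]),
}

rgb = Image.fromarray(np.zeros((256, 192, 3), dtype=np.uint8))   # dummy RGB image
gray = Image.fromarray(np.zeros((256, 192), dtype=np.uint8))     # dummy 1-channel image

print(augment['3'](rgb).shape)   # torch.Size([3, 256, 192]), values scaled to [-1, 1]
print(augment['1'](gray).shape)  # torch.Size([1, 256, 192]), values scaled to [-1, 1]
```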