I am getting the error described below when running train.py for cifar-10. Note that I previously ran make_datasets.py.
$ python train.py --dataset=cifar10
Epoch 0:   0%|          | 0/3125 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "train.py", line 256, in <module>
    model = train(args)
  File "train.py", line 174, in train
    for inputs, _ in tqdm(dataloader):
  File "/home/exx/anaconda3/lib/python3.7/site-packages/tqdm/_tqdm.py", line 1022, in __iter__
    for obj in iterable:
  File "/home/exx/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 615, in __next__
    batch = self.collate_fn([self.dataset[i] for i in indices])
  File "/home/exx/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 615, in <listcomp>
    batch = self.collate_fn([self.dataset[i] for i in indices])
  File "/home/exx/anaconda3/lib/python3.7/site-packages/torchvision/datasets/cifar.py", line 124, in __getitem__
    img = self.transform(img)
  File "/home/exx/anaconda3/lib/python3.7/site-packages/torchvision/transforms/transforms.py", line 60, in __call__
    img = t(img)
  File "/home/exx/anaconda3/lib/python3.7/site-packages/torchvision/transforms/transforms.py", line 736, in __call__
    if tensor.size(0) * tensor.size(1) * tensor.size(2) != self.transformation_matrix.size(0):
RuntimeError: Dimension out of range (expected to be in range of [-1, 0], but got 1)
It looks like the failing transform (torchvision's LinearTransformation, the last frame in the traceback) expects the input tensor (the image) to be 3-D, in channel x height x width shape: it reads tensor.size(0), tensor.size(1), and tensor.size(2). The "Dimension out of range (expected to be in range of [-1, 0], but got 1)" error means the tensor reaching it only has one dimension, so the code may have been written to handle only grayscale images (just 1 channel), not CIFAR-10's 3-channel images.
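To see concretely why this breaks, here is a minimal sketch (it assumes only a standard PyTorch install; the 28x28 grayscale dimensions are hypothetical, picked to stand in for a whitening matrix built from 1-channel images):

```python
import torch

# A CIFAR-10 image after ToTensor(): 3 channels x 32 height x 32 width.
cifar_img = torch.rand(3, 32, 32)

# LinearTransformation compares C * H * W against the matrix's first
# dimension, so a matrix built from grayscale 28x28 images
# (D = 1 * 28 * 28 = 784) can never match CIFAR-10 (D = 3 * 32 * 32 = 3072).
n_elements = cifar_img.size(0) * cifar_img.size(1) * cifar_img.size(2)
print(n_elements)  # 3072, not 784

# The RuntimeError in the traceback appears one step earlier than that
# comparison: if the tensor reaching the transform is 1-D (e.g. already
# flattened), tensor.size(1) asks for a dimension the tensor does not have.
flat = torch.rand(784)
try:
    flat.size(1)
except (IndexError, RuntimeError) as e:
    print(e)  # Dimension out of range (expected to be in range of [-1, 0], but got 1)
```

So checking what shape make_datasets.py saves the images in (flattened 1-D vs. 3-D C x H x W), and what D the transformation matrix was computed for, should pinpoint the mismatch.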