Status: Closed (smittal10 closed this issue 2 years ago)
smittal10: I'm using the pip package for pytorch-fid, and I get the error below when I run the following command. The two folders contain images of varying sizes, but shouldn't the dataset utility handle this?

python -m pytorch_fid path/to/dataset1 path/to/dataset2

Answer: pytorch-fid currently expects all images in a folder to be of the same size, because otherwise the user would have to choose how images of different sizes are handled (cropping? resizing?). I recommend taking square crops; otherwise the images will get distorted when the network processes them (it resizes all inputs to 299x299).
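One way to work around the size mismatch is to preprocess each folder so every image is the same square size before running pytorch-fid. As a minimal sketch, here is the crop-box arithmetic for a centered square crop (the box could then be applied per image with a library such as Pillow's `Image.crop`; the function name is my own, not part of pytorch-fid):

```python
def center_square_crop_box(width, height):
    """Return (left, upper, right, lower) for the largest centered square
    that fits inside a width x height image."""
    side = min(width, height)
    left = (width - side) // 2
    upper = (height - side) // 2
    return (left, upper, left + side, upper + side)

# Example: a 640x480 landscape image keeps its central 480x480 square.
print(center_square_crop_box(640, 480))  # -> (80, 0, 560, 480)
# A 480x640 portrait image keeps its vertical center instead.
print(center_square_crop_box(480, 640))  # -> (0, 80, 480, 560)
```

After cropping (and optionally resizing to one common side length), all images in a folder share a shape, so the DataLoader can batch them.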
Error log:

Traceback (most recent call last):
  File "/opt/conda/envs/mindalle/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/opt/conda/envs/mindalle/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/opt/conda/envs/mindalle/lib/python3.8/site-packages/pytorch_fid/__main__.py", line 3, in <module>
    pytorch_fid.fid_score.main()
  File "/opt/conda/envs/mindalle/lib/python3.8/site-packages/pytorch_fid/fid_score.py", line 279, in main
    fid_value = calculate_fid_given_paths(args.path,
  File "/opt/conda/envs/mindalle/lib/python3.8/site-packages/pytorch_fid/fid_score.py", line 256, in calculate_fid_given_paths
    m1, s1 = compute_statistics_of_path(paths[0], model, batch_size,
  File "/opt/conda/envs/mindalle/lib/python3.8/site-packages/pytorch_fid/fid_score.py", line 240, in compute_statistics_of_path
    m, s = calculate_activation_statistics(files, model, batch_size,
  File "/opt/conda/envs/mindalle/lib/python3.8/site-packages/pytorch_fid/fid_score.py", line 225, in calculate_activation_statistics
    act = get_activations(files, model, batch_size, dims, device, num_workers)
  File "/opt/conda/envs/mindalle/lib/python3.8/site-packages/pytorch_fid/fid_score.py", line 129, in get_activations
    for batch in tqdm(dataloader):
  File "/opt/conda/envs/mindalle/lib/python3.8/site-packages/tqdm/std.py", line 1180, in __iter__
    for obj in iterable:
  File "/opt/conda/envs/mindalle/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 517, in __next__
    data = self._next_data()
  File "/opt/conda/envs/mindalle/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1199, in _next_data
    return self._process_data(data)
  File "/opt/conda/envs/mindalle/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1225, in _process_data
    data.reraise()
  File "/opt/conda/envs/mindalle/lib/python3.8/site-packages/torch/_utils.py", line 429, in reraise
    raise self.exc_type(msg)
RuntimeError: Caught RuntimeError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/opt/conda/envs/mindalle/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 202, in _worker_loop
    data = fetcher.fetch(index)
  File "/opt/conda/envs/mindalle/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch
    return self.collate_fn(data)
  File "/opt/conda/envs/mindalle/lib/python3.8/site-packages/torch/utils/data/_utils/collate.py", line 55, in default_collate
    return torch.stack(batch, 0, out=out)
RuntimeError: stack expects each tensor to be equal size, but got [3, 480, 640] at entry 0 and [3, 640, 480] at entry 1
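The final RuntimeError comes from torch.stack inside default_collate: a batch can only be stacked when every per-image tensor has exactly the same shape, and here one image is 480x640 while another is 640x480 (a landscape/portrait pair). A minimal pure-Python sketch of the precondition that the collate step enforces (the helper name is mine, for illustration only):

```python
def can_stack(shapes):
    """default_collate-style precondition: all tensor shapes must be
    identical before they can be stacked into one batch tensor."""
    return len(set(map(tuple, shapes))) <= 1

# The two offending images from the traceback: same pixel count,
# but different layout, so stacking fails.
print(can_stack([(3, 480, 640), (3, 640, 480)]))  # -> False
# After cropping/resizing everything to 299x299, stacking succeeds.
print(can_stack([(3, 299, 299), (3, 299, 299)]))  # -> True
```

This is why the error disappears once every image in a folder is preprocessed to a single common size, or once the DataLoader is run with batch_size=1 (a single tensor trivially satisfies the condition, at the cost of slower feature extraction).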