Open mh-nyris opened 4 years ago
I've also encountered this issue; my solution is to transform before the dataloader:
dataset = ImagePathDataset(files,
                           transforms=TF.Compose([
                               TF.Resize((299, 299)),
                               TF.ToTensor(),
                           ]))
Thank you for the solution. Here is the reference architecture of the InceptionV3 model
I observe that such a resize operation changes the FID score significantly.
The InceptionV3 model in PyTorch is trained on ImageNet. According to the official documentation, it expects inputs normalized with the ImageNet mean and standard deviation. However, if you only use ToTensor(), the values are merely scaled to the range 0-1; that normalization is not applied.
https://pytorch.org/vision/stable/models/generated/torchvision.models.inception_v3.html
Hey there,
I ran into problems because my images are of variable size (the read-in list couldn't be converted to an np.array). Does it make sense to resize the images in imread?
Image.open(filename).resize((299, 299))
Best, Mike
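Resizing at read time does make the arrays stackable. A hedged sketch of that approach (the two dummy images stand in for a variable-size dataset; note that PIL's resize interpolation may differ from torchvision's Resize default, which can itself shift the FID score):

```python
import os
import tempfile

import numpy as np
from PIL import Image

# Two dummy images of different sizes, mimicking a variable-size dataset.
tmpdir = tempfile.mkdtemp()
paths = []
for i, size in enumerate([(64, 48), (120, 200)]):
    p = os.path.join(tmpdir, f"img{i}.png")
    Image.new("RGB", size).save(p)
    paths.append(p)

# Resize each image to a common shape at read time; stacking then works.
imgs = [np.asarray(Image.open(p).convert("RGB").resize((299, 299)))
        for p in paths]
batch = np.stack(imgs)
print(batch.shape)  # (2, 299, 299, 3)
```

Without the resize, np.stack would raise an error because the per-image arrays have different shapes.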