Closed jonatelintelo closed 1 year ago
Hi, thanks for the report. Which version do you use, and does it produce any stack trace? If the data is not sensitive, can you provide a reproducible case?
I am using torch-fidelity 0.3.0 and Scipy 1.11.3
The stack trace:
/ceph/csedu-scratch/project/jlintelo/venv/lib/python3.10/site-packages/torch_fidelity/datasets.py:16: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
img = torch.ByteTensor(torch.ByteStorage.from_buffer(img.tobytes())).view(height, width, 3)
Traceback (most recent call last):
File "/ceph/csedu-scratch/project/jlintelo/stargan/eval.py", line 121, in <module>
eval(config)
File "/ceph/csedu-scratch/project/jlintelo/stargan/eval.py", line 57, in eval
get_scores(config.dir, f'sponge{model}')
File "/ceph/csedu-scratch/project/jlintelo/stargan/get_scores.py", line 81, in get_scores
metrics = evaluate_metrics_matrix(root, sponge_model, kid_size)
File "/ceph/csedu-scratch/project/jlintelo/stargan/get_scores.py", line 69, in evaluate_metrics_matrix
metrics = evaluate_metrics(os.path.join(root, 'real'), os.path.join(root, sponge_model, 'eval', b), kid_size)
File "/ceph/csedu-scratch/project/jlintelo/stargan/get_scores.py", line 39, in evaluate_metrics
return torch_fidelity.calculate_metrics(
File "/ceph/csedu-scratch/project/jlintelo/venv/lib/python3.10/site-packages/torch_fidelity/metrics.py", line 258, in calculate_metrics
metric_fid = fid_statistics_to_metric(fid_stats_1, fid_stats_2, get_kwarg('verbose', kwargs))
File "/ceph/csedu-scratch/project/jlintelo/venv/lib/python3.10/site-packages/torch_fidelity/metric_fid.py", line 60, in fid_statistics_to_metric
assert False, 'Imaginary component {}'.format(m)
AssertionError: Imaginary component 0.031847303159878625
The data is not sensitive, it is just the standard CelebA for StarGAN, but I do not know how I could provide the data or a reproducible example to you easily.
I am trying to calculate FID for some GAN generated images.
In evaluate_metrics(os.path.join(root, 'real'), os.path.join(root, sponge_model, 'eval', b), kid_size)
I pass a directory containing the original .png images as the first argument, a directory with fake .png images from the GAN as the second, and the desired subset size (100) as the last.
Ideally I would want this to work with 2000 images rather than 2048, as my test set contains 2000 images.
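For context on where the assertion comes from: FID needs the matrix square root of the product of the two covariance estimates, and scipy.linalg.sqrtm can return spurious imaginary parts when the inputs are ill-conditioned. Here is a minimal sketch, not torch-fidelity's actual code; the function name fid_from_stats and the tolerance imag_tol are illustrative:

```python
import numpy as np
from scipy import linalg

rng = np.random.default_rng(0)

def fid_from_stats(mu1, cov1, mu2, cov2, imag_tol=1e-3):
    # Fréchet distance between two Gaussians. Several FID implementations
    # take the real part when the imaginary residue is small, whereas
    # torch-fidelity 0.3.0 asserted on any component above tolerance.
    covmean = linalg.sqrtm(cov1 @ cov2)
    if np.iscomplexobj(covmean):
        m = np.max(np.abs(covmean.imag))
        if m > imag_tol:
            raise ValueError(f"Imaginary component {m}")
        covmean = covmean.real
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(cov1 + cov2 - 2.0 * covmean))

# Well-conditioned case: the distance between identical statistics is ~0.
x = rng.standard_normal((500, 8))
mu, cov = x.mean(axis=0), np.cov(x, rowvar=False)
print(fid_from_stats(mu, cov, mu, cov))
```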
Could you please try the master version and see if the issue persists? This exact aspect of metric computation was changed since 0.3.0.
Here is how you can install master branch:
pip install -e git+https://github.com/toshas/torch-fidelity.git@master#egg=torch-fidelity
I have used this version and reinstalled my venv completely with all packages required. It seems to work now!
Does this implementation version still require at least 2048 images for FID to ensure results reflect visual quality?
Great stuff and thanks a lot.
There is no 2048-image limit, as far as I remember. Let me know if there is anything else I can do, and stay tuned for the 0.4.0 release!
Hi, just to make sure: by "no limit for 2048" you mean that the FID calculation should still be reliable for datasets containing fewer than 2048 images?
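For what it's worth, the 2048 that keeps coming up is the dimensionality of the Inception-v3 pool3 features FID uses, not a documented minimum sample count. With n < 2048 images the 2048x2048 covariance estimate is singular (rank at most n - 1), which is what makes the matrix square root numerically fragile at small subset sizes. A quick sketch of the rank deficiency, using the subset size of 100 from the report above:

```python
import numpy as np

rng = np.random.default_rng(0)
feat_dim = 2048   # Inception-v3 pool3 feature size used by FID
n_images = 100    # the subset size from the report above

features = rng.standard_normal((n_images, feat_dim))
cov = np.cov(features, rowvar=False)   # (2048, 2048) covariance estimate

# Mean subtraction costs one degree of freedom, so rank <= n_images - 1.
rank = np.linalg.matrix_rank(cov)
print(rank)
```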
Hey @toshas, I am getting the same error as well:
rank0: Traceback (most recent call last):
rank0: File "/netscratch/lahoti/diffusers/examples/unconditional_image_generation/train_unconditional.py", line 759, in
rank0: File "/netscratch/lahoti/diffusers/examples/unconditional_image_generation/train_unconditional.py", line 693, in main
rank0: metrics_dict = torch_fidelity.calculate_metrics(
rank0: File "/opt/conda/lib/python3.9/site-packages/torch_fidelity/metrics.py", line 258, in calculate_metrics
rank0: metric_fid = fid_statistics_to_metric(fid_stats_1, fid_stats_2, get_kwarg('verbose', kwargs))
rank0: File "/opt/conda/lib/python3.9/site-packages/torch_fidelity/metric_fid.py", line 60, in fid_statistics_to_metric
rank0: assert False, 'Imaginary component {}'.format(m)
rank0: AssertionError: Imaginary component 3.109008202826778e+142
I tried installing the master version as suggested (pip install -e git+https://github.com/toshas/torch-fidelity.git@master#egg=torch-fidelity), but I am still getting the error. I have added this command to the task prolog, since I am running the script in a container. Can you help me out here @toshas?
Hi,
when trying to calculate the FID score for images between two directories as inputs, the following error is given:
AssertionError: Imaginary component 2.0889527537971523e+111
I tried fixes from other implementations of FID, but increasing the dataset size to over 2048 images and trying different scipy versions did not fix it for me.
Is there any clue what else might be causing this?
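One mitigation used by some other FID implementations (pytorch-fid, for example, adds a small offset to the covariance diagonals when the plain square root comes back non-finite) can be sketched as follows. This is not torch-fidelity code; the function name stable_sqrtm_product and the eps value are illustrative:

```python
import numpy as np
from scipy import linalg

def stable_sqrtm_product(cov1, cov2, eps=1e-6):
    # sqrtm of the covariance product; if the plain computation comes back
    # non-finite (a common failure mode for singular inputs), retry with a
    # small offset on the diagonals before taking the real part.
    covmean = linalg.sqrtm(cov1 @ cov2)
    if not np.isfinite(covmean).all():
        offset = np.eye(cov1.shape[0]) * eps
        covmean = linalg.sqrtm((cov1 + offset) @ (cov2 + offset))
    return covmean.real if np.iscomplexobj(covmean) else covmean

# A rank-deficient covariance, similar to what tiny sample sizes produce.
c = np.diag([1.0, 0.0])
out = stable_sqrtm_product(c, c)
print(np.isfinite(out).all())
```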