kwotsin / mimicry

[CVPR 2020 Workshop] A PyTorch GAN library that reproduces research results for popular GANs.

Bugs in README.md and documentation about evaluate() #40

Closed rainbowtp closed 2 years ago

rainbowtp commented 3 years ago

The argument passed to torch_mimicry.metrics.evaluate() should be dataset, not dataset_name.

Error: fid_score() got an unexpected keyword argument 'dataset_name'

README.md:

import torch
import torch.optim as optim
import torch_mimicry as mmc
from torch_mimicry.nets import sngan

# Data handling objects
... ...

# Start training
... ...

# Evaluate fid
mmc.metrics.evaluate(
    metric='fid',
    log_dir='./log/example',
    netG=netG,
    dataset_name='cifar10',     # should be dataset='cifar10'
    num_real_samples=50000,
    num_fake_samples=50000,
    evaluate_step=100000,
    device=device)
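
For reference, the corrected call would look like this (all other arguments unchanged; netG and device come from the elided setup steps above):

mmc.metrics.evaluate(
    metric='fid',
    log_dir='./log/example',
    netG=netG,
    dataset='cifar10',          # corrected keyword
    num_real_samples=50000,
    num_fake_samples=50000,
    evaluate_step=100000,
    device=device)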

Documentation (the "# Evaluate fid" example):

I think this is the cause: the function signature names the argument dataset:

def fid_score(num_real_samples,
              num_fake_samples,
              netG,
              dataset,      # the keyword passed to torch_mimicry.metrics.evaluate() must match this argument name
              seed=0,
              device=None,
              batch_size=50,
              verbose=True,
              stats_file=None,
              log_dir='./log'):
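
A minimal sketch of why the stale keyword fails, assuming evaluate() forwards its extra keyword arguments to the chosen metric function (this dispatch is illustrative only, not mimicry's actual implementation):

def evaluate(metric, **kwargs):
    # Illustrative only: extra keywords are forwarded verbatim, so a
    # stale name like dataset_name reaches fid_score() unchanged and
    # Python raises a TypeError there.
    if metric == 'fid':
        return fid_score(**kwargs)

# evaluate(metric='fid', dataset_name='cifar10', ...)
# -> TypeError: fid_score() got an unexpected keyword argument 'dataset_name'
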
SuperbTUM commented 2 years ago

Agreed. Also, I am not sure what the stats_file argument means, since it appears to be mandatory when a custom dataset is used.
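
For what it's worth, in comparable FID implementations a stats_file usually points to a cache of precomputed real-data Inception statistics, which would explain why it is required when the library has no built-in statistics for a custom dataset. A hypothetical call under that assumption (my_custom_dataset and the cache path are made up for illustration):

# Hypothetical usage, assuming stats_file caches precomputed real-data
# statistics for datasets without built-in support.
score = fid_score(
    num_real_samples=10000,
    num_fake_samples=10000,
    netG=netG,
    dataset=my_custom_dataset,                 # hypothetical custom dataset object
    stats_file='./log/custom_fid_stats.npz',   # hypothetical cache path
    device=device)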

kwotsin commented 2 years ago

Thanks for raising this issue. I updated the library some time back but forgot to update the README, so it was still using an old argument that is no longer valid. I've fixed this in the linked PR.

Please reinstall with pip install git+https://github.com/kwotsin/mimicry.git to get the latest version. Thank you!