odegeasslbc / FastGAN-pytorch

Official implementation of the paper "Towards Faster and Stabilized GAN Training for High-fidelity Few-shot Image Synthesis" in ICLR 2021
GNU General Public License v3.0

Bug Report: FileNotFoundError for 60000.pth when using default instructions in readme.md #51

Open haltingstate opened 1 year ago

haltingstate commented 1 year ago

FileNotFoundError: [Errno 2] No such file or directory: './models/60000.pth'

For both

python eval.py --n_sample 1

and

python eval.py --n_sample 100 --start_iter 0 --end_iter 50000
Traceback (most recent call last):
  File "/home/ml1/FastGAN-pytorch/train_results/test1/eval.py", line 67, in <module>
    checkpoint = torch.load(ckpt, map_location=lambda a,b: a)
  File "/home/ml1/anaconda3/lib/python3.9/site-packages/torch/serialization.py", line 699, in load
    with _open_file_like(f, 'rb') as opened_file:
  File "/home/ml1/anaconda3/lib/python3.9/site-packages/torch/serialization.py", line 230, in _open_file_like
    return _open_file(name_or_buffer, mode)
  File "/home/ml1/anaconda3/lib/python3.9/site-packages/torch/serialization.py", line 211, in __init__
    super(_open_file, self).__init__(open(name, mode))
FileNotFoundError: [Errno 2] No such file or directory: './models/60000.pth'

Appears to be a bug. No range check?
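For context on why the default command looks for 60000.pth: eval.py builds checkpoint paths as `multiplier * i` for `i` in `[start_iter, end_iter]`. A minimal sketch of that path construction (hypothetical helper name; defaults taken from the argument parser quoted later in this thread):

```python
def checkpoint_paths(start_iter=6, end_iter=10, multiplier=10000, artifacts='.'):
    # eval.py tries to load one checkpoint per entry in this list;
    # with the defaults the first file it looks for is ./models/60000.pth
    return [f"{artifacts}/models/{multiplier * i}.pth"
            for i in range(start_iter, end_iter + 1)]

print(checkpoint_paths()[0])  # ./models/60000.pth
```

So unless training ran for at least 60000 iterations (or the flags are adjusted), the very first `torch.load` call hits a missing file.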

werowe commented 1 year ago

I got the same error.

godisme1220 commented 4 months ago

see these lines in eval.py:

    parser.add_argument('--start_iter', type=int, default=6)
    parser.add_argument('--end_iter', type=int, default=10)
    parser.add_argument('--multiplier', type=int, default=10000, help='multiplier for model number')

I also encountered the same problem. It seems the author expects us to use checkpoint files starting from the 60000-iteration .pth file, but I only trained for about 5000 iterations.

So I changed the image-generation command from readme.md to:

    python eval.py --start_iter 5 --end_iter 5 --n_sample 16 --multiplier 1000

This starts the scan at iteration 5 × 1000 and ends the scan at iteration 5 × 1000.

With that change the error goes away and the images are generated.
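To make the arithmetic explicit: the script scans files named `multiplier * i` for `i` from `start_iter` to `end_iter`, so these flags select exactly the 5000-iteration checkpoint. A quick check (hypothetical helper name):

```python
def scanned_checkpoints(start_iter, end_iter, multiplier):
    # the files eval.py will try to load for the given CLI flags
    return [f"./models/{multiplier * i}.pth"
            for i in range(start_iter, end_iter + 1)]

print(scanned_checkpoints(5, 5, 1000))  # ['./models/5000.pth']
```

The general rule: pick `start_iter`, `end_iter`, and `multiplier` so that every `multiplier * i` matches a checkpoint your training run actually saved.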

Best regards, Eugene.

godisme1220 commented 4 months ago


Check my solution above; hope it helps.

pranavM2703 commented 2 days ago

Please open eval.py and look at the argument parsing. Please refer to my implementation; it is also capable of handling such errors (missing checkpoint files are skipped instead of crashing):

import torch
from torch import nn
from torch import optim
import torch.nn.functional as F
from torchvision.datasets import ImageFolder
from torch.utils.data import DataLoader
from torchvision import utils as vutils

import os
import random
import argparse
from tqdm import tqdm

from models import Generator

def load_params(model, new_param):
    for p, new_p in zip(model.parameters(), new_param):
        p.data.copy_(new_p)

def resize(img, size=256):
    return F.interpolate(img, size=size)

def batch_generate(zs, netG, batch=8):
    g_images = []
    with torch.no_grad():
        for i in range(len(zs)//batch):
            g_images.append(netG(zs[i*batch:(i+1)*batch]).cpu())
        if len(zs)%batch > 0:
            g_images.append(netG(zs[-(len(zs)%batch):]).cpu())
    return torch.cat(g_images)

def batch_save(images, folder_name):
    if not os.path.exists(folder_name):
        os.mkdir(folder_name)
    for i, image in enumerate(images):
        vutils.save_image(image.add(1).mul(0.5), folder_name+'/%d.jpg'%i)

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description='generate images')
    parser.add_argument('--ckpt', type=str)
    parser.add_argument('--artifacts', type=str, default=".", help='path to artifacts.')
    parser.add_argument('--cuda', type=int, default=0, help='index of gpu to use')
    parser.add_argument('--start_iter', type=int, default=5)
    parser.add_argument('--end_iter', type=int, default=10)
    parser.add_argument('--dist', type=str, default='.')
    parser.add_argument('--size', type=int, default=256)
    parser.add_argument('--batch', default=16, type=int, help='batch size')
    parser.add_argument('--n_sample', type=int, default=2000)
    parser.add_argument('--big', action='store_true')
    parser.add_argument('--im_size', type=int, default=1024)
    parser.add_argument('--multiplier', type=int, default=10000, help='multiplier for model number')
    parser.set_defaults(big=False)
    args = parser.parse_args()

    noise_dim = 256
    device = torch.device('cuda:%d'%(args.cuda))

    net_ig = Generator(ngf=64, nz=noise_dim, nc=3, im_size=args.im_size)#, big=args.big )
    net_ig.to(device)

    for epoch in [args.multiplier*i for i in range(args.start_iter, args.end_iter+1)]:
        ckpt = f"{args.artifacts}/models/{epoch}.pth"
        if not os.path.exists(ckpt):
            # Skip missing checkpoints instead of crashing with FileNotFoundError.
            print("Does not exist", epoch)
            continue
        checkpoint = torch.load(ckpt, map_location=lambda a,b: a)
        # Remove the `module.` prefix left by DataParallel.
        checkpoint['g'] = {k.replace('module.', ''): v for k, v in checkpoint['g'].items()}
        net_ig.load_state_dict(checkpoint['g'])
        #load_params(net_ig, checkpoint['g_ema'])

        #net_ig.eval()
        print('load checkpoint success, epoch %d'%epoch)

        net_ig.to(device)

        del checkpoint

        dist = 'eval_%d'%(epoch)
        dist = os.path.join(dist, 'img')
        os.makedirs(dist, exist_ok=True)

        with torch.no_grad():
            for i in tqdm(range(args.n_sample//args.batch)):
                noise = torch.randn(args.batch, noise_dim).to(device)
                g_imgs = net_ig(noise)[0]
                g_imgs = resize(g_imgs, args.im_size)  # resize the image to the given dimension
                for j, g_img in enumerate(g_imgs):
                    vutils.save_image(g_img.add(1).mul(0.5),
                        os.path.join(dist, '%d.png'%(i*args.batch+j)))#, normalize=True, range=(-1,1))
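The part that actually addresses this issue is the `os.path.exists` guard: missing checkpoints are reported and skipped rather than letting `torch.load` raise `FileNotFoundError`. That skip logic can be demonstrated in isolation with the standard library only (hypothetical helper name; fake empty .pth file stands in for a real checkpoint):

```python
import os
import tempfile

def existing_checkpoints(epochs, models_dir):
    # Keep only epochs whose .pth file is actually on disk,
    # mirroring the existence check + continue in the loop above.
    kept = []
    for epoch in epochs:
        ckpt = os.path.join(models_dir, f"{epoch}.pth")
        if not os.path.exists(ckpt):
            print("Does not exist", epoch)
            continue
        kept.append(epoch)
    return kept

with tempfile.TemporaryDirectory() as d:
    # Fake a single saved checkpoint at iteration 5000.
    open(os.path.join(d, "5000.pth"), "w").close()
    print(existing_checkpoints([5000, 60000], d))  # [5000]
```

With this guard, passing a too-wide `--start_iter`/`--end_iter` range is harmless: the script simply evaluates whichever checkpoints exist.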