sergeytulyakov / mocogan

MoCoGAN: Decomposing Motion and Content for Video Generation

EOFError: Ran out of input #20

Open momo1986 opened 5 years ago

momo1986 commented 5 years ago

I am trying to run it with Python 3; however, the following error is reported:

python train.py --image_batch 32 --video_batch 32 --use_infogan --use_noise --noise_sigma 0.1 --image_discriminator PatchImageDiscriminator --video_discriminator CategoricalVideoDiscriminator --print_every 100 --every_nth 2 --dim_z_content 50 --dim_z_motion 10 --dim_z_category 4 /slow/junyan/VideoSynthesis/mocogan/data/actions logs/actions

{'--batches': '100000', '--dim_z_category': '4', '--dim_z_content': '50', '--dim_z_motion': '10', '--every_nth': '2', '--image_batch': '32', '--image_dataset': '', '--image_discriminator': 'PatchImageDiscriminator', '--image_size': '64', '--n_channels': '3', '--noise_sigma': '0.1', '--print_every': '100', '--use_categories': False, '--use_infogan': True, '--use_noise': True, '--video_batch': '32', '--video_discriminator': 'CategoricalVideoDiscriminator', '--video_length': '16', '': '/slow/junyan/VideoSynthesis/mocogan/data/actions', '': 'logs/actions'}

/root/anaconda3/lib/python3.6/site-packages/torchvision/transforms/transforms.py:188: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.
  "please use transforms.Resize instead.")

/slow/junyan/VideoSynthesis/mocogan/data/actions/local.db

Traceback (most recent call last):
  File "train.py", line 104, in <module>
    dataset = data.VideoFolderDataset(args[''], cache=os.path.join(args[''], 'local.db'))
  File "/slow/junyan/VideoSynthesis/mocogan/src/data.py", line 24, in __init__
    print(pickle.load(f))
EOFError: Ran out of input
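For context, this is exactly the error `pickle.load` raises when the cache file exists but is empty. A minimal standalone reproduction (not from the repo; the temp file stands in for a truncated `local.db`):

```python
import os
import pickle
import tempfile

# Create an empty (0-byte) file to stand in for a truncated local.db
fd, empty_cache = tempfile.mkstemp()
os.close(fd)

caught = None
try:
    with open(empty_cache, "rb") as f:
        pickle.load(f)  # zero bytes to read -> EOFError
except EOFError as exc:
    caught = exc        # EOFError: Ran out of input
finally:
    os.remove(empty_cache)

print(type(caught).__name__, caught)
```

So the traceback points at an empty `local.db`, not at the pickle data itself being corrupt.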

Here is the relevant code:

class VideoFolderDataset(torch.utils.data.Dataset):
    def __init__(self, folder, cache, min_len=32):
        dataset = ImageFolder(folder)
        self.total_frames = 0
        self.lengths = []
        self.images = []
        print(cache)
        if cache is not None and os.path.exists(cache):
            with open(cache, 'rb') as f:
                print(pickle.load(f))
        else:
            for idx, (im, categ) in enumerate(
                    tqdm.tqdm(dataset, desc="Counting total number of frames")):
                img_path, _ = dataset.imgs[idx]
                shorter, longer = min(im.width, im.height), max(im.width, im.height)
                length = longer // shorter
                if length >= min_len:
                    self.images.append((img_path, categ))
                    self.lengths.append(length)

            if cache is not None:
                with open(cache, 'wb') as f:
                    pickle.dump((self.images, self.lengths), f)

        self.cumsum = np.cumsum([0] + self.lengths)
        print("Total number of frames {}".format(np.sum(self.lengths)))
Aniket1998 commented 5 years ago

Facing a similar issue with the Weizmann Action Dataset for batch sizes larger than 64.

vladyushchenko commented 5 years ago

The maximum batch size depends on the dataset and your config. The Weizmann Action Dataset has 72 videos, and since drop_last=True is set in both the image loader and the video loader, the maximum usable batch size is the dataset length.

To work around this, you can duplicate the data until it covers the batch size you need (e.g. for batch_size = 128, duplicating once gives 72 * 2 = 144 > 128). Note that simply setting drop_last=False will not solve the issue.
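To illustrate the arithmetic (a sketch, not MoCoGAN code; `num_batches` is a hypothetical helper mirroring how a DataLoader counts batches):

```python
def num_batches(dataset_len, batch_size, drop_last=True):
    """Number of batches a loader yields from a dataset of the given size."""
    if drop_last:
        # Partial final batch is silently discarded.
        return dataset_len // batch_size
    # Partial final batch is kept.
    return (dataset_len + batch_size - 1) // batch_size

# Weizmann has 72 videos:
print(num_batches(72, 64))    # 1
print(num_batches(72, 128))   # 0 -> the loader yields nothing at all
print(num_batches(144, 128))  # 1 -> duplicating the data restores a batch
```

With batch_size > dataset length and drop_last=True the loader yields zero batches, which is why duplicating the data (rather than flipping drop_last, which would only produce one undersized batch) is the suggested workaround.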

disanda commented 4 years ago

I solved the problem by editing data.py at line 22, changing

if cache is not None and os.path.exists(cache):

to

if (cache is not None) and os.path.exists(cache) and (os.path.getsize(cache) != 0):

The cache file may exist as a 0-byte file (e.g. left behind by an interrupted run), and unpickling an empty file raises EOFError. With the size check, an empty cache falls through to the else branch, so the dataset index is rebuilt and the cache rewritten.
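A minimal sketch of this guard (`cache_is_usable` is a hypothetical helper, not part of data.py; it keeps the original existence check so `os.path.getsize` is never called on a missing path):

```python
import os
import tempfile

def cache_is_usable(cache):
    """True only if the cache path is set, exists, and is non-empty."""
    return (cache is not None
            and os.path.exists(cache)
            and os.path.getsize(cache) > 0)

# Demo with a fresh 0-byte file standing in for a truncated local.db:
fd, path = tempfile.mkstemp()
os.close(fd)
empty_result = cache_is_usable(path)  # False: exists but 0 bytes
with open(path, "wb") as f:
    f.write(b"x")
full_result = cache_is_usable(path)   # True: exists and non-empty
os.remove(path)

print(cache_is_usable(None), empty_result, full_result)
```

Replacing the condition at data.py line 22 with this check makes an empty cache behave like a missing one, so the rebuild branch runs instead of crashing in pickle.load.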