Closed zkxcongming closed 2 years ago
In the CelebDFClips class' __getitem__ method, the sample is cast to torch.Tensor (line 165 in data/dataset_clips.py) before the transform is applied, and torchvision.transforms.CenterCrop can handle torch.Tensor inputs. Perhaps the torchvision version you have installed is older? You can install from the requirements.txt file to make sure.
Also, have you preprocessed the mouth crops according to the instructions? Do your mouth crops look similar to the examples in the examples directory?
I updated torchvision and everything works now; I got 80.8% AUC on the CelebDF test set. It was a preprocessing problem: I had extracted landmarks on the already-cropped faces, so my mouth crops looked different from your examples. Problem solved. Thank you!
I followed your instructions to prepare the CelebDF test dataset, trying to validate lipforensics_ff.pth on CelebDF. However, I came across:
I noticed that the transform input is an np.ndarray but a PIL Image is required, so I substituted CenterCrop with VideoCenterCrop:
Then I ran
It works, but I only got an AUC of 57.7% on CelebDF. I am wondering which step I got wrong. I would really appreciate any advice. Thank you!