rperdon closed this issue 4 years ago
Most likely the name of one of your variables is wrong. Make sure your dataset folder, weights, e_hat, etc. have the correct names when declared.
I haven't touched the code yet, I just downloaded the git repo and tried running it.
You have to touch the code. The necessary paths are declared in several of the scripts.
I have all the correct paths now. I renamed the VoxCeleb2 folder from dev to mp4 (the folder name that gave the initial error) and pulled the mp4 directory of the VoxCeleb2 set out into the main path of the project. All the appropriate files are in the correct folders.
```python
path_to_chkpt = 'model_weights.tar'
path_to_backup = 'backup_model_weights.tar'  # missing this file, but it is a backup
dataset = VidDataSet(K=8, path_to_mp4='mp4', device=device)
criterionG = LossG(VGGFace_body_path='Pytorch_VGGFACE_IR.py',
                   VGGFace_weight_path='Pytorch_VGGFACE.pth', device=device)
```
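Since several of these paths are hard-coded, a quick pre-flight check can save a failed run. A minimal standard-library sketch (file names taken from the snippet above; adjust to your own layout):

```python
import os

def missing_paths(mp4_dir='mp4',
                  files=('Pytorch_VGGFACE_IR.py', 'Pytorch_VGGFACE.pth')):
    """Return the required dataset folder / weight files that are absent.

    model_weights.tar is deliberately not checked: train.py creates it
    on the first run if it does not exist yet.
    """
    missing = []
    if not os.path.isdir(mp4_dir):
        missing.append(mp4_dir)
    missing.extend(f for f in files if not os.path.isfile(f))
    return missing

if __name__ == '__main__':
    print('missing:', missing_paths())
```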
After confirming the correct folders and path to vox2celeb, I now get this error:
```
RuntimeError: Error(s) in loading state_dict for Discriminator:
    size mismatch for W_i: copying a param with shape torch.Size([512, 145740]) from checkpoint, the shape in current model is torch.Size([512, 1092009]).
```
So is this an indication of a problem with one of the files from the VoxCeleb2 set? I get the same error with the VoxCeleb2 test set as well.
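For context (my reading of the code, so treat it as an assumption): the second dimension of the Discriminator's `W_i` embedding matrix tracks the number of training videos, so a checkpoint saved against one dataset folder cannot be strictly loaded once the folder contents change. A plain-Python sketch of the shape comparison that `load_state_dict` effectively performs, with dicts of shape tuples standing in for real state dicts:

```python
def find_shape_mismatches(checkpoint_shapes, model_shapes):
    """Return parameter names whose saved shape differs from the model's."""
    return [name for name, shape in checkpoint_shapes.items()
            if model_shapes.get(name) != shape]

# Illustrative shapes from the error above: W_i was saved against one
# dataset (145740 videos) but the model was rebuilt against another.
ckpt = {'W_i': (512, 145740)}
model = {'W_i': (512, 1092009)}
print(find_shape_mismatches(ckpt, model))  # -> ['W_i']
```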
You might be using the old trained weights with the save_disc branch
If you want, delete your old model weights and retry
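To spell that out, a small sketch that removes the stale checkpoints (file names as declared in train.py) so the next run rebuilds them against the current dataset:

```python
import os

def remove_stale_weights(paths=('model_weights.tar', 'backup_model_weights.tar')):
    """Delete old checkpoint files if present; train.py recreates them."""
    removed = []
    for p in paths:
        if os.path.isfile(p):
            os.remove(p)
            removed.append(p)
    return removed

if __name__ == '__main__':
    print('removed:', remove_stale_weights())
```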
I've deleted the model_weights.tar file and am re-running train.py.
Deleting the old model weights worked.
```
  File "train.py", line 91, in <module>
    for i_batch, (f_lm, x, g_y, i) in enumerate(dataLoader, start=i_batch_current):
  File "/opt/conda/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 345, in __next__
    data = self._next_data()
  File "/opt/conda/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 385, in _next_data
    data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
  File "/opt/conda/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/opt/conda/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/media/shared/Realistic-Neural-Talking-Head-Models-master/dataset/dataset_class.py", line 42, in __getitem__
    frame_mark = frame_mark.transpose(2,4).to(self.device)  # K,2,3,224,224
IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 2)
```
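The `transpose(2, 4)` call assumes `frame_mark` is a 5-D tensor (K, 2, 224, 224, 3 per the inline comment); the "expected to be in range of [-1, 0]" message means it arrived with at most one dimension, which my guess is happens when frame/landmark extraction produced nothing for a video. A standard-library sketch of the dimension check, with nested lists standing in for tensors:

```python
def ndim(x):
    """Nesting depth of a nested list -- a stand-in for torch.Tensor.dim()."""
    depth = 0
    while isinstance(x, list) and x:
        depth += 1
        x = x[0]
    return depth

# transpose(2, 4) needs at least 5 dimensions. An empty result from the
# dataset (e.g. no frames/landmarks extracted for a video) has too few,
# hence the "Dimension out of range" IndexError.
frame_mark_ok = [[[[[0]]]]]   # 5-D, like (K, 2, 224, 224, 3)
frame_mark_bad = []           # empty: too few dims to transpose(2, 4)
```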
I downloaded the VoxCeleb2 set and tried to run train.py, but kept getting this error. I tried it on the VoxCeleb2 test set and got the same error.
Also, when running webcam_inference, I get only a black screen for the fake image.