AliaksandrSiarohin / monkey-net

Animating Arbitrary Objects via Deep Motion Transfer

How to make the motion transfer demo work for frame-by-frame prediction? #26

Closed kenmbkr closed 4 years ago

kenmbkr commented 4 years ago

Sometimes my video is too large to fit in GPU memory, so I would like to predict frame by frame.

I changed lines 64-70 of demo.py to the code below, but the output video looks static. I checked the values between consecutive frames and they do have subtle differences. Please advise how to modify the code so it predicts frame by frame.

```python
driving_video = torch.from_numpy(driving_video).unsqueeze(0)
source_image = driving_video[:, :, 0].unsqueeze(2)
out_video_batch = []
for frame_idx in range(driving_video.shape[2]):
    # Transfer each driving frame independently.
    driving_frame = driving_video[:, :, frame_idx, :, :].unsqueeze(2)
    out = transfer_one(generator, kp_detector, source_image, driving_frame, config['transfer_params'])
    out_video_batch.append(torch.squeeze(out['video_prediction']).permute(1, 2, 0).data.cpu().numpy())
```
AliaksandrSiarohin commented 4 years ago

So the simple hack would be something like this:

```python
driving_video = torch.from_numpy(driving_video).unsqueeze(0)
source_image = driving_video[:, :, 0].unsqueeze(2)
out_video_batch = []
for frame_idx in range(driving_video.shape[2]):
    # Pass the first frame together with the current one, so the keypoint
    # motion is computed relative to frame 0.
    driving_frame = driving_video[:, :, [0, frame_idx], :, :]
    out = transfer_one(generator, kp_detector, source_image, driving_frame, config['transfer_params'])
    # Keep only the prediction for the current frame (index 1); index 0 is the
    # reconstruction of the first frame.
    prediction = out['video_prediction'][:, :, 1]
    out_video_batch.append(torch.squeeze(prediction).permute(1, 2, 0).data.cpu().numpy())
```

So basically you need the first frame for the relative transfer; the output at index 1 is the prediction for the current driving frame.
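
After the loop, the per-frame predictions in out_video_batch can be stacked and written to disk. Below is a minimal sketch, not part of the original demo.py: it assumes imageio and scikit-image are installed, that the output file name prediction.gif is just a placeholder, and that each list entry is an H x W x C float array in [0, 1], as produced by the loop above.

```python
import imageio
import numpy as np
from skimage import img_as_ubyte

# Stack the per-frame H x W x C predictions into one (T, H, W, C) array
# and convert from float [0, 1] to uint8 before writing the video.
video = np.stack(out_video_batch, axis=0)
imageio.mimsave('prediction.gif', [img_as_ubyte(frame) for frame in video])
```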