Closed: wanshun123 closed this issue 5 years ago
No. You could simply run it on 10 in eval mode (with torch.no_grad()), save the result, and then run it on the next 9. Then you can do the evaluation, or whatever else you need, post hoc.
If you want to do this for training, then yes, you would probably need to do some refactoring if it doesn't fit in memory.
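A minimal sketch of that chunked-inference idea, assuming the run_batch(source_images, driver_images) call from the notebook and that the driving frames are stacked along the first tensor dimension; the chunk size and the concatenation of results are illustrative, not part of the repo:

```python
import torch

def run_in_chunks(source_images, driver_images, chunk_size=10):
    """Run the notebook's run_batch on small groups of driving frames at a time."""
    outputs = []
    with torch.no_grad():  # eval / no-grad mode: no activations kept for backprop, so far less GPU memory
        for start in range(0, driver_images.size(0), chunk_size):
            chunk = driver_images[start:start + chunk_size]
            result = run_batch(source_images, chunk)  # run_batch is defined in Face2Face_UnwrapMosaic.ipynb
            outputs.append(result.cpu())  # move each chunk's output off the GPU before processing the next
    return torch.cat(outputs, dim=0)
```

Moving each partial result to the CPU (or saving it to disk) before the next chunk is what keeps the GPU footprint bounded, so several hundred frames can be processed without refactoring the model itself.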
Thanks for uploading the pretrained model and notebooks. I'm testing Face2Face_UnwrapMosaic.ipynb to try to drive facial expressions with another face, and I'm getting the following error on the
result = run_batch(source_images, driver_images)
line if I try to use more than 19 driving images (on 1 source image):

My GPU is a Quadro M4000 and I'm testing with 256x256 driving images (the source image is taken from the example in this repo). Is significant refactoring required to run on several hundred driving images (i.e. every frame of a short video)?