because this model has the ability to do background isolation - this seems to make it trivial to separate the subject from the background, like a green screen. This is big - because when you swap a GAN-generated image into a video, you generally get artifacts, boundary borders, or a visible box around the dropped-in image.
in other words - take a video, run it through ffmpeg to extract all the frames, run a face detection pass on each frame, have SOFGAN generate the updated face image, then use its background isolation/segmentation as a green-screen matte when compositing back in - and you could get a high-quality face replacement.
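The compositing step above can be sketched as a soft alpha blend - a minimal example, assuming SOFGAN (or any segmentation pass) gives you a per-pixel face/background mask in [0, 1]; frame extraction itself would just be something like `ffmpeg -i input.mp4 frames/%05d.png`. The function name and toy arrays here are illustrative, not part of any real API:

```python
import numpy as np

def composite_face(frame, generated, mask):
    """Blend a generated face into a video frame using a soft
    segmentation mask (values in [0, 1]). Soft edges are what
    avoid the hard box/border artifacts of a naive paste."""
    mask = mask[..., None]  # broadcast mask over the RGB channels
    blended = mask * generated + (1.0 - mask) * frame
    return blended.astype(frame.dtype)

# Toy example: black 4x4 frame, white generated patch, and a mask
# that keeps only the top half of the generated image.
frame = np.zeros((4, 4, 3), dtype=np.uint8)
generated = np.full((4, 4, 3), 255, dtype=np.uint8)
mask = np.zeros((4, 4))
mask[:2, :] = 1.0

out = composite_face(frame, generated, mask)
```

With a binary mask this is a hard cut; in practice you would feather the mask edges (e.g. a Gaussian blur) so the boundary blends instead of showing a seam.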
fyi - @norod