Closed netrunner-exe closed 2 years ago
My output looks like this. Have you encountered similar problems?
THANK YOU SO MUCH FOR PROVIDING THIS CODE!!
Thank you very much for your great work! It works well.
Hi, as you said:

> Also if you change mode = 'ffhq' to mode = 'None' in test_wholeimage_swapsingle and test_videoswapsingle, it looks more natural.

I am confused: you said ffhq_face_aligned was used when the model was trained, so will arc_face_align really be better than ffhq_face_align when testing the model?
I don't think I said anywhere that I used ffhq_face_aligned to train the model (as you say). Moreover, I didn't train the model at all; I used a model that another user posted for testing. In this case, 'none' or 'ffhq' refers to the mode, i.e. exactly how the face is cropped and aligned before it is sent for further processing. Which mode to use depends on how the dataset the model was trained on was cropped and aligned.
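To make the mode distinction concrete, here is a minimal sketch (the function and template names are hypothetical, not SimSwap's actual API) of how a crop-mode flag could select between the two alignment templates:

```python
# Hypothetical sketch only: illustrates that the crop mode flag selects
# an alignment template; names are illustrative, not SimSwap's code.
def pick_align_template(mode):
    """'ffhq' -> FFHQ-style alignment; 'None'/None -> ArcFace-style alignment."""
    if mode == 'ffhq':
        return 'ffhq_template'
    if mode in (None, 'None', 'none'):
        return 'arcface_template'
    raise ValueError('unknown crop mode: %r' % (mode,))
```

The point is simply that the flag at test time must match whatever alignment was used to build the training dataset.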
I see. Maybe the model that the other user posted was trained using arc_face_align.
Hi all. I did a little research to make the test code compatible with the new training model. I really hope that @neuralchen or @NNNNAI will use this research to adapt the code in the repository so that everything works perfectly! Many thanks to @boreas-l for the idea and hints on how to implement it. I was not able to get some parts working, so please improve them so everything works properly!
SimSwap/options/test_options.py
SimSwap/util/swap_new_model.py
test_wholeimage_swapsingle.py
as an example, making small changes to work with the new model while keeping compatibility with the old ones. One caveat: if you are using the beta 512 model, you will need to add `--name 512` instead of only `--crop_size 512` to make the beta 512 model work in the future.

```python
def lcm(a, b):
    return abs(a * b) / fractions.gcd(a, b) if a and b else 0
```
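Note that `fractions.gcd` was removed in Python 3.9, so the `lcm` helper above only runs on older interpreters. A drop-in sketch using `math.gcd` instead (with integer division, since the helper is used to combine frame rates) might look like:

```python
import math

def lcm(a, b):
    # least common multiple; math.gcd replaces the removed fractions.gcd
    return abs(a * b) // math.gcd(a, b) if a and b else 0
```

On Python 3.9+ you could also just call `math.lcm(a, b)` directly.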
```python
transformer_Arcface = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])

def _totensor(array):
    tensor = torch.from_numpy(array)
    img = tensor.transpose(0, 1).transpose(0, 2).contiguous()
    return img.float().div(255)

if __name__ == '__main__':
    opt = TestOptions().parse()
    start_epoch, epoch_iter = 1, 0
    crop_size = opt.crop_size
```
```python
'''
Author: Naiyuan liu
Github: https://github.com/NNNNAI
Date: 2021-11-23 17:03:58
LastEditors: Naiyuan liu
LastEditTime: 2021-11-24 19:00:38
Description:
'''
import cv2
import torch
import fractions
import numpy as np
from PIL import Image
import torch.nn.functional as F
from torchvision import transforms
from models.models import create_model
from models.projected_model import fsModel
from options.test_options import TestOptions
from insightface_func.face_detect_crop_single import Face_detect_crop
from util.videoswap import video_swap
import os
```
```python
def lcm(a, b):
    return abs(a * b) / fractions.gcd(a, b) if a and b else 0

transformer = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])

transformer_Arcface = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])

detransformer = transforms.Compose([
    transforms.Normalize([0, 0, 0], [1/0.229, 1/0.224, 1/0.225]),
    transforms.Normalize([-0.485, -0.456, -0.406], [1, 1, 1])
])

if __name__ == '__main__':
    opt = TestOptions().parse()
    start_epoch, epoch_iter = 1, 0
    crop_size = opt.crop_size
```
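As a sanity check on `detransformer`: `Normalize(mean, std)` computes `(x - mean) / std`, so the two chained `Normalize` calls first multiply by the original std and then add the mean back, inverting the ImageNet normalization applied by `transformer`. A small numpy check of that algebra:

```python
import numpy as np

mean = np.array([0.485, 0.456, 0.406])
std = np.array([0.229, 0.224, 0.225])

x = np.array([0.2, 0.5, 0.8])        # example RGB pixel in [0, 1]
normalized = (x - mean) / std        # what transformer/transformer_Arcface do

# detransformer's two Normalize steps, written out:
step1 = (normalized - 0.0) / (1.0 / std)   # Normalize([0,0,0], 1/std): multiply by std
step2 = (step1 - (-mean)) / 1.0            # Normalize(-mean, [1,1,1]): add mean back

assert np.allclose(step2, x)         # the original pixel is recovered
```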
```python
'''
Author: Naiyuan liu
Github: https://github.com/NNNNAI
Date: 2021-11-23 17:03:58
LastEditors: Naiyuan liu
LastEditTime: 2021-11-24 19:19:52
Description:
'''
import os
import cv2
import glob
import torch
import shutil
import numpy as np
from tqdm import tqdm
from util.reverse2original import reverse2wholeimage
import moviepy.editor as mp
from moviepy.editor import AudioFileClip, VideoFileClip
from moviepy.video.io.ImageSequenceClip import ImageSequenceClip
import time
from util.add_watermark import watermark_image
from util.norm import SpecificNorm
from util.swap_new_model import swap_result_new_model
from parsing_model.model import BiSeNet
```
```python
def _totensor(array):
    tensor = torch.from_numpy(array)
    img = tensor.transpose(0, 1).transpose(0, 2).contiguous()
    return img.float().div(255)

def video_swap(video_path, id_vetor, swap_model, detect_model, save_path,
               temp_results_dir='./temp_results', crop_size=224,
               no_simswaplogo=False, use_mask=False, new_model=False):
    video_forcheck = VideoFileClip(video_path)
    if video_forcheck.audio is None:
        no_audio = True
    else:
        no_audio = False
```
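`_totensor` converts OpenCV's HWC uint8 layout into the CHW float layout the network expects. The same shape/scale transform in plain numpy (a torch-free sketch, just to show what the two transposes do):

```python
import numpy as np

def to_chw_float(array):
    # HWC uint8 (cv2.imread output) -> CHW float32 in [0, 1],
    # equivalent to _totensor's transpose(0, 1).transpose(0, 2) then div(255)
    return np.ascontiguousarray(array.transpose(2, 0, 1)).astype(np.float32) / 255.0

img = np.zeros((4, 5, 3), dtype=np.uint8)
img[..., 0] = 255                    # fill channel 0 (B in OpenCV's BGR order)
out = to_chw_float(img)              # shape (3, 4, 5), channel 0 all ones
```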