mike9251 / simswap-inference-pytorch

Unofficial PyTorch implementation (inference only) of SimSwap: An Efficient Framework For High Fidelity Face Swapping.


Updates

Attention

This project is intended for technical and academic use only. Do not apply it to illegal or unethical scenarios.

The authors of this repository accept no liability for uses that violate the legal or ethical requirements of the user's country or region.

Preparation

Installation

```bash
# clone project
git clone https://github.com/mike9251/simswap-inference-pytorch
cd simswap-inference-pytorch

# [OPTIONAL] create conda environment
conda create -n myenv python=3.9
conda activate myenv

# install pytorch and torchvision following the instructions at
# https://pytorch.org/get-started/

# install requirements
pip install -r requirements.txt
```

Important

Face detection runs on the CPU by default. To run it on the GPU, install the ONNX GPU runtime:

```bash
pip install onnxruntime-gpu==1.11.1
```

and modify one line of code in `...Anaconda3\envs\myenv\Lib\site-packages\insightface\model_zoo\model_zoo.py`.

Instead of passing `None` as the second argument to the ONNX inference session:

```python
class ModelRouter:
    def __init__(self, onnx_file):
        self.onnx_file = onnx_file

    def get_model(self):
        session = onnxruntime.InferenceSession(self.onnx_file, None)
        input_cfg = session.get_inputs()[0]
```

pass a list of execution providers:

```python
class ModelRouter:
    def __init__(self, onnx_file):
        self.onnx_file = onnx_file

    def get_model(self):
        session = onnxruntime.InferenceSession(self.onnx_file, providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
        input_cfg = session.get_inputs()[0]
```

Otherwise, simply use the CPU ONNX runtime; the performance drop is minor.
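The key point of the edit above is that ONNX Runtime tries providers in order and falls back to later entries when an earlier one is unavailable, which is why `CPUExecutionProvider` is kept last in the list. A minimal sketch of that selection logic (illustrative only, not part of insightface or ONNX Runtime):

```python
def select_providers(cuda_available):
    """Build an ONNX Runtime provider list: prefer CUDA, keep CPU as fallback."""
    if cuda_available:
        # CUDA first; ONNX Runtime falls back to CPU for unsupported ops
        return ["CUDAExecutionProvider", "CPUExecutionProvider"]
    return ["CPUExecutionProvider"]
```

In a real setup you could populate the flag from `onnxruntime.get_available_providers()` rather than hardcoding it.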

Weights

Weights for all models are downloaded automatically.

You can also download the weights manually and put them inside the weights folder:

Inference

Web App

```bash
streamlit run app_web.py
```

Command line App

This repository supports inference in several modes, which can be easily configured with the config files in the configs folder.

Config files contain two main parts:

Overriding parameters from the command line

Every parameter in a config file can be overridden by specifying it directly on the command line. For example:

```bash
python app.py --config-name=run_image.yaml data.specific_id_image="path/to/the/image" pipeline.erosion_kernel_size=20
```
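In overrides like the one above, a dotted key such as `pipeline.erosion_kernel_size` addresses the `erosion_kernel_size` entry nested under the `pipeline` section of the YAML config. A minimal sketch of that mapping (illustrative only, not the actual Hydra/OmegaConf implementation):

```python
def apply_override(cfg, override):
    """Apply a single 'section.key=value' override to a nested config dict."""
    key, value = override.split("=", 1)
    *parents, leaf = key.split(".")
    node = cfg
    for part in parents:
        # descend into (or create) each intermediate section
        node = node.setdefault(part, {})
    node[leaf] = value  # values stay strings in this sketch
    return cfg
```

For example, `apply_override({}, "pipeline.erosion_kernel_size=20")` yields `{"pipeline": {"erosion_kernel_size": "20"}}`.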

Video

- Official 224x224 model, face alignment "none": [![Video](https://i.imgur.com/iCujdRB.jpg)](https://vimeo.com/728346715)
- Official 224x224 model, face alignment "ffhq": [![Video](https://i.imgur.com/48hjJO4.jpg)](https://vimeo.com/728348520)
- Unofficial 512x512 model, face alignment "none": [![Video](https://i.imgur.com/rRltD4U.jpg)](https://vimeo.com/728346542)
- Unofficial 512x512 model, face alignment "ffhq": [![Video](https://i.imgur.com/gFkpyXS.jpg)](https://vimeo.com/728349219)

License

For academic and non-commercial use only. The whole project is under the CC-BY-NC 4.0 license. See LICENSE for additional details.

Acknowledgements