💡 We also provide [中文文档 / CHINESE DOC] and [한국어 문서 / KOREAN DOC]. Contributions to this project are very welcome and appreciated.
Before getting started, please place the ImageNet-1K pretrained weight files in the ./pre_model
directory. The download links for the weights are provided below:
RepLKNet: https://drive.google.com/file/d/1vo-P3XB6mRLUeDzmgv90dOu73uCeLfZN/view?usp=sharing
ConvNeXt: https://dl.fbaipublicfiles.com/convnext/convnext_base_1k_384.pth
Place the training-set (*.txt), validation-set (*.txt), and label (*.txt) files required for training in the dataset folder, and give them matching file names (example txt files are provided under dataset).
For the two models used (RepLKNet and ConvNeXt), change the following parameters in main_train.py:
# For RepLKNet
cfg.network.name = 'replknet'; cfg.train.batch_size = 16
# For ConvNeXt
cfg.network.name = 'convnext'; cfg.train.batch_size = 24
After configuring the parameters, launch training:

bash main.sh

For single-GPU training, run:

CUDA_VISIBLE_DEVICES=0 python main_train_single_gpu.py
In merge.py, replace the paths to the trained ConvNeXt and RepLKNet models, then execute python merge.py to obtain the final inference/test model.
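merge.py's internals are not shown here; since the two backbones differ, one common way to combine them is to average their per-image deepfake scores (an assumption — merge.py may instead bundle both models into a single inference module). A minimal sketch of score-level ensembling:

```python
def ensemble_scores(scores, weights=None):
    """Combine per-model deepfake scores by weighted average.
    Hypothetical sketch - not necessarily what merge.py does."""
    if weights is None:
        # Default to a plain mean over all models.
        weights = [1.0 / len(scores)] * len(scores)
    if len(weights) != len(scores):
        raise ValueError("one weight per model score is required")
    return sum(w * s for w, s in zip(weights, scores))
```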
The following example sends a POST request with the image path as the request parameter; the response is the deepfake score predicted by the model.
#!/usr/bin/env python
# -*- coding:utf-8 -*-
import json

import requests
header = {
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.107 Safari/537.36'
}
url = 'http://ip:10005/inter_api'  # replace "ip" with the server address
image_path = './dataset/val_dataset/51aa9b8d0da890cd1d0c5029e3d89e3c.jpg'
data_map = {'img_path':image_path}
response = requests.post(url, data=json.dumps(data_map), headers=header)
content = response.content
print(json.loads(content))
Build the Docker image and start the container:

sudo docker build -t vision-rush-image:1.0.1 --network host .
sudo docker run -d --name vision_rush_image --gpus=all --net host vision-rush-image:1.0.1