SoftwareGift / FeatherNets_Face-Anti-spoofing-Attack-Detection-Challenge-CVPR2019

Code for the 3rd place solution in the Face Anti-spoofing Attack Detection Challenge @ CVPR2019. The model is only 0.35M and runs in 1.88 ms on CPU.

Given an input face image, how do I predict whether it is a real person or a photo? #72

Open zhishao opened 4 years ago

zhishao commented 4 years ago

After feeding an image into the FeatherNet network I get a 1024-dimensional vector. What should I do with it next?

SoftwareGift commented 4 years ago

(screenshot) Look at the test code; it is really not much different from ordinary prediction code. You can also just print the network's output and see for yourself.
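For reference, a minimal sketch of printing the network output during inference, assuming `model` is the loaded FeatherNet and `transform` is the validation preprocessing used in this repo; the file name is a placeholder:

import torch
from PIL import Image

with torch.no_grad():                                          # inference only, no gradients
    x = transform(Image.open('./images/face.jpg')).unsqueeze(0)   # 1 x 3 x 224 x 224
    out = model(x)
    print(out.shape)                                           # inspect the raw output dimensions
    probs = torch.softmax(out, dim=-1)
    print(probs.argmax(dim=1).item())                          # index of the highest score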

zhishao commented 4 years ago

Thank you! But I tried several real and fake images, and the output is always [0].

image1 = Image.open('./image/f01.jpg')
image1 = transform(image1).unsqueeze(0)
output1 = net(image1)
soft_output = torch.softmax(output1, dim=-1)
preds = soft_output.to('cpu').detach().numpy()
_, predicted = torch.max(soft_output.data, 1)
predicted = predicted.to('cpu').detach().numpy()
print(predicted)

msvargas commented 4 years ago

It's amazing, it's working! Thanks. I used the code below: it returns 1 for a fake photo and 0 for a real face [cropped face required].

The code prints this output:

./images/fake.jpg: FAKE!
./images/real.jpg: REAL

P.S.: I'd like to export this to tensorflow.js, any recommendations? Thanks.

import sys
sys.path.insert(0, '.')
sys.path.insert(0, '..')
import torch
# The TensorFlow / pytorch2keras imports below are only needed for the
# planned tensorflow.js export, not for inference itself.
import tensorflow as tf
from pytorch2keras.converter import pytorch_to_keras
from tensorflow.keras.models import Model
from models import FeatherNet
from torchsummary import summary
from PIL import Image
import torchvision.transforms as transforms

def check_spoofing(image_path):
    # Preprocess the image and run it through the network.
    image1 = Image.open(image_path)
    image1 = transform(image1).unsqueeze(0)
    output1 = model(image1)
    soft_output = torch.softmax(output1, dim=-1)
    preds = soft_output.to('cpu').detach().numpy()
    # Take the index of the highest softmax score as the predicted class.
    _, predicted = torch.max(soft_output.data, 1)
    predicted = predicted.to('cpu').detach().numpy()
    if predicted[0] == 1:
        print(image_path + ": FAKE!")
    else:
        print(image_path + ": REAL")

if __name__ == '__main__':
    input_size = 224
    img_size = 224
    ratio = 224.0 / float(img_size)
    global normalize, transform
    # Data loading code
    normalize = transforms.Normalize(mean=[0.14300402, 0.1434545, 0.14277956],   # mean/std computed on the CASIA-SURF validation set
                                     std=[0.10050353, 0.100842826, 0.10034215])
    name = 'FeatherNetB'
    model = FeatherNet(se=True, avgdown=True)
    model_path = './checkpoints/FeatherNetB_bs32/_47_best.pth.tar'
    checkpoint = torch.load(model_path, map_location='cpu')
    print('load model:', model_path)
    # The checkpoint was saved from a DataParallel model, so strip the
    # leading "module." (7 characters) from each key before loading.
    model_dict = {}
    state_dict = model.state_dict()
    for k, v in checkpoint['state_dict'].items():
        if k[7:] in state_dict:
            model_dict[k[7:]] = v
    state_dict.update(model_dict)
    model.load_state_dict(state_dict)
    model.eval()
    transform = transforms.Compose([
        transforms.Resize(int(256 * ratio)),
        transforms.CenterCrop(img_size),
        transforms.ToTensor(),
        normalize,
    ])

    check_spoofing('./images/fake.jpg')
    check_spoofing('./images/real.jpg')
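On the tensorflow.js question: a minimal sketch of one possible route, converting PyTorch to Keras with pytorch2keras (already imported above) and then Keras to tensorflow.js with the tensorflowjs converter. The input shape, file names, and whether every FeatherNet op converts cleanly are assumptions, not something verified here.

import numpy as np
import torch
from pytorch2keras.converter import pytorch_to_keras

# `model` is the FeatherNet instance loaded in the script above.
dummy = torch.from_numpy(np.zeros((1, 3, 224, 224), dtype=np.float32))
k_model = pytorch_to_keras(model, dummy, [(3, 224, 224)],
                           change_ordering=True, verbose=True)   # NCHW -> NHWC for TF
k_model.save('feathernetb.h5')

# Then, from the shell (needs the `tensorflowjs` pip package):
#   tensorflowjs_converter --input_format=keras feathernetb.h5 ./tfjs_model
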
SoftwareGift commented 4 years ago

(quoting zhishao's comment above: the output is always [0])

See the reply below; that commenter got it working. You may have forgotten to switch the model to eval mode.
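For reference, a minimal sketch of the difference eval mode makes, with `x` standing for a preprocessed 1 x 3 x 224 x 224 input tensor:

import torch

# In train mode, BatchNorm uses per-batch statistics, which are unreliable
# for a single image; eval mode uses the running statistics learned in training.
model.train()
print(torch.softmax(model(x), dim=-1))

model.eval()
with torch.no_grad():
    print(torch.softmax(model(x), dim=-1))   # use this one for inference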

zhishao commented 4 years ago

I did run model.eval(), and I still get 0 for every image I try. My setup is: I use my phone to photograph a face shown on a computer screen, then run detection on that phone photo. Can this case be detected?

wangzhen6309271 commented 4 years ago

@zhishao Hi, did you ever solve the problem of the output always being 0? I'm running into the same issue now.

zhishao commented 4 years ago

I suspect this model simply cannot correctly recognize a face photographed off a screen. I tested with images from the test set and the three pretrained models, and the results were basically no better than guessing.

wangzhen6309271 commented 4 years ago

@zhishao Did you ever get a "fake" result? Could you share your test set with me?

S130111 commented 4 years ago

@zhishao I got training to run and the training results look good, but the test results are not great. Did you solve this?

Rakesh-Chekuri commented 4 years ago

@punisher97 Can you please share the fake and real images that you used? I am getting "real" for every image I have tested.

zhishao commented 4 years ago

@SoftwareGift Hello, for prediction do you use depth maps from a depth camera or ordinary RGB images? Looking at your code, training seems to use only the depth and label data. Is that right?

ckcraig01 commented 4 years ago

@SoftwareGift Hello, and thank you for sharing. Two questions:

  1. Is it OK to use RGB as the input to FeatherNetB? Your input has three channels, or does the depth map also have three channels?
  2. Following the description above, doesn't that only consider indices 0/1 of the 1024-dim feature vector?
archwine commented 4 years ago

The pretrained models the author provides are actually all depth models (except FeatherNetB_bs32-ir/_54.pth.tar); there is no RGB model, so testing with RGB naturally performs no better than guessing.
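For reference, a minimal sketch of feeding a depth frame instead of an RGB photo, assuming the depth map is stored as an image file and that converting it with `.convert('RGB')` (replicating one channel to three) matches how the repo's data loader prepares it; check the dataset code before relying on this. The file name is a placeholder.

import torch
from PIL import Image

depth = Image.open('./images/depth_face.jpg').convert('RGB')   # single-channel depth replicated to 3 channels
x = transform(depth).unsqueeze(0)
with torch.no_grad():
    pred = torch.softmax(model(x), dim=-1).argmax(dim=1).item()
print('FAKE' if pred == 1 else 'REAL')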

damvantai commented 4 years ago

(quoting msvargas's test script above)

thank you

sunjunlishi commented 4 years ago

@Epimenides7 You're absolutely right.

piyushlife commented 4 years ago

(quoting msvargas's test script above)

I think real is "1" while fake is "0". If you check the txt files in the data folder, you can work out the actual label values.
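A small sketch of checking the label convention directly from a data list file, assuming the CASIA-SURF style where each line is a set of file paths followed by a 0/1 label; the file name here is a placeholder:

from collections import Counter

counts = Counter()
with open('./data/val_label_list.txt') as f:   # placeholder; use the list file shipped with the data
    for line in f:
        parts = line.split()
        if parts:
            counts[parts[-1]] += 1             # last column is the label
print(counts)   # compare the 0/1 counts against the known real/fake proportions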

FatemeGhasemi commented 3 years ago

I don't know what "./checkpoints/FeatherNetB_bs32/_47_best.pth.tar" refers to. That folder only contains a README, and of the download links in the README, one is not a .tar but a .dmg (I can't even read the language of the download page) and the other is unusable.

yangjian1218 commented 2 years ago

The faces in the training data are all cropped out. Don't you need to crop the face out in real applications as well?
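For real-world use the input should indeed be a cropped face, matching how the training data was prepared. A minimal sketch using facenet-pytorch's MTCNN as the detector; the detector choice and file name are assumptions, and any face detector would do:

import torch
from PIL import Image
from facenet_pytorch import MTCNN

mtcnn = MTCNN(keep_all=False)
img = Image.open('frame.jpg')
boxes, _ = mtcnn.detect(img)                      # None if no face is found
if boxes is not None:
    x1, y1, x2, y2 = boxes[0]
    face = img.crop((x1, y1, x2, y2))             # crop the detected face before the transform
    with torch.no_grad():
        pred = torch.softmax(model(transform(face).unsqueeze(0)), dim=-1).argmax(dim=1).item()
    print('FAKE' if pred == 1 else 'REAL')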

yangjian1218 commented 2 years ago

(quoting the exchange above between zhishao and SoftwareGift about the output always being [0] and switching to eval mode)

Hello author, following this code I can indeed get a prediction, but why is the output [1, 1024] dimensional? Isn't this a binary classification? Also, looking at the FeatherNet definition, n_class=2 is never actually used.
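For what it's worth, a hypothetical sketch of putting an explicit two-class head on top of the 1024-dim backbone output; this is not how the original training code handles it, and the head would still need to be fine-tuned on labeled real/fake data before its predictions mean anything:

import torch
import torch.nn as nn

class FeatherNetWithHead(nn.Module):
    """Hypothetical wrapper: map the 1024-dim backbone output to 2 logits."""
    def __init__(self, backbone, n_class=2):
        super().__init__()
        self.backbone = backbone
        self.fc = nn.Linear(1024, n_class)

    def forward(self, x):
        feat = self.backbone(x)        # (batch, 1024)
        return self.fc(feat)           # (batch, 2) logits

clf = FeatherNetWithHead(model).eval()
with torch.no_grad():
    print(clf(torch.randn(1, 3, 224, 224)).shape)   # torch.Size([1, 2])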