MarekKowalski / FaceSwap

3D face swapping implemented in Python
MIT License

Face texture. #19

Open SurrealNautilus opened 5 years ago

SurrealNautilus commented 5 years ago

Hi! I'm amazed by the quality of your work! I wonder if it is possible to put a more abstract texture on the face. Is that possible?

MarekKowalski commented 5 years ago

Hi, In order to map a different texture you will need to provide the texture coordinates to textureCoords on line 41 of the zad2.py file. Alternatively, if the texture you want to use resembles a face but is too difficult for the landmark detector to handle, you can modify the getTextureCoords function to work with manually specified facial landmarks.

Marek
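
As a rough sketch of the second option, manually specified landmarks for the texture image could be kept in a small text file and loaded like this. The file format, the 68-point dlib-style ordering, and the (2, N) array layout are assumptions for illustration, not something the repository prescribes:

    import numpy as np

    def load_manual_landmarks(path):
        # Hypothetical helper: the file is assumed to contain one "x y" pair
        # per line, 68 points in dlib ordering, annotated by hand on the
        # texture image.
        points = np.loadtxt(path)          # shape (68, 2)
        # Return a (2, 68) array: x coordinates in row 0, y in row 1.
        return points.T

These points could then be fed to the model-fitting step inside getTextureCoords in place of the detector output.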

wangbo9426 commented 4 years ago

Hi, can you explain the getTextureCoords function?

MarekKowalski commented 4 years ago

Hi, given an image, this function detects the face and its landmarks. It then fits the 3D model to the landmarks and outputs the image coordinates onto which the vertices of the 3D model project. Marek
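
For the detection step mentioned above, a minimal dlib-based sketch might look like the following; this assumes dlib and its 68-point predictor model file are available, and it mirrors the description rather than the exact code in the repository:

    import numpy as np
    import dlib

    detector = dlib.get_frontal_face_detector()
    predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

    def detect_landmarks(image):
        # Detect the first face in the image and return its 68 landmarks as a
        # (2, 68) array of image coordinates (the layout is an assumption).
        rects = detector(image, 1)
        shape = predictor(image, rects[0])
        return np.array([[p.x, p.y] for p in shape.parts()], dtype=np.float64).T

The fitted scale, rotation, translation and blendshape weights are then used to project every vertex of the 3D model, which is what the fun method discussed below computes.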

wangbo9426 commented 4 years ago

Hi, thanks for your response! Actually, I don't understand the code details. Could you add comments in English?

 def fun(self, x, params):
    # skalowanie (scaling)
    s = params[0]
    # rotacja (rotation)
    r = params[1:4]
    # przesuniecie / translacja (translation)
    t = params[4:6]
    w = params[6:]
    mean3DShape = x[0]
    blendshapes = x[1]
    # macierz rotacji z wektora rotacji, wzor Rodriguesa
    # (rotation matrix from the rotation vector, Rodrigues' formula)
    R = cv2.Rodrigues(r)[0]
    P = R[:2]
    shape3D = mean3DShape + np.sum(w[:, np.newaxis, np.newaxis] * blendshapes, axis=0)

    projected = s * np.dot(P, shape3D) + t[:, np.newaxis]

    return projected
MarekKowalski commented 4 years ago

Here it is:

 def fun(self, x, params):
    # scale
    s = params[0]
    # rotation
    r = params[1:4]
    # translation
    t = params[4:6]
    # blendshape weights
    w = params[6:]
    mean3DShape = x[0]
    blendshapes = x[1]
    # going from rotation vector to rotation matrix, Rodrigues' formula
    R = cv2.Rodrigues(r)[0]
    # keep only the first two rows of R for an orthographic projection
    P = R[:2]
    # 3D shape is the mean shape plus a weighted sum of blendshapes
    shape3D = mean3DShape + np.sum(w[:, np.newaxis, np.newaxis] * blendshapes, axis=0)
    # orthographic projection into the image plane
    projected = s * np.dot(P, shape3D) + t[:, np.newaxis]

    return projected
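
To make the parameter layout concrete, here is a small toy example with made-up sizes (5 vertices, 2 blendshapes) that repeats the same computation outside the class; the values are arbitrary and only meant to show the expected array shapes:

    import numpy as np
    import cv2

    N, K = 5, 2                                  # toy sizes: 5 vertices, 2 blendshapes
    mean3DShape = np.random.rand(3, N)           # 3 x N mean face shape
    blendshapes = np.random.rand(K, 3, N)        # K blendshapes, each 3 x N

    # params layout matches the slicing in fun(): [s, r0, r1, r2, tx, ty, w0..wK-1]
    params = np.concatenate(([1.0],              # s: scale
                             [0.0, 0.2, 0.0],    # r: rotation vector (Rodrigues)
                             [100.0, 80.0],      # t: 2D translation
                             np.zeros(K)))       # w: blendshape weights

    s, r, t, w = params[0], params[1:4], params[4:6], params[6:]
    R = cv2.Rodrigues(r)[0]                      # 3x3 rotation matrix
    P = R[:2]                                    # first two rows: orthographic projection
    shape3D = mean3DShape + np.sum(w[:, np.newaxis, np.newaxis] * blendshapes, axis=0)
    projected = s * np.dot(P, shape3D) + t[:, np.newaxis]
    print(projected.shape)                       # (2, 5): 2D positions of all vertices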