Closed guotianli closed 12 months ago
During the data preprocessing phase, we align faces based on the landmark positions stored in the JSON file. You can use the detector to detect landmarks and generate such files. For more details, please refer to the implementation details in our paper and the code below:
`lib/extract_frames_ldm_ff++.py`

```python
faces = face_detector(frame, 1)
if len(faces) == 0:
    tqdm.write('No faces in {}:{}'.format(cnt_frame, os.path.basename(video_path)))
    continue
landmarks = list()   # landmark array for each detected face
size_list = list()   # bounding-box area of each detected face
for face_idx in range(len(faces)):
    landmark = face_predictor(frame, faces[face_idx])
    landmark = face_utils.shape_to_np(landmark)
    x0, y0 = landmark[:, 0].min(), landmark[:, 1].min()
    x1, y1 = landmark[:, 0].max(), landmark[:, 1].max()
    face_s = (x1 - x0) * (y1 - y0)
    size_list.append(face_s)
    landmarks.append(landmark)
# keep only the landmarks of the biggest detected face
landmarks = np.concatenate(landmarks).reshape((len(size_list),) + landmark.shape)
landmarks = landmarks[np.argsort(np.array(size_list))[::-1]][0]
```
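For a still-image dataset, the per-frame loop above collapses to one pass per image file. Below is a minimal sketch of the biggest-face selection and the JSON dump, with the dlib `face_detector`/`face_predictor` calls abstracted away: it assumes something upstream already produced one `(68, 2)` landmark array per detected face. The JSON key names (`image`, `landmarks`) are assumptions for illustration, not necessarily the schema this repo expects.

```python
import json
import numpy as np

def select_biggest_face(landmark_list):
    """Given one (68, 2) landmark array per detected face in an image,
    return the landmarks of the face with the largest bounding box,
    mirroring the size_list/argsort logic used for video frames."""
    sizes = []
    for lm in landmark_list:
        x0, y0 = lm[:, 0].min(), lm[:, 1].min()
        x1, y1 = lm[:, 0].max(), lm[:, 1].max()
        sizes.append((x1 - x0) * (y1 - y0))
    stacked = np.stack(landmark_list)  # shape: (n_faces, 68, 2)
    return stacked[np.argsort(np.array(sizes))[::-1]][0]

def save_landmark_json(image_path, landmark_list, out_path):
    """Write the biggest-face landmarks for one image to a JSON file
    (key names here are hypothetical)."""
    biggest = select_biggest_face(landmark_list)
    with open(out_path, 'w') as f:
        json.dump({'image': image_path, 'landmarks': biggest.tolist()}, f)

# Synthetic example with two "faces"; the second spans a larger box.
small = np.tile(np.array([[10, 10]]), (68, 1)) + np.arange(68)[:, None]
big = np.tile(np.array([[100, 100]]), (68, 1)) + 3 * np.arange(68)[:, None]
chosen = select_biggest_face([small, big])
```

In the real pipeline you would fill `landmark_list` by running the same dlib detector and shape predictor the video script uses, just on `cv2.imread(image_path)` instead of a decoded frame, then call `save_landmark_json` once per image.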
I want to use this network to detect fake images in an image dataset rather than a video dataset. How should I modify the code? Also, for a new dataset, how do I generate the corresponding JSON file? I would appreciate advice from the experts.