Open 55renchen opened 1 week ago
In the meantime, you can manually adjust the composition of the input images or videos to highlight the main subject, making it easier for the detection algorithm to capture essential information without omission. I'm currently working on optimizing VRAM usage, and these issues will be fully resolved in future updates. Please stay tuned.
When I run python inference_video.py, the following error occurs:
Traceback (most recent call last):
  File "/root/HelloMeme/inference_video.py", line 117, in <module>
    inference_video(engines, ref_img_path, drive_video_path, save_path, trans_ratio=0.0)
  File "/root/miniconda3/envs/hellomeme/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/root/HelloMeme/inference_video.py", line 50, in inference_video
    (drive_face_parts, drive_coeff, drive_rot, drive_trans) = get_drive_params(engines['face_aligner'],
  File "/root/HelloMeme/hellomeme/utils.py", line 234, in get_drive_params
    frame_list, landmark_list = det_landmarks(face_aligner, frame_list, save_size=(512, 512), reset=False)
  File "/root/HelloMeme/hellomeme/utils.py", line 220, in det_landmarks
    save_landmark_list = np.stack(save_landmark_list, axis=0).astype(np.float16)
  File "/root/miniconda3/envs/hellomeme/lib/python3.10/site-packages/numpy/_core/shape_base.py", line 453, in stack
    raise ValueError('need at least one array to stack')
ValueError: need at least one array to stack
How can I solve it?
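For context: the traceback shows that det_landmarks passed an empty list to np.stack, which is what happens when the face detector finds no face in any frame of the drive video. Below is a minimal sketch of the failure mode plus a defensive guard; safe_stack_landmarks is a hypothetical helper, not HelloMeme's actual code, shown only to illustrate where a clearer error could be raised.

```python
import numpy as np

# Reproduce the error: np.stack refuses an empty sequence, which is
# exactly what det_landmarks hands it when no face is detected.
empty_landmark_list = []
try:
    np.stack(empty_landmark_list, axis=0)
except ValueError as e:
    print(e)  # need at least one array to stack

def safe_stack_landmarks(landmark_list):
    """Hypothetical guard around the np.stack call in det_landmarks.

    Fails with an actionable message instead of the bare numpy error
    when the detector returned no landmarks at all.
    """
    if len(landmark_list) == 0:
        raise RuntimeError(
            "No faces detected in the drive video; make sure a face is "
            "clearly visible and sufficiently large in every frame."
        )
    return np.stack(landmark_list, axis=0).astype(np.float16)
```

If the guard fires, the fix is usually on the data side: crop or recompose the drive video so the face dominates the frame, as suggested above.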