radekd91 / inferno

🔥🔥🔥 Set the world of 3D faces on fire with INFERNO 🔥🔥🔥

Problems encountered when running FaceReconstruction/demo/demo_face_rec_on_video.py #29

Open jing54001 opened 1 month ago

jing54001 commented 1 month ago

I encountered this problem the first time I ran the script:

betas = torch.cat([shape_params, expression_params], dim=1)
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 1 but got size 4 for tensor number 1 in the list.

I found that after the face encoder runs, batch['shapecode'] has shape (1, 300), while the code's default batch_size is 4, so I added the following modification:

batch = self.face_encoder(batch, return_features=return_features)
batch['shapecode'] = batch['shapecode'].expand(4, -1)
check_nan(batch)
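For reference, a batch-size-agnostic version of the same workaround, so the target size is not hardcoded to 4. This is only a sketch: it uses NumPy's broadcast_to, which mirrors what torch.Tensor.expand does for a (1, 300) tensor, and expected_batch is a hypothetical variable standing in for whatever batch size the pipeline actually uses.

```python
import numpy as np

# Stand-in for batch['shapecode'] after the face encoder: shape (1, 300).
shape_code = np.zeros((1, 300), dtype=np.float32)

# Hypothetical expected batch size; with torch this would be
# batch['shapecode'].expand(expected_batch, -1), which returns a
# broadcasted view without copying data, just like broadcast_to here.
expected_batch = 4
expanded = np.broadcast_to(shape_code, (expected_batch, shape_code.shape[1]))
```

Reading the batch size from the data instead of hardcoding 4 avoids a second mismatch if the demo is ever run with a different batch size.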

With that change the script runs normally at first, but after processing part of the video it suddenly fails with the error below. How can I solve this? (Aside: when running insightface's download.py, the antelopev2 download failed because the old link is dead, so I downloaded it from https://huggingface.co/MonsterMMORPG/tools/resolve/main/antelopev2.zip instead.)


[WARNING] Processing MICA image in forward pass. This is very inefficient for training. Please precompute the MICA images in the data loader.
(warning repeated for every frame)
 17%|██████████████████████████                  | 35/204 [00:06<00:33,  5.04it/s]
Traceback (most recent call last):
  File "demo_face_rec_on_video.py", line 174, in <module>
    main()
  File "demo_face_rec_on_video.py", line 169, in main
    reconstruct_video(args)
  File "demo_face_rec_on_video.py", line 85, in reconstruct_video
    vals = test(model, img)
  File "/data2/jyy/inferno/inferno_apps/FaceReconstruction/utils/test.py", line 7, in test
    values = model(batch, training=False, validation=False)
  File "/data2/jyy/miniconda3/envs/aios/envs/face/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/data2/jyy/inferno/inferno/models/FaceReconstruction/FaceRecBase.py", line 357, in forward
    batch = self.encode(batch, training=training)
  File "/data2/jyy/inferno/inferno/models/FaceReconstruction/FaceRecBase.py", line 522, in encode
    batch = self.face_encoder(batch, return_features=return_features)
  File "/data2/jyy/miniconda3/envs/aios/envs/face/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/data2/jyy/inferno/inferno/models/FaceReconstruction/FaceEncoder.py", line 18, in forward
    return self.encode(batch, return_features=return_features)
  File "/data2/jyy/inferno/inferno/models/FaceReconstruction/FaceEncoder.py", line 270, in encode
    batch = self.mica_deca_encoder.encode(batch, return_features=return_features)
  File "/data2/jyy/inferno/inferno/models/FaceReconstruction/FaceEncoder.py", line 211, in encode
    batch = self.mica_encoder.encode(batch, return_features=return_features)
  File "/data2/jyy/inferno/inferno/models/FaceReconstruction/FaceEncoder.py", line 173, in encode
    mica_image = self.mica_preprocessor(image, fan_landmarks, landmarks_validity=landmarks_validity)
  File "/data2/jyy/inferno/inferno/models/mica/MicaInputProcessing.py", line 58, in __call__
    mica_image = self._dirty_image_preprocessing(input_image)
  File "/data2/jyy/inferno/inferno/models/mica/MicaInputProcessing.py", line 179, in _dirty_image_preprocessing
    blob, _ = get_arcface_input(face, img)
  File "/data2/jyy/inferno/inferno/models/mica/detector.py", line 39, in get_arcface_input
    aimg = face_align.norm_crop(img, landmark=face.kps)
  File "/data2/jyy/miniconda3/envs/aios/envs/face/lib/python3.8/site-packages/insightface/utils/face_align.py", line 72, in norm_crop
    warped = cv2.warpAffine(img, M, (image_size, image_size), borderValue=0.0)
TypeError: Expected Ptr<cv::UMat> for argument 'M'
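From what I can tell, cv2.warpAffine raises this TypeError when the M it receives is not a proper 2x3 numeric ndarray (for example None, when the detector produces nothing usable for a frame). A minimal guard one could add before the warp, assuming nothing about inferno's or insightface's internals (is_valid_affine is a hypothetical helper, not part of either library):

```python
import numpy as np

def is_valid_affine(M):
    """Return True if M is usable as the 'M' argument of cv2.warpAffine.

    cv2.warpAffine expects a 2x3 floating-point NumPy array; passing None,
    a tuple, or any non-array raises
    "TypeError: Expected Ptr<cv::UMat> for argument 'M'".
    The finiteness check additionally rejects matrices containing NaN/inf
    from a degenerate landmark fit. Hypothetical helper, not inferno API.
    """
    return (isinstance(M, np.ndarray)
            and M.shape == (2, 3)
            and M.dtype in (np.float32, np.float64)
            and bool(np.isfinite(M).all()))
```

Frames where this check fails could then be skipped (or the previous frame's crop reused) instead of crashing the whole run.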