KwaiVGI / LivePortrait

Bring portraits to life!
https://liveportrait.github.io

TypeError: 'NoneType' object is not subscriptable #181

Closed: mittimi closed this issue 1 month ago

mittimi commented 1 month ago

Hi. image2video works fine. However, when I run video2video, I get the following error. The execution environment is a venv on Paperspace.

πŸš€Animating... ━━━━━━━━━━━━━━━━━╺━━━━━━━━━━━━━━━━━━━━━━ 44% 0:00:04
Traceback (most recent call last):
  File "/tmp/LivePortrait/inference.py", line 57, in <module>
    main()
  File "/tmp/LivePortrait/inference.py", line 53, in main
    live_portrait_pipeline.execute(args)
  File "/tmp/LivePortrait/src/live_portrait_pipeline.py", line 252, in execute
    lip_delta_before_animation = self.live_portrait_wrapper.retarget_lip(x_s, combined_lip_ratio_tensor_before_animation)
  File "/tmp/LivePortrait/src/live_portrait_wrapper.py", line 230, in retarget_lip
    delta = self.stitching_retargeting_module['lip'](feat_lip)
TypeError: 'NoneType' object is not subscriptable

cleardusk commented 1 month ago

Thanks for your feedback. @mittimi

Could you please provide the inputs for us to debug? Do the default cases cause errors?

mittimi commented 1 month ago

Thanks. @cleardusk

Sorry, I am not familiar with error reporting, so I may be providing incorrect information.

The command I ran is:

!/tmp/venvlp/bin/python3 inference.py -s animations/s13.mp4 -d animations/d0.mp4

The output is as follows:

[06:28:00] Load appearance_feature_extractor done.          live_portrait_wrapper.py:40
           Load motion_extractor done.                      live_portrait_wrapper.py:43
[06:28:01] Load warping_module done.                        live_portrait_wrapper.py:46
           Load spade_generator done.                       live_portrait_wrapper.py:49
[06:28:02] LandmarkRunner warmup time: 0.693s               landmark_runner.py:95
[06:28:03] FaceAnalysisDIY warmup time: 0.861s              face_analysis_diy.py:79
[06:28:04] Load source video from                           live_portrait_pipeline.py:97
           animations/s13.mp4, FPS is 25
           Load driving video from:                         live_portrait_pipeline.py:128
           animations/d0.mp4, FPS is 25
[06:28:05] Start making driving motion template...          live_portrait_pipeline.py:134
Making motion templates... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% 0:00:00
[06:28:06] Dump motion template to                          live_portrait_pipeline.py:159
           animations/d0.pkl
           Prepared pasteback mask done.                    live_portrait_pipeline.py:168
           Start making source motion template...           live_portrait_pipeline.py:178
[06:28:07] Source video is cropped, 78 frames are           live_portrait_pipeline.py:183
           processed.
Making motion templates... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% 0:00:00
[06:28:08] The animated video consists of 78                live_portrait_pipeline.py:231
           frames.
πŸš€Animating... ━━━━━━━━━━━━━━━━━╺━━━━━━━━━━━━━━━━━━━━━━  44% 0:00:04
Traceback (most recent call last):
  File "/tmp/LivePortrait/inference.py", line 57, in <module>
    main()
  File "/tmp/LivePortrait/inference.py", line 53, in main
    live_portrait_pipeline.execute(args)
  File "/tmp/LivePortrait/src/live_portrait_pipeline.py", line 252, in execute
    lip_delta_before_animation = self.live_portrait_wrapper.retarget_lip(x_s, combined_lip_ratio_tensor_before_animation)
  File "/tmp/LivePortrait/src/live_portrait_wrapper.py", line 230, in retarget_lip
    delta = self.stitching_retargeting_module['lip'](feat_lip)
TypeError: 'NoneType' object is not subscriptable
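(Editor's note: the error pattern itself is easy to reproduce. The sketch below, with a hypothetical `WrapperSketch` class standing in for the real wrapper, shows how an optional sub-module attribute that stays `None` when its checkpoint is missing triggers exactly this `TypeError` on the first indexed access.)

```python
# Minimal reproduction of the error pattern: subscripting a None attribute.
# WrapperSketch is hypothetical; it mimics a loader that only populates the
# retargeting sub-modules when the checkpoint file was found.
class WrapperSketch:
    def __init__(self, checkpoint_found: bool):
        # If the checkpoint is missing, the attribute is left as None.
        self.stitching_retargeting_module = (
            {"lip": lambda feat: feat} if checkpoint_found else None
        )

    def retarget_lip(self, feat_lip):
        # With a missing checkpoint this raises:
        # TypeError: 'NoneType' object is not subscriptable
        return self.stitching_retargeting_module["lip"](feat_lip)
```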

By the way, image-to-video works fine, as shown in the following log.

!/tmp/venvlp/bin/python3 inference.py -s animations/face5.jpg -d animations/d0.mp4
[06:59:32] Load appearance_feature_extractor done.          live_portrait_wrapper.py:40
           Load motion_extractor done.                      live_portrait_wrapper.py:43
[06:59:33] Load warping_module done.                        live_portrait_wrapper.py:46
           Load spade_generator done.                       live_portrait_wrapper.py:49
[06:59:34] LandmarkRunner warmup time: 0.700s               landmark_runner.py:95
[06:59:35] FaceAnalysisDIY warmup time: 0.862s              face_analysis_diy.py:79
           Load source image from                           live_portrait_pipeline.py:90
           animations/face5.jpg
           Load driving video from:                         live_portrait_pipeline.py:128
           animations/d0.mp4, FPS is 25
[06:59:36] Start making driving motion template...          live_portrait_pipeline.py:134
Making motion templates... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% 0:00:00
[06:59:37] Dump motion template to                          live_portrait_pipeline.py:159
           animations/d0.pkl
           Prepared pasteback mask done.                    live_portrait_pipeline.py:168
[06:59:38] The animated video consists of 78                live_portrait_pipeline.py:231
           frames.
πŸš€Animating... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% 0:00:06
Concatenating result... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% 0:00:00
Writing ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% 0:00:01
Writing ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━   0% -:--:--
[swscaler @ 0x66dbe80] Warning: data is not aligned! This can lead to a speed loss
Writing ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% 0:00:01
[06:59:48] Animated template: animations/d0.pkl,            live_portrait_pipeline.py:396
           you can specify `-d` argument with this
           template path next time to avoid
           cropping video, motion making and
           protecting privacy.
           Animated video:                                  live_portrait_pipeline.py:397
           animations/face5--d0.mp4
           Animated video with concat:                      live_portrait_pipeline.py:398
           animations/face5--d0_concat.mp4
zzzweakman commented 1 month ago

Hi @mittimi, could you please verify whether `self.stitching_retargeting_module` in `live_portrait_wrapper.py` is loaded correctly? Additionally, does the size of your `stitching_retargeting_module.pth` match the size indicated in the table in the README?
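(Editor's note: a quick way to perform the check suggested here is to confirm the checkpoint file exists on disk and compare its size against the README table. This is a hedged sketch; the path in the example is an assumption and should point at wherever your weights actually live.)

```python
import os

def checkpoint_size_mb(path):
    """Return the checkpoint's size in MB, or None if the file is missing.

    If this returns None, the loader will have nothing to load and attributes
    such as stitching_retargeting_module can be left as None, producing the
    'NoneType' object is not subscriptable error above.
    """
    if not os.path.isfile(path):
        return None
    return os.path.getsize(path) / (1024 * 1024)

# Hypothetical usage -- adjust the path to your own layout:
# size = checkpoint_size_mb("pretrained_weights/.../stitching_retargeting_module.pth")
# if size is None, the file is missing or misplaced.
```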

mittimi commented 1 month ago

@zzzweakman, thank you! You helped me realize my elementary mistake.

The retargeting_models folder containing stitching_retargeting_module.pth was in the wrong place in the directory hierarchy. The cause was that I couldn't get git-lfs to install properly on Paperspace, so I had to build the folder structure manually.

Thank you so much.
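(Editor's note: a related failure mode when git-lfs is unavailable is that a clone leaves small text pointer files in place of the real weights; a Git LFS pointer file begins with the line `version https://git-lfs.github.com/spec/v1`. The sketch below, with assumed paths, flags any `.pth` file under a weights directory that is actually an LFS pointer rather than a checkpoint.)

```python
import os

# First bytes of every Git LFS pointer file (per the LFS pointer spec).
LFS_MAGIC = b"version https://git-lfs"

def find_lfs_pointers(root):
    """Walk `root` and return paths of .pth files that are LFS pointers."""
    pointers = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(".pth"):
                continue
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                head = f.read(len(LFS_MAGIC))
            if head == LFS_MAGIC:
                pointers.append(path)
    return pointers
```

Any path this returns is a file that needs to be re-downloaded (e.g. via `git lfs pull` or a manual download), since a pointer file is only a few hundred bytes and cannot be loaded as a checkpoint.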