aihacker111 / Efficient-Live-Portrait

Fast running Live Portrait with TensorRT and ONNX models
MIT License
122 stars · 10 forks

ComfyUI workflow examples #10

Open · piovis2023 opened this issue 1 month ago

piovis2023 commented 1 month ago

Do you have any ComfyUI workflow examples?

There are quite a few variations of Live Portrait for ComfyUI.

This version seems the most promising. I'd like to see an example so I can learn from it.

Thank you

aihacker111 commented 1 month ago

@piovis2023 Please wait a few days. I'm dealing with a problem in my new research paper on a Stable Diffusion task, so I don't have much time for this right now. Stay tuned; I'll push some big features in the next few days. Thank you for enjoying it!

piovis2023 commented 1 month ago

You EXCELLENT person.

OK looking forward to it. Good luck with your research paper. Let me know if I can help you with it at all!

aihacker111 commented 1 month ago

@piovis2023 I've just looked at the ComfyUI workflow. I think I'll build this project as a Python package that you can drop into ComfyUI to run. Details to follow.

piovis2023 commented 1 month ago

Sounds great. I'm excited to test it and provide quick feedback and suggestions to help you perfect it!

aihacker111 commented 1 month ago

@piovis2023 Tonight I'll add TensorRT support, and tomorrow a new feature integrating SadTalker and the Anything-anypose model. Maybe next week I'll integrate controlnet-open-pose and animate-diff-motion from ByteDance.

piovis2023 commented 1 month ago

Sounds good. I'm playing around with incorporating live portrait with mimic motion at the moment.

I wonder if MusePose would be better. MusePose is just a BIG headache to install on a Windows OS due to its sensitive dependencies.

I heard TensorRT has lots of complications with diffusion models so I uninstalled it.

I encourage you to release a workflow version without TensorRT first. I can experiment and test it for you while you try to add TensorRT if you like.

If you are successful at this, I'll give you some really good ideas on how to make your workflow game-changing!

aihacker111 commented 1 month ago

@piovis2023 TensorRT is much faster, so please wait for the latest update. I'm also updating ONNX Runtime support, so you'll be able to use either one.

aihacker111 commented 1 month ago

@piovis2023 It would also help if someone could clean up or refactor some parts of the code. I'm tired from coding every day across many projects.

piovis2023 commented 1 month ago

Ah - I'm not a coder BUT I'm happy to help you find some people that might be able to do this.

Would that be helpful for you?

aihacker111 commented 1 month ago

@piovis2023 TensorRT is working. I'm testing it on Colab and will push it at 9:00. You can use my Colab notebook to test it; it's much faster.

piovis2023 commented 1 month ago

Oh! Well done!

OK, I can try Colab for you. I really need ComfyUI though.

aihacker111 commented 1 month ago

@piovis2023 The new version is available, with support for the latest CUDA and TensorRT on Colab. Please try the notebook in the colab folder.

PiovisTeam commented 1 month ago

This is great; I've just run the Colab. Where would I find the output, please?

I've also seen this error; can you help?

Downloaded successfully and already saved
Downloaded successfully and already saved
[07/21/2024-06:50:47] [TRT] [W] Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
(the TRT warning above is printed 5 times)
2024-07-21 06:50:49.554036004 [W:onnxruntime:, execution_frame.cc:879 VerifyOutputSizes] Expected shape from model of {800,10} does not match actual shape of {512,10} for output 500
2024-07-21 06:50:49.554200532 [W:onnxruntime:, execution_frame.cc:879 VerifyOutputSizes] Expected shape from model of {800,4} does not match actual shape of {512,4} for output 497
2024-07-21 06:50:49.554323000 [W:onnxruntime:, execution_frame.cc:879 VerifyOutputSizes] Expected shape from model of {800,1} does not match actual shape of {512,1} for output 494
2024-07-21 06:50:49.559504514 [W:onnxruntime:, execution_frame.cc:879 VerifyOutputSizes] Expected shape from model of {3200,10} does not match actual shape of {2048,10} for output 477
2024-07-21 06:50:49.559898632 [W:onnxruntime:, execution_frame.cc:879 VerifyOutputSizes] Expected shape from model of {3200,4} does not match actual shape of {2048,4} for output 474
2024-07-21 06:50:49.560249341 [W:onnxruntime:, execution_frame.cc:879 VerifyOutputSizes] Expected shape from model of {3200,1} does not match actual shape of {2048,1} for output 471
2024-07-21 06:50:49.578026405 [W:onnxruntime:, execution_frame.cc:879 VerifyOutputSizes] Expected shape from model of {12800,10} does not match actual shape of {8192,10} for output 454
2024-07-21 06:50:49.579605106 [W:onnxruntime:, execution_frame.cc:879 VerifyOutputSizes] Expected shape from model of {12800,4} does not match actual shape of {8192,4} for output 451
2024-07-21 06:50:49.580930056 [W:onnxruntime:, execution_frame.cc:879 VerifyOutputSizes] Expected shape from model of {12800,1} does not match actual shape of {8192,1} for output 448
(the nine VerifyOutputSizes warnings above are then repeated a second time with later timestamps)
Traceback (most recent call last):
  File "/content/Efficient-Live-Portrait/run_live_portrait.py", line 27, in <module>
    main(args.video, args.image, args.run_time, args.real_time, args.half_precision)
  File "/content/Efficient-Live-Portrait/run_live_portrait.py", line 14, in main
    live_portrait.render(live_portrait, video_path_or_id=video_path, image_path=source_img, real_time=real_time)
  File "/content/Efficient-Live-Portrait/LivePortrait/fast_live_portrait_pipeline.py", line 160, in render
    mask_ori, driving_rgb_lst, i_d_lsts, i_p_paste_lst, _, n_frames, input_eye_ratio_lsts, input_lip_ratio_lsts = live_portrait.process_source_motion(
  File "/content/Efficient-Live-Portrait/LivePortrait/live_portrait/portrait.py", line 52, in process_source_motion
    driving_rgb_lst = load_driving_info(source_motion)
  File "/content/Efficient-Live-Portrait/LivePortrait/commons/utils/utils.py", line 107, in load_driving_info
    driving_video_ori = load_images_from_video(driving_info)
  File "/content/Efficient-Live-Portrait/LivePortrait/commons/utils/utils.py", line 101, in load_images_from_video
    reader = imageio.get_reader(file_path)
  File "/usr/local/lib/python3.10/dist-packages/imageio/v2.py", line 293, in get_reader
    return image_file.legacy_get_reader(**kwargs)
  File "/usr/local/lib/python3.10/dist-packages/imageio/core/legacy_plugin_wrapper.py", line 116, in legacy_get_reader
    return self._format.get_reader(self._request)
  File "/usr/local/lib/python3.10/dist-packages/imageio/core/format.py", line 221, in get_reader
    return self.Reader(self, request)
  File "/usr/local/lib/python3.10/dist-packages/imageio/core/format.py", line 312, in __init__
    self._open(**self.request.kwargs.copy())
  File "/usr/local/lib/python3.10/dist-packages/imageio/plugins/ffmpeg.py", line 343, in _open
    self._initialize()
  File "/usr/local/lib/python3.10/dist-packages/imageio/plugins/ffmpeg.py", line 494, in _initialize
    self._meta.update(self._read_gen.__next__())
  File "/usr/local/lib/python3.10/dist-packages/imageio_ffmpeg/_io.py", line 297, in read_frames
    raise IOError(fmt.format(err2))
OSError: Could not load meta information
=== stderr ===

ffmpeg version 4.2.2-static https://johnvansickle.com/ffmpeg/ Copyright (c) 2000-2019 the FFmpeg developers
  built with gcc 8 (Debian 8.3.0-6)
  configuration: --enable-gpl --enable-version3 --enable-static --disable-debug --disable-ffplay --disable-indev=sndio --disable-outdev=sndio --cc=gcc --enable-fontconfig --enable-frei0r --enable-gnutls --enable-gmp --enable-libgme --enable-gray --enable-libaom --enable-libfribidi --enable-libass --enable-libvmaf --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librubberband --enable-libsoxr --enable-libspeex --enable-libsrt --enable-libvorbis --enable-libopus --enable-libtheora --enable-libvidstab --enable-libvo-amrwbenc --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libdav1d --enable-libxvid --enable-libzvbi --enable-libzimg
  libavutil      56. 31.100 / 56. 31.100
  libavcodec     58. 54.100 / 58. 54.100
  libavformat    58. 29.100 / 58. 29.100
  libavdevice    58.  8.100 / 58.  8.100
  libavfilter     7. 57.100 /  7. 57.100
  libswscale      5.  5.100 /  5.  5.100
  libswresample   3.  5.100 /  3.  5.100
  libpostproc    55.  5.100 / 55.  5.100
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x6670580] Format mov,mp4,m4a,3gp,3g2,mj2 detected only with low score of 1, misdetection possible!
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x6670580] moov atom not found
/content/Efficient-Live-Portrait/experiment_examples/examples/driving/Lipbite_V2.mp4: Invalid data found when processing input

aihacker111 commented 1 month ago

Which command line did you use to run it? Show me.

aihacker111 commented 1 month ago

@piovis2023 That means your input video can't be opened:

[mov,mp4,m4a,3gp,3g2,mj2 @ 0x6670580] moov atom not found
/content/Efficient-Live-Portrait/experiment_examples/examples/driving/Lipbite_V2.mp4: Invalid data found when processing input

aihacker111 commented 1 month ago

@piovis2023 Please make sure your video has an .mp4 extension and plays on your computer, then upload it to Drive and update the path.
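For anyone hitting the same "moov atom not found" failure: a well-formed MP4 contains an `ftyp` box near the start and a `moov` box somewhere in the file, and uploads truncated partway through are typically missing `moov`. Below is a tiny heuristic pre-check (not a full MP4 parser; the helper name is made up for illustration) you could run on the driving video before launching the pipeline:

```python
# Heuristic pre-check for the "moov atom not found" ffmpeg error.
# A playable MP4 starts with an 'ftyp' box and must contain a 'moov' box;
# files truncated during upload are typically missing 'moov'.
# NOTE: illustrative sketch only, not a full ISO-BMFF parser.

def looks_like_playable_mp4(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()
    # 'ftyp' appears within the first few bytes of a well-formed MP4;
    # 'moov' can sit anywhere (often at the end for non-faststart files).
    return b"ftyp" in data[:32] and b"moov" in data
```

If this returns False for your driving video, re-export it from your editor and re-upload before changing the path.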

piovis2023 commented 1 month ago

Cảm ơn ("thank you"), @aihacker111 (just realised you are from Vietnam, one of my favourite countries in the world!!!)

Good news: the Colab script ran.

Bad news: the results were scary!

Our tests work pretty well on the other LivePortrait solutions. Let me know how I can help

[screenshot of the output attached]

Thanks.

aihacker111 commented 1 month ago

@piovis2023 You can get better results by turning off fp16 and using the original fp32: remove the -fp16 flag from the command line.
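A rough intuition for why dropping the flag helps: fp16 keeps only 10 mantissa bits (roughly 3 decimal digits), so each fp32-to-fp16 cast rounds with a relative error of up to about 5e-4, and those errors accumulate through a deep network. A small NumPy sketch of the single-cast error (my illustration, not code from this repo):

```python
import numpy as np

# fp16 stores 10 mantissa bits, so a single fp32 -> fp16 cast already
# rounds by up to ~5e-4 relative error. Accumulated across a deep warping
# network, that rounding can surface as visible artifacts, which is why
# running in fp32 (no -fp16) can look noticeably better.
rng = np.random.default_rng(0)
x = rng.uniform(0.5, 2.0, 10_000).astype(np.float32)
rel_err = np.abs(x.astype(np.float16).astype(np.float32) - x) / np.abs(x)
print(f"max single-cast relative error: {rel_err.max():.1e}")
```

The trade-off is the usual one: fp16 roughly halves memory and can be much faster on GPUs with tensor cores, at the cost of this precision loss.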

piovis2023 commented 1 month ago

@aihacker111 YES!!!

Much much better.

Only a little bit of warping.

Areas of attention (bonus):

We are going to try another test video with the following.

It will be interesting to explore the real limitations of your good work!

When could you get this into ComfyUI? I would really appreciate that!

aihacker111 commented 1 month ago

@piovis2023 Thank you, my friend. I'm taking time for the new paper; the latest update comes tonight. Stay tuned.

PiovisTeam commented 1 month ago

@aihacker111 the 'animations' folder no longer appears with the new animations when the script is run. Has anything been updated or changed? It was working before.

aihacker111 commented 1 month ago

@PiovisTeam If you run on Colab, the refresh button in the left panel will show it.

piovis2023 commented 1 month ago

Yeah @aihacker111, I'm having the same issue. The animations folder stopped appearing. I closed my computer, reloaded the same file, and the animations folder didn't come up this time.

Did something change my friend?

aihacker111 commented 1 month ago

Are you running on Colab or on your own computer?

PiovisTeam commented 1 month ago

On Colab. I refreshed and the folder did not appear, unfortunately.

aihacker111 commented 1 month ago

Maybe it's an issue with Colab. I just tested again and everything is OK.

piovis2023 commented 1 month ago

Thanks. Will check again.

PiovisTeam commented 1 month ago

@aihacker111 I had the same issue. It didn't produce the 'animations' folder.

PiovisTeam commented 1 month ago

@aihacker111 here is what appears at the bottom once the driving video and source image are run:

Traceback (most recent call last):
  File "/content/Efficient-Live-Portrait/run_live_portrait.py", line 27, in <module>
    main(args.video, args.image, args.run_time, args.real_time, args.half_precision)
  File "/content/Efficient-Live-Portrait/run_live_portrait.py", line 13, in main
    live_portrait = EfficientLivePortrait(use_tensorrt, half_precision, **kwargs)
  File "/content/Efficient-Live-Portrait/LivePortrait/fast_live_portrait_pipeline.py", line 12, in __init__
    super().__init__(use_tensorrt, half, **kwargs)
  File "/content/Efficient-Live-Portrait/LivePortrait/live_portrait/portrait.py", line 12, in __init__
    self.predictor = EfficientLivePortraitPredictor(use_tensorrt, half, **kwargs)
  File "/content/Efficient-Live-Portrait/LivePortrait/commons/predictor.py", line 12, in __init__
    from .utils.tensorrt_driver import TensorRTEngine
  File "/content/Efficient-Live-Portrait/LivePortrait/commons/utils/tensorrt_driver.py", line 2, in <module>
    import pycuda.driver as cuda
  File "/usr/local/lib/python3.10/dist-packages/pycuda/driver.py", line 66, in <module>
    from pycuda._driver import *  # noqa
ImportError: libcuda.so.1: cannot open shared object file: No such file or directory
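That ImportError means libcuda.so.1 is absent. It ships with the NVIDIA GPU driver, not with pip-installed CUDA packages, so on Colab it usually means a CPU-only runtime is selected (Runtime -> Change runtime type -> GPU). A quick diagnostic sketch (the helper name is my own, for illustration):

```python
import ctypes

def cuda_driver_available() -> bool:
    """Return True if libcuda.so.1 (the NVIDIA driver library) can be loaded.

    pycuda's import fails exactly like the traceback above when this
    library is missing, e.g. on a CPU-only Colab runtime.
    """
    try:
        ctypes.CDLL("libcuda.so.1")
        return True
    except OSError:
        return False

print(cuda_driver_available())
```

Running this before the pipeline tells you immediately whether the TensorRT path can work on the current machine.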

piovis2023 commented 1 month ago

@aihacker111 I hope your project paper went well.

Did you make any progress on this? Or on ComfyUI? Thanks.

piovis2023 commented 1 month ago

Hi @aihacker111, I've just come back and tried your updated solution. Great that you got it working. The quality isn't as good as the first time I tried it (without fp16), though.

Did you change something? How can I use the older version again?

Keep up the great work!