Open piovis2023 opened 1 month ago
@piovis2023 Please wait. For the next few days I have to work on my new research paper about a stable diffusion task, so I don't have much time for this. Stay tuned, I'll push some big features in the next few days. Thank you for enjoying it!
You EXCELLENT person.
OK looking forward to it. Good luck with your research paper. Let me know if I can help you with it at all!
@piovis2023 I've just looked at the ComfyUI workflow, so I think I'll build this project as a Python package that you can drop into ComfyUI to run. Details to follow.
Sounds great. I'm excited to test and provide quick feedback and suggestions to help you perfect it!
@piovis2023 Tonight I'll add TensorRT support, and tomorrow a new feature that integrates with SadTalker and the Anything-anypose model. Maybe next week I'll integrate controlnet-open-pose and animate-diff-motion from ByteDance.
Sounds good. I'm playing around with incorporating live portrait with mimic motion at the moment.
I wonder if MusePose would be better. MusePose is just a BIG headache to install on a Windows OS due to its sensitive dependencies.
I heard TensorRT has lots of complications with diffusion models so I uninstalled it.
I encourage you to release a workflow version without TensorRT first. I can experiment and test it for you while you try to add TensorRT if you like.
If you are successful at this, I'll give you some really good ideas on how to make your workflow game-changing!
@piovis2023 TensorRT is much faster, so wait for the latest update. I'm also updating onnxruntime support, and you will be able to use either backend.
@piovis2023 Could you also help me clean up or refactor some parts of the code? I'm tired from coding every day across many projects.
Ah - I'm not a coder BUT I'm happy to help you find some people that might be able to do this.
Would that be helpful for you?
@piovis2023 TensorRT is working. I'm testing it on Colab and will push it at 9:00. You can use my Colab to test it; it's much faster.
Oh! Well done!
OK, I can try Colab for you. I really need ComfyUI though.
@piovis2023 A new version is available, supporting the latest CUDA and TensorRT on Colab. Please try it out in the colab folder notebook.
This is great, I've just run colab. Where would I find the output please?
I've also seen this error; can you help?
Downloaded successfully and already saved
Downloaded successfully and already saved
[07/21/2024-06:50:47] [TRT] [W] Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
2024-07-21 06:50:49.554036004 [W:onnxruntime:, execution_frame.cc:879 VerifyOutputSizes] Expected shape from model of {800,10} does not match actual shape of {512,10} for output 500
2024-07-21 06:50:49.554200532 [W:onnxruntime:, execution_frame.cc:879 VerifyOutputSizes] Expected shape from model of {800,4} does not match actual shape of {512,4} for output 497
2024-07-21 06:50:49.554323000 [W:onnxruntime:, execution_frame.cc:879 VerifyOutputSizes] Expected shape from model of {800,1} does not match actual shape of {512,1} for output 494
2024-07-21 06:50:49.559504514 [W:onnxruntime:, execution_frame.cc:879 VerifyOutputSizes] Expected shape from model of {3200,10} does not match actual shape of {2048,10} for output 477
2024-07-21 06:50:49.559898632 [W:onnxruntime:, execution_frame.cc:879 VerifyOutputSizes] Expected shape from model of {3200,4} does not match actual shape of {2048,4} for output 474
2024-07-21 06:50:49.560249341 [W:onnxruntime:, execution_frame.cc:879 VerifyOutputSizes] Expected shape from model of {3200,1} does not match actual shape of {2048,1} for output 471
2024-07-21 06:50:49.578026405 [W:onnxruntime:, execution_frame.cc:879 VerifyOutputSizes] Expected shape from model of {12800,10} does not match actual shape of {8192,10} for output 454
2024-07-21 06:50:49.579605106 [W:onnxruntime:, execution_frame.cc:879 VerifyOutputSizes] Expected shape from model of {12800,4} does not match actual shape of {8192,4} for output 451
2024-07-21 06:50:49.580930056 [W:onnxruntime:, execution_frame.cc:879 VerifyOutputSizes] Expected shape from model of {12800,1} does not match actual shape of {8192,1} for output 448
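(Side note on those VerifyOutputSizes warnings: the row counts 12800/3200/800 vs. 8192/2048/512 are exactly what an anchor-based face detector would emit at strides 8/16/32 with 2 anchors per cell for a 640×640 vs. a 512×512 input. This is my reading of the log, not confirmed from the project's code, but it suggests the ONNX model was exported with a fixed input resolution while being fed a differently sized image. A quick sketch of the arithmetic, with the strides and anchors-per-cell as assumptions:)

```python
def anchor_counts(input_size, strides=(8, 16, 32), anchors_per_cell=2):
    """Rows produced per detection head for a square input of `input_size`."""
    return [(input_size // s) ** 2 * anchors_per_cell for s in strides]

print(anchor_counts(640))  # [12800, 3200, 800]  -> the "expected" shapes
print(anchor_counts(512))  # [8192, 2048, 512]   -> the "actual" shapes
```

Since both sets of counts are internally consistent, the warnings are likely benign shape re-binding rather than corruption, which would explain why the run still produces output.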
Traceback (most recent call last):
File "/content/Efficient-Live-Portrait/run_live_portrait.py", line 27, in
ffmpeg version 4.2.2-static https://johnvansickle.com/ffmpeg/ Copyright (c) 2000-2019 the FFmpeg developers
built with gcc 8 (Debian 8.3.0-6)
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x6670580] Format mov,mp4,m4a,3gp,3g2,mj2 detected only with low score of 1, misdetection possible!
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x6670580] moov atom not found
/content/Efficient-Live-Portrait/experiment_examples/examples/driving/Lipbite_V2.mp4: Invalid data found when processing input
Which command line did you use to run it? Show me.
@piovis2023 That means your input video can't be opened: [mov,mp4,m4a,3gp,3g2,mj2 @ 0x6670580] moov atom not found /content/Efficient-Live-Portrait/experiment_examples/examples/driving/Lipbite_V2.mp4: Invalid data found when processing input
@piovis2023 Please make sure your video has the .mp4 extension and can be opened on your computer, then upload it to Drive and change the path.
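(A "moov atom not found" error means the MP4's index box is missing, typically from an interrupted upload or a truncated re-encode. If it helps, here is a minimal, hypothetical helper, not part of this project, that scans the top-level ISO BMFF boxes of a file's bytes for a moov atom before you upload:)

```python
import struct

def has_moov_atom(data: bytes) -> bool:
    """Scan top-level MP4 (ISO BMFF) boxes in `data` for a 'moov' atom."""
    pos = 0
    while pos + 8 <= len(data):
        size, box_type = struct.unpack_from(">I4s", data, pos)
        if box_type == b"moov":
            return True
        if size == 1:  # 64-bit extended size follows the 8-byte header
            if pos + 16 > len(data):
                break
            size = struct.unpack_from(">Q", data, pos + 8)[0]
        elif size == 0:  # box extends to end of file; it wasn't moov
            break
        if size < 8:
            break  # malformed box header, stop scanning
        pos += size
    return False

# Usage: has_moov_atom(open("driving.mp4", "rb").read())
```

Note the moov atom often sits at the end of the file unless the encoder wrote it "faststart", so check the whole file rather than just the first few kilobytes.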
Cảm ơn (thank you) @aihacker111 (just realised you are from Vietnam, one of my favourite countries in the world!!!)
Good news = The colab script ran
Bad news = Results were scary!
Our tests work pretty well on the other LivePortrait solutions. Let me know how I can help
Thanks.
@piovis2023 You can get a better result by turning off fp16 and using the original fp32: remove -fp16 from the command line.
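(This trade-off is inherent to the format rather than specific to this project: IEEE 754 half precision keeps only 10 explicit significand bits, roughly 3 decimal digits, so small per-pixel and per-keypoint errors can accumulate through the warping and decoding stages. A quick illustration using Python's built-in half-float pack format:)

```python
import struct

def to_fp16(x: float) -> float:
    """Round-trip a value through IEEE 754 half precision."""
    return struct.unpack("e", struct.pack("e", x))[0]

print(to_fp16(0.1))          # ~0.0999755859375, off by about 2.4e-5
print(to_fp16(1.0) == 1.0)   # True: exact powers of two survive
```

The upside is that fp16 roughly halves memory traffic and runs much faster on tensor cores, which is presumably why it is the default here; fp32 trades that speed back for fidelity.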
@aihacker111 YES!!!
Much much better.
Only a little bit of warping.
Areas of attention (bonus):
We are going to try another test video with the following
It will be interesting to explore the real limitations of your good work!
When could you get this into comfyui? I would really appreciate that!
@piovis2023 Thank you, my friend. I'm taking time to work on the new paper, with the latest update landing tonight. Stay tuned.
@aihacker111 the 'animations' folder no longer appears with the new animations when the script is run. Has anything been updated/changed? It was working before.
@aihacker111 if you run on Colab, the refresh button on the left will show it.
Yeah @aihacker111 I'm having the same issue. The animations folder stopped appearing. I closed my computer and reloaded the same file, and the animations folder didn't come up this time?
Did something change my friend?
Do you run it on Colab or on your computer?
On colab, I refreshed and the folder did not appear unfortunately.
Maybe it's an issue with Colab. I just tested again and everything is OK.
Thanks. Will check again.
@aihacker111 had the same issue. It didn't produce the 'animations' folder.
@aihacker111 here is what's at the bottom once the driver and source image are run:
File "/content/Efficient-Live-Portrait/run_live_portrait.py", line 27, in
@aihacker111 I hope you did well on your project paper.
Did you make any progress on this? Or comfyui? Thanks
Hi @aihacker111 I've just got back and tried your updated solution. Great that you got it working. The quality isn't as good as the first time I tried it (without fp16).
Did you change something? How can I use the older version again?
Keep up the great work!
Do you have any comfyui workflow examples?
There are quite a few variations of Live Portrait for ComfyUI.
This version seems the most promising. I'd like to see an example so I can learn from it.
Thank you