aihacker111 / Efficient-Live-Portrait

Fast running Live Portrait with TensorRT and ONNX models
MIT License

Can it run on an Intel CPU or GPU? #17

Open ziyanxzy opened 3 months ago

ziyanxzy commented 3 months ago

Hi, I see it can run on a Mac M1 CPU, but can it run on an Intel CPU or not?

aihacker111 commented 3 months ago

@ziyanxzy The ONNX models can run on any CPU; the TensorRT models need a GPU.

ziyanxzy commented 3 months ago

Can we use an Intel GPU, e.g. the Arc A770?

aihacker111 commented 3 months ago

@ziyanxzy You can test it, and if you hit any bugs, please let me know.

ziyanxzy commented 3 months ago

How can I set it to run on the CPU?

ziyanxzy commented 3 months ago

Because I find it always runs with TensorRT instead of on the CPU.

aihacker111 commented 3 months ago

@ziyanxzy You have to remove the --run_time flag; it will then run with the ONNX models.
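For reference, a CPU/ONNX invocation would look something like the sketch below. The script name and flags come from a later comment in this thread; the paths are placeholders, and the key point is simply that --run_time is omitted:

```shell
# Sketch: run the pipeline via the ONNX models.
# No --run_time flag, so the TensorRT path is not selected.
python run_live_portrait.py \
  --driving_video path/to/driving.mp4 \
  --source_image path/to/source.jpg \
  --task video
```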

ziyanxzy commented 3 months ago

I did that, but it still cannot run on the CPU.

aihacker111 commented 3 months ago

That is running with onnxruntime-gpu. If you want CPU only, you have to install requirements-cpu.txt.

ziyanxzy commented 3 months ago

Yes, I installed requirements-cpu.txt, but onnxruntime-gpu is listed in the requirements.

aihacker111 commented 3 months ago

@ziyanxzy So sorry, just remove it and replace it with plain onnxruntime. I'll update the file later.

ziyanxzy commented 3 months ago

If I uninstall onnxruntime, I guess it cannot run on the CPU without any code change?

aihacker111 commented 3 months ago

@ziyanxzy onnxruntime is for CPU and onnxruntime-gpu is for GPU. You need to pip uninstall onnxruntime-gpu and then pip install onnxruntime (CPU only).
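The distinction matters because ONNX Runtime picks an execution provider at session creation time. The sketch below shows the selection logic as a plain helper; `CUDAExecutionProvider` and `CPUExecutionProvider` are real ONNX Runtime provider names, but the helper itself is a hypothetical illustration, not code from this repo:

```python
def choose_providers(available):
    """Prefer CUDA when onnxruntime-gpu is installed, otherwise fall
    back to the CPU.  `available` mirrors what
    onnxruntime.get_available_providers() would return."""
    preferred = ["CUDAExecutionProvider", "CPUExecutionProvider"]
    chosen = [p for p in preferred if p in available]
    # The CPU provider is always a safe last resort.
    return chosen or ["CPUExecutionProvider"]

# With the CPU-only onnxruntime package installed, only
# "CPUExecutionProvider" is available, so the session runs on CPU.
```

The list returned here would typically be passed as the `providers=` argument of `onnxruntime.InferenceSession`.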

ziyanxzy commented 3 months ago

Oh, thank you! But where is the output video? I cannot find the output.

aihacker111 commented 3 months ago

@ziyanxzy It's in the animations folder; the script creates it automatically.
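The auto-create behaviour described here is typically just a `makedirs` call before writing the result. A minimal sketch (the helper name and layout are assumptions, not the repo's actual code):

```python
import os

def ensure_output_dir(root, name="animations"):
    """Create the output folder under `root` if it does not exist
    yet and return its path, mirroring the auto-create behaviour
    described above."""
    path = os.path.join(root, name)
    os.makedirs(path, exist_ok=True)  # no error if it already exists
    return path
```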

ziyanxzy commented 3 months ago

Yes, I used this command, but it does not create an animations folder:

python run_live_portrait.py --driving_video "/home/sas/Desktop/valencia/Efficient-Live-Portrait/experiment_examples/examples/driving/d0.mp4" --source_image "/home/sas/Desktop/valencia/Efficient-Live-Portrait/experiment_examples/examples/examples/source/s9.jpg" --task ['video']

aihacker111 commented 3 months ago

@ziyanxzy Have you checked the source code? The folder is generated automatically there. Also, I'm adding a new feature and fixing the requirements.txt file, so you can git clone again.

aihacker111 commented 3 months ago

@ziyanxzy We don't have issues with it on our side. Please capture your project tree for me.

ziyanxzy commented 3 months ago

It still does not generate the output. I used the example driving video and image.

aihacker111 commented 3 months ago

@ziyanxzy Please capture the inside of your Efficient-Live-Portrait folder. Otherwise, you need to check here to make sure the path is right. File: fast_live_portrait_pipeline.py

[Screenshot 2024-07-30 at 20 51 50]

If it runs successfully, it will print the path where the output is saved.

aihacker111 commented 3 months ago
[Screenshots 2024-07-30 at 20 53 12 and 20 53 44]

I'm running on Colab and locally without issues.

ziyanxzy commented 3 months ago

[image]

aihacker111 commented 3 months ago

Show me the log when it finishes running.

aihacker111 commented 3 months ago

@ziyanxzy --task is not a list; it's a single option: image, video, or webcam. Use --task image, not --task ['image'].
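This kind of single-option flag is what argparse's `choices` produces. A hypothetical reconstruction of the relevant part of the CLI (the default value and help text are assumptions, not the repo's actual code):

```python
import argparse

# --task accepts exactly one string from a fixed set, not a list.
parser = argparse.ArgumentParser()
parser.add_argument(
    "--task",
    choices=["image", "video", "webcam"],
    default="video",
    help="what to animate: a single option, not a list",
)

args = parser.parse_args(["--task", "video"])   # accepted
# parser.parse_args(["--task", "['video']"])    # rejected: invalid choice
```

With `choices`, passing the literal string `['video']` makes argparse exit with an "invalid choice" error, which is exactly the failure mode seen above.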

ziyanxzy commented 3 months ago

OK, I see. I read the code and found out that the task is not a list but a str. Now it succeeds! Thank you very much. But I think it's a little easy to get confused, because your example shows a list. ^^

aihacker111 commented 3 months ago

@ziyanxzy Haha, I'll fix it in the README. So sorry, my friend.

aihacker111 commented 3 months ago

@ziyanxzy If you run with TensorRT, please make sure you have the latest Ubuntu and packages. It cannot run on Windows with TensorRT; note this, my friend.

ziyanxzy commented 3 months ago

It doesn't matter. Will it run on Windows in the future? I also want to try running on an Intel iGPU, and to try deployment.

aihacker111 commented 3 months ago

@ziyanxzy If you have Windows, please help me build the TensorRT plugin again on Windows and open a pull request. Thank you.

aihacker111 commented 3 months ago

@ziyanxzy BTW, I don't like Windows much, haha 😂 🤣

ziyanxzy commented 3 months ago

If I use TensorRT it will run on CUDA, but I want to run on the Windows CPU or the iGPU in my PC. I think running on the CPU won't take too long, haha. I don't like Windows either!

aihacker111 commented 3 months ago

@ziyanxzy Cool! Can you send me the measured runtime speed on your computer when you finish? I want to add a validation table to the README to be clear about speed.
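For collecting those numbers, a simple wall-clock wrapper is usually enough. A minimal sketch (the helper is illustrative, not part of the repo):

```python
import time

def timed(fn, *args, **kwargs):
    """Run fn once and return (result, seconds) so the elapsed
    wall-clock time can be reported back for the README table."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - start

# Stand-in workload; in practice fn would be the inference call.
_, seconds = timed(sum, range(1_000_000))
```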

ziyanxzy commented 3 months ago

Let me try on the Windows CPU. I think if we want to run on the iGPU on Windows, we may need to convert the models to OpenVINO. I'll try the Windows CPU first and give you the runtime speed. ^^ Thank you. BTW, why does ONNX run so much faster on CPU than PyTorch? I tried running the PyTorch model on CPU, and it takes a long time!!!

aihacker111 commented 3 months ago

@ziyanxzy Cool. I don't have plans to convert to OpenVINO because I mostly use macOS, but if you convert it successfully, can you open a pull request against my code to add the feature?
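For anyone attempting that conversion: recent OpenVINO releases ship a converter CLI that takes ONNX input directly. A sketch under those assumptions (the model filename here is a placeholder, not one of this repo's actual files):

```shell
# Sketch: convert an exported ONNX model to OpenVINO IR.
# ovc comes with the openvino pip package (2023.1+).
pip install openvino
ovc some_model.onnx --output_model some_model_ov
```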

aihacker111 commented 3 months ago

@ziyanxzy Because ONNX Runtime is built to run on CPU with a C++ backend, and PyTorch is not. PyTorch will take all the cores of your CPU to run, while ONNX Runtime takes only about half the cores.

ziyanxzy commented 3 months ago

I think the inference time on CPU depends on the CPU: on a better CPU it takes about 4 min 49 s, but on a worse one it takes about 12 min 25 s.