ziyanxzy opened this issue 3 months ago
The ONNX model can run entirely on CPU; the TensorRT model needs a GPU. @ziyanxzy
Can we use an Intel GPU, e.g. an Intel Arc A770?
@ziyanxzy You can test it, and if you hit any bugs, please let me know.
How can I set it to run on CPU?
I find it always runs with TensorRT instead of on the CPU.
@ziyanxzy You have to remove the --run_time flag; it will then run the ONNX model.
I did that, but it still cannot run on CPU.
This runs with onnxruntime-gpu; if you want CPU only, you have to install requirements-cpu.txt.
Yes, I installed requirements-cpu.txt, but onnxruntime-gpu is still listed in the requirements.
@ziyanxzy So sorry, just remove it and replace it with onnxruntime only; I'll update the file later.
If I uninstall onnxruntime-gpu, I guess it cannot run on CPU without any code changes?
@ziyanxzy onnxruntime is for CPU and onnxruntime-gpu is for GPU. You need to pip uninstall onnxruntime-gpu and then install again with pip install onnxruntime (CPU only).
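A minimal sketch of the CPU/GPU split described above. The provider names are the standard ONNX Runtime identifiers; the helper `pick_providers` is hypothetical, not part of this repo:

```python
# Sketch (assumed, not the project's actual code): choosing ONNX Runtime
# execution providers. The plain `onnxruntime` package ships only the CPU
# provider; `onnxruntime-gpu` adds CUDAExecutionProvider.
def pick_providers(use_gpu: bool) -> list:
    if use_gpu:
        # ONNX Runtime falls back to the CPU provider if CUDA is unavailable
        return ["CUDAExecutionProvider", "CPUExecutionProvider"]
    return ["CPUExecutionProvider"]

# Usage with a session (requires onnxruntime installed):
# import onnxruntime as ort
# session = ort.InferenceSession("model.onnx", providers=pick_providers(False))
print(pick_providers(False))  # ['CPUExecutionProvider']
```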
Oh, thank you! But where is the output video? I cannot find the output.
@ziyanxzy It's in the animations folder, which is created automatically.
Yes, I use this command, but it does not create an animations folder: python run_live_portrait.py --driving_video "/home/sas/Desktop/valencia/Efficient-Live-Portrait/experiment_examples/examples/driving/d0.mp4" --source_image "/home/sas/Desktop/valencia/Efficient-Live-Portrait/experiment_examples/examples/examples/source/s9.jpg" --task ['video']
@ziyanxzy Have you checked the source code? The folder is generated automatically there. Also, I'm adding a new feature and fixing the requirements.txt file, so you can git clone again.
@ziyanxzy We don't have issues with it; please capture your project tree for me.
It still does not generate output, and I'm using the example driving video and image.
@ziyanxzy Please capture the inside of your Efficient-Live-Portrait folder. Otherwise, check the file fast_live_portrait_pipeline.py to make sure the output path is correct.
If it runs successfully, it will print the path where the output is saved.
I'm running on Colab and locally without issues.
Show me the log when it finishes running.
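The auto-create-and-print behavior described above can be sketched like this (the folder name "animations" is from the maintainer's earlier reply; the helper name is hypothetical):

```python
import os
import tempfile

# Sketch (assumed, not the repo's actual code): create the output folder
# if it is missing and report the absolute save path.
def ensure_output_dir(base: str) -> str:
    os.makedirs(base, exist_ok=True)  # no error if it already exists
    return os.path.abspath(base)

# Demo in a temporary location so we don't clutter the working directory:
with tempfile.TemporaryDirectory() as tmp:
    out_dir = ensure_output_dir(os.path.join(tmp, "animations"))
    print("Output would be saved to:", out_dir)
```

If no such message is printed, the pipeline most likely exited before reaching the save step, which is why the maintainer asks for the log.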
@ziyanxzy --task is not a list, it's an option: image, video, or webcam. Use --task image, not --task ['image'].
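The point above is how a plain-string option with a fixed set of choices behaves in argparse. This is an assumed sketch of such a parser, not the repo's exact argument definitions:

```python
import argparse

# Sketch: --task is a plain string restricted to three modes. Passing a
# Python-list-looking value like ['video'] would not match any choice.
parser = argparse.ArgumentParser()
parser.add_argument("--task", choices=["image", "video", "webcam"], default="video")

args = parser.parse_args(["--task", "video"])
print(args.task)  # video
```

With `choices` set, argparse itself rejects anything outside the three modes with a clear error message instead of failing later in the pipeline.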
OK, I see. I read the code and found out that the task is not a list but a str. Now it succeeds!!! Thank you very much. But I think it's a little easy to confuse, because your example shows a list. ^^
@ziyanxzy Haha, I'll fix it in the README, so sorry my friend.
@ziyanxzy If you run with TensorRT, please make sure you have the latest Ubuntu and packages; it cannot run on Windows with TensorRT. Please note this, my friend.
It doesn't matter! Will it run on Windows in the future? I also want to try running on an iGPU (Intel), and deployment.
@ziyanxzy If you have Windows, please help me build the plugin for TensorRT again on Windows and open a pull request. Thank you.
@ziyanxzy BTW, I don't like Windows much haha 😂 🤣
If I use TensorRT, it will run on CUDA, but I want to run on the Windows CPU or the iGPU in my PC. Running on CPU shouldn't take too long, I think. Haha, I also don't like Windows!
@ziyanxzy Cool! Can you send me the validated runtime speed on your computer when you're finished? I want to add a validation board to the README to make the speeds clear.
Let me try on the Windows CPU. I think if we want to run on an iGPU on Windows, we may need to convert the model to OpenVINO. I'll try the Windows CPU first and give you the runtime. ^^ Thank you. BTW, why does ONNX run much faster on CPU than PyTorch? I tried running the PyTorch model on CPU, and it takes a long time!!!
@ziyanxzy Cool. I don't plan to convert to OpenVINO because I mostly use macOS, but if you convert it successfully, can you open a pull request on my code to add that feature?
@ziyanxzy Because ONNX is built to run on CPU with a C++ back-end API, and PyTorch is not; PyTorch will take all cores of your CPU to run, while ONNX takes only about half the cores.
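The "half the cores" idea above can be made explicit. This is a hypothetical helper, not code from the repo; the resulting number is the kind of value you could pass to ONNX Runtime's `SessionOptions.intra_op_num_threads`, or to `torch.set_num_threads` to stop PyTorch from saturating every core:

```python
import os

# Sketch (assumed): compute a thread budget of roughly half the logical
# cores, with a floor of 1 so single-core machines still work.
def half_core_threads() -> int:
    cores = os.cpu_count() or 1  # os.cpu_count() can return None
    return max(1, cores // 2)

print("Suggested intra-op threads:", half_core_threads())
# Usage (requires onnxruntime installed):
# import onnxruntime as ort
# opts = ort.SessionOptions()
# opts.intra_op_num_threads = half_core_threads()
# session = ort.InferenceSession("model.onnx", sess_options=opts)
```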
I think the inference time on CPU depends on the CPU: on a better CPU it takes about 4 min 49 s, but on a worse one it takes about 12 min 25 s.
Hi, I see you can run on the Mac M1 CPU, but can it run on an Intel CPU or not?