Hello, thanks for your suggestions.
By default, I only uploaded the static ffmpeg executable for Linux. If you want to run the project on Windows, download the Windows ffmpeg build and put it in the project folder, then replace every occurrence of 'ffmpeg_lib/ffmpeg' in inference.py with the path to your local ffmpeg executable (for example, ffmpeg-master-latest-win64-gpl\bin\ffmpeg.exe). The Windows build of ffmpeg can be downloaded here: https://github.com/BtbN/FFmpeg-Builds/releases/download/latest/ffmpeg-master-latest-win64-gpl.zip
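For illustration, a minimal sketch of what the edited command might look like after the path swap; `FFMPEG_BIN` is just an illustrative variable name, and the command list mirrors the `ffmpeg_input` that inference.py prints when it starts:

```python
# Illustrative sketch only: swap the bundled 'ffmpeg_lib/ffmpeg' path for your local
# Windows build. FFMPEG_BIN is a made-up variable name; the command list mirrors the
# ffmpeg_input printed by inference.py.
FFMPEG_BIN = r"ffmpeg-master-latest-win64-gpl\bin\ffmpeg.exe"  # path to your local ffmpeg.exe

ffmpeg_input = [
    FFMPEG_BIN, "-i", "assets/inputdemovideo.mp4",
    "-f", "image2pipe", "-pix_fmt", "rgb24", "-vcodec", "rawvideo", "-",
]
```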
If you have any questions, please tell me. (^_^)
How to install ffmpeg on Windows is shown here: https://youtu.be/-NjNy7afOQ0
12:34 How to check if FFmpeg is installed and set up on Windows
12:45 How to install and set up FFmpeg on Windows
13:40 How to add the downloaded FFmpeg exes to the system environment variables path
14:18 In which order the system searches the variables path, and which Python version gets used
14:36 How to verify whether your FFmpeg setup is working correctly
You don't need to use the environment variable; I recommend pointing inference.py at the absolute path to ffmpeg.exe directly.
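As a quick sanity check (a sketch with a placeholder install location), you can launch that absolute path once before running inference:

```python
import subprocess

# Sketch: verify the absolute path actually starts ffmpeg. The path below is a
# placeholder; use wherever you unpacked the Windows build.
FFMPEG_BIN = r"D:\tools\ffmpeg-master-latest-win64-gpl\bin\ffmpeg.exe"
subprocess.run([FFMPEG_BIN, "-version"], check=True)
```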
How long does it take to infer this example on a 4090 GPU on Windows?
It takes 21 seconds to infer this 3-second, 30 fps example. I abandoned the pipe on Windows and modified the pipeline to use plain OpenCV video reading and writing.
```python
import cv2

def process_video_opencv(input_path, output_path, model):
    cap = cv2.VideoCapture(input_path)
    if not cap.isOpened():
        print("Error opening video file")
        return

    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    fps = cap.get(cv2.CAP_PROP_FPS)

    fourcc = cv2.VideoWriter_fourcc(*'mp4v')
    out = cv2.VideoWriter(output_path, fourcc, fps, (width, height))

    frame_buffer = []
    while True:
        ret, frame = cap.read()
        if ret:
            frame_buffer.append(frame)
            # Run the network on a sliding window of 3 consecutive frames.
            # apply_net_to_frames comes from the rest of inference.py.
            if len(frame_buffer) == 3:
                processed_frame = apply_net_to_frames(frame_buffer, model)
                out.write(processed_frame)
                frame_buffer.pop(0)
        else:
            break

    # Pad the final window by repeating the last frame so the tail of the video is also processed.
    if len(frame_buffer) == 2:
        frame_buffer.append(frame_buffer[-1])
        processed_frame = apply_net_to_frames(frame_buffer, model)
        out.write(processed_frame)

    cap.release()
    out.release()
```
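A minimal usage sketch, assuming the model has already been built the same way inference.py does it (the `load_model` helper and the output path below are hypothetical placeholders):

```python
# Usage sketch: run the OpenCV-based pipeline on the demo clip.
# load_model() is a hypothetical placeholder for the model setup in inference.py,
# and the output path is just an example.
model = load_model()
process_video_opencv("assets/inputdemovideo.mp4", "assets/output_demo.mp4", model)
```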
This inference script can solve it: https://github.com/kepengxu/PGTFormer/blob/main/easy_inference_in_windows.py
```
D:\AI\Anaconda3\envs\pgtformer02Py3.10\python.exe D:\AI\CV\Face\PGTFormer\inference.py
D:\AI\Anaconda3\envs\pgtformer02Py3.10\lib\site-packages\torch\functional.py:512: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ..\aten\src\ATen\native\TensorShape.cpp:3588.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
Working with z of shape (1, 256, 32, 32) = 262144 dimensions.
ffmpeg_input= ['ffmpeg', '-i', 'assets/inputdemovideo.mp4', '-f', 'image2pipe', '-pix_fmt', 'rgb24', '-vcodec', 'rawvideo', '-']
Traceback (most recent call last):
  File "D:\AI\CV\Face\PGTFormer\inference.py", line 165, in <module>
    process_video_ffmpeg(args.input_video, args.output_video, model)
  File "D:\AI\CV\Face\PGTFormer\inference.py", line 42, in process_video_ffmpeg
    pipe_out = subprocess.Popen(ffmpeg_output, stdin=subprocess.PIPE, stderr=subprocess.PIPE)
  File "D:\AI\Anaconda3\envs\pgtformer02Py3.10\lib\subprocess.py", line 971, in __init__
    self._execute_child(args, executable, preexec_fn, close_fds,
  File "D:\AI\Anaconda3\envs\pgtformer02Py3.10\lib\subprocess.py", line 1456, in _execute_child
    hp, ht, pid, tid = _winapi.CreateProcess(executable, args,
FileNotFoundError: [WinError 2] The system cannot find the file specified.
```
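The [WinError 2] above just means Windows cannot find the plain 'ffmpeg' name used in ffmpeg_input, so subprocess.Popen fails before the pipe is even created. A small hedged sketch of a pre-flight check (the path is a placeholder) that fails with a clearer message:

```python
import os
import sys

# Sketch: check the ffmpeg binary exists before spawning it, so a wrong path fails
# with a readable message instead of "FileNotFoundError: [WinError 2]".
FFMPEG_BIN = r"ffmpeg-master-latest-win64-gpl\bin\ffmpeg.exe"  # placeholder; use your local path

if not os.path.isfile(FFMPEG_BIN):
    sys.exit(f"ffmpeg.exe not found at {FFMPEG_BIN}; edit the ffmpeg path in inference.py")
```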