Elsaam2y / DINet_optimized

An optimized pipeline for DINet, reducing inference latency by up to 60% 🚀. Kudos to the authors of the original repo for this amazing work.

Optimizing DINet_optimized #21

Closed ajeema closed 9 months ago

ajeema commented 9 months ago

First off, great job on this project. I was averaging about 180 seconds to perform inference on test12. I noticed some optimizations that could be made in wav2vec.py, wav2vecDS.py, and inference.py. I still have more to do on inference.py, but I'm now completing inference in 64 seconds. I'm using an older GPU (RTX 2060), so with a newer GPU this may get closer to real-time. I also added a check for whether frames have already been extracted, bypassing frame extraction if so. This suits my use case, where I want to run inference repeatedly on just a few videos. When skipping extraction, I can now process in 36 seconds. If you're interested, I can share the code.
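The frame-extraction bypass described above could look something like the following minimal sketch. The actual patch isn't shown in the thread, so the function and parameter names here (`extract_frames_if_needed`, `extract_fn`) are illustrative assumptions, not the commenter's code: the idea is simply to reuse an existing frames directory instead of re-decoding the video on every run.

```python
from pathlib import Path

def extract_frames_if_needed(video_path, frames_dir, extract_fn):
    """Run frame extraction only when no frames are cached yet.

    extract_fn(video_path, frames_dir) stands in for whatever routine
    the pipeline uses to dump frames (e.g. an OpenCV read loop);
    the name is hypothetical. Returns the sorted list of frame paths.
    """
    frames_dir = Path(frames_dir)
    frames_dir.mkdir(parents=True, exist_ok=True)
    existing = sorted(frames_dir.glob("*.jpg"))
    if existing:
        # Frames already on disk: skip the expensive extraction step.
        return existing
    extract_fn(video_path, frames_dir)
    return sorted(frames_dir.glob("*.jpg"))
```

On repeated runs over the same video, only the first call pays the extraction cost, which matches the 64 s → 36 s improvement reported above.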

Elsaam2y commented 9 months ago

Many thanks @ajeema. Please feel free to share the code or open a PR.

oijoijcoiejoijce commented 8 months ago

+1