m1guelpf / auto-subtitle

Automatically generate and overlay subtitles for any video.
MIT License

(GUIDE) Procedure for running on Windows 10/11 for NVidia GPU (AMD impossible for now) #52

codefaux opened this issue 11 months ago

codefaux commented 11 months ago

(AMD users: as of mid-July '23 there is no Windows / WSL option for AMD + PyTorch. See my comment below for details. If someone pings me once it's released, I'll update this to add AMD.)

Hey all. I had to go through a bit of extra work to get this to run on Windows. I noticed a few issues (possibly #30, maybe #29) which are likely to be solved by a clean, isolated, quasi-proper install guide. Here's what worked for me. (I'm not covering installing ffmpeg -- I used Chocolatey as linked from the main post; it was annoying, but I literally just followed the instructions on chocolatey.org's Install page.)

Notes/justifications:

To start, you need Python. I'm using 3.10.6, but most versions should work, so long as they've been around a few weeks. If you don't have it, install it from the official Python downloads page for Windows -- you can also install it via the Microsoft Store, but I'm fundamentally against CLI-from-Store apps for several reasons.
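If you want to confirm your existing Python is recent enough before bothering with a venv, a quick check (the 3.8 floor here is my assumption for Whisper-based tooling; 3.10.6 is what this guide was tested with):

```python
import sys

# Print the interpreter version and bail out loudly if it's clearly too old.
print(sys.version.split()[0])
assert sys.version_info >= (3, 8), "Python looks too old for this stack"
```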

I'll describe the procedure three times: first by description, second by sequence of commands, third by screenshots.

By description:

Open a command prompt. Create your venv directory and have Python populate it. I'm using \venv\auto-subtitle and my Windows drive is C: (as most people's is, I'd imagine) -- I have multiple venv installations, so this layout is convenient for me.

Install torch as indicated here -- this is the PyTorch "Get Started Locally" guide, which describes how to install Torch for locally-run AI workloads. This entire guide pivots around that page.

Torch is the part of the Python stack which does the "AI work" and having a Torch version which supports your GPU is CRITICAL to making these things work. This process works with nearly any application which uses Torch.
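Once Torch is installed, you can check whether it actually sees your GPU with a minimal snippet like this. It's safe to run even before the install (it just reports what it finds), and `torch.cuda.is_available()` also returns True on ROCm builds of Torch:

```python
# Minimal Torch/GPU sanity check; degrades gracefully if torch isn't installed.
def torch_gpu_status():
    try:
        import torch
    except ImportError:
        return "torch not installed"
    if torch.cuda.is_available():  # also True on ROCm builds of torch
        return "GPU OK: " + torch.cuda.get_device_name(0)
    return "torch installed, but no supported GPU visible"

print(torch_gpu_status())
```

If this prints "torch installed, but no supported GPU visible", you most likely installed a CPU-only wheel; redo the install with the index URL from the Get Started Locally page.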

Anyway -- from that page, select Stable, Windows, Pip, Python, CUDA 11.7, and run what "Run this command" tells you.

From there, install/run auto-subtitle. Deactivate the venv if you need the CLI for other Python things; otherwise just close it.

By commands:

FIRST TIME: Start>Run>cmd

mkdir \venv\auto-subtitle
cd \venv\auto-subtitle
python -m venv .
Scripts\Activate.bat
pip3 install torch --index-url https://download.pytorch.org/whl/cu117
pip3 install git+https://github.com/m1guelpf/auto-subtitle.git

Use as directed

TO RUN AGAIN: Start>Run>cmd

\venv\auto-subtitle\Scripts\Activate.bat

Use as directed

TO EXIT THE VENV SO YOU CAN USE YOUR CLI FOR OTHER THINGS:

\venv\auto-subtitle\Scripts\Deactivate.bat

By screenshot:

(screenshots of the three steps above)

EDIT: lol forgot the line which actually installs the git package, GO ME

bluepanda999 commented 11 months ago

Is there an AMD Radeon version of this guide? thanks.

codefaux commented 11 months ago

Is there an AMD Radeon version of this guide? thanks.

Disclaimer -- quoting my guide above:

Anyway -- from that page, select Stable, Windows, Pip, Python, CUDA 11.7, and run what "Run this command" tells you.

For AMD, ROCm is the package which serves the function of CUDA. Selecting ROCm + Windows on the PyTorch Get Started Locally page shows that no ROCm build exists for Windows. In other words, AMD has not finished that software for Windows users.

This extends to WSL, even WSL2. Unfortunately, AMD is far behind Nvidia in this type of workload; you would need to use pure Linux.

For the future: AMD's ROCm 5.6.x preview series (around April '23) supports ROCm on Windows, but PyTorch bindings for 5.4.x are the newest that currently exist. Once ROCm exists for both Windows and PyTorch, the Get Started Locally page will update and show a command; at that point, swap the corresponding command in my guide for the ROCm one. This is the line to replace:

pip3 install torch --index-url https://download.pytorch.org/whl/cu117

bluepanda999 commented 10 months ago

Tried this on Fedora 38 with ROCm beta 5.6, works fine. Might need to wait a bit for the Windows version though.

pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/rocm5.6

codefaux commented 10 months ago

That's correct - the Linux version of ROCm has been known to work (obviously not with this project lol) since the ROCm bindings were first released for PyTorch 1.8 in early 2021.

As a comparison, PyTorch 0.3.0 from 2018 supports NVidia on Windows.

The Windows version of ROCm has not had PyTorch bindings, an announced ETA, or even announced specific plans to support Windows+ROCm+PyTorch, during that two and a half year period. In fact, the Windows version of ROCm hasn't been released at all yet; it's still in beta. The PyTorch people can't really even start working on it until AMD gets around to finalizing Windows support for ROCm.

You will need to wait indefinitely for the Windows version; don't plan on it being short. Regarding Windows, AMD has yet to finish/release:

- a final (non-beta) Windows release of ROCm itself
- PyTorch bindings for Windows ROCm

These things could be almost done, or they could be nowhere. AMD failed to invest resources in the Windows side of this market when it was early and growing. They didn't put a foot in the door at all until recently. They're very far behind, as they undervalued this market considerably.

If you're building a system and care at all about using your GPU for anything "AI", machine learning, or GPGPU related, reconsider AMD or plan to use real Linux. (Real Linux as in not-WSL -- using WSL does not improve this situation. I'm not knocking WSL, I'm a huge fan.)

If you already have a built system and wish to use your AMD GPU for this stuff, your only route now-or-soon is to set up Linux on a second drive, or set up a fast USB boot device with persistent storage and dual-boot. I recommend using the tool Rufus to create said medium, as it supports persistent partition creation.

Good luck!

bluepanda999 commented 10 months ago

Not sure what you meant by "not with this project." To clarify, I was able to get this project working GPU-accelerated (6900 XT) on Fedora 38 with ROCm 5.6, using your guide above, tweaked for Linux. I monitored CPU and GPU usage with radeontop and bpytop: CPU usage was low and GPU usage was 97%. Hopefully this helps others with AMD GPUs.
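Roughly, the Windows command sequence from the guide translates to Linux like this (a sketch: the ~/venv path is my own choice, and the install lines are the ones from this thread, left commented because they pull large wheels):

```shell
# Linux translation of the Windows venv steps
mkdir -p ~/venv/auto-subtitle
cd ~/venv/auto-subtitle
python3 -m venv .
source bin/activate
# Then install, as in the comments above (large downloads, so run these yourself):
#   pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/rocm5.6
#   pip3 install git+https://github.com/m1guelpf/auto-subtitle.git
echo "venv ready: $VIRTUAL_ENV"
```

To leave the venv afterwards, run `deactivate` (no Scripts\ directory on Linux; the activation scripts live in bin/).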