etched-ai / open-oasis

Inference script for Oasis 500M
MIT License

AMD GPU support #4

Open nonetrix opened 3 weeks ago

nonetrix commented 3 weeks ago

I got it working

Run this

pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm6.2

Then remove torch, torchvision, and torchaudio from requirements.txt... Only problem, it's quite slow lol, but that's likely just my GPU (RX 6800)
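The whole sequence can be sketched like this (assuming requirements.txt pins torch, torchvision, and torchaudio; the sed pattern is an illustration, so check your file before deleting lines):

```shell
# Install the ROCm 6.2 builds of the torch packages first:
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm6.2

# Strip the torch lines from requirements.txt so the next step doesn't
# pull the default CUDA wheels back in over the ROCm ones (GNU sed):
sed -i -E '/^(torch|torchvision|torchaudio)([=<>~ ]|$)/d' requirements.txt

# Install everything else the repo needs:
pip install -r requirements.txt
```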

open-oasis on master [!?] via 🐍 v3.12.7 (venv)
❯ python generate.py
/home/noah/Documents/AI/open-oasis/generate.py:17: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
  ckpt = torch.load("oasis500m.pt")
/home/noah/Documents/AI/open-oasis/generate.py:23: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
  vae_ckpt = torch.load("vit-l-20.pt")
/home/noah/Documents/AI/open-oasis/generate.py:42: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
  actions = one_hot_actions(torch.load(actions_path))
/home/noah/Documents/AI/open-oasis/venv/lib/python3.12/site-packages/torch/nn/modules/linear.py:125: UserWarning: Attempting to use hipBLASLt on an unsupported architecture! Overriding blas backend to hipblas (Triggered internally at ../aten/src/ATen/Context.cpp:296.)
  return F.linear(input, self.weight, self.bias)
 29%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Œ                                                         | 9/31 [02:27<06:58, 19.01s/it]
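As a side note, the FutureWarnings above come from generate.py calling torch.load without weights_only=True. If the .pt files are plain state dicts, a hypothetical one-liner can patch the calls (back up generate.py first — this is a sketch, not something the repo ships):

```shell
# Hypothetical patch: append weights_only=True to every single-argument
# torch.load(...) call in generate.py. Only valid if the checkpoints are
# plain tensors / state dicts; back up the file before running this.
sed -i 's/torch\.load(\([^)]*\))/torch.load(\1, weights_only=True)/g' generate.py
```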

Could an official requirements_amd.txt or something be made?



https://github.com/user-attachments/assets/86655cd1-e7d4-45d4-998e-6c4cfeb3379e

nonetrix commented 3 weeks ago

If someone wants to, they can experiment with Intel Arc/OpenVINO and DirectML; it seems easy to just replace the dependencies, and it just works*

tehfizla commented 2 weeks ago

Can you make a more in-depth tutorial on how to fix this?

nonetrix commented 2 weeks ago

I already did. Run the first command, remove the torch lines from requirements.txt, then install the remaining dependencies with pip install -r requirements.txt. This assumes you run Linux; on Windows things are much less developed (but more stable, in my experience... can't have the best of both worlds, seemingly)

If you are on Windows, WSL might work? But I honestly haven't tried AMD ROCm (AMD's compute software) in WSL; I just know NVIDIA CUDA (NVIDIA's equivalent) works there. Your best bet would be trying to set up Microsoft DirectML (Microsoft's cross-GPU equivalent), but you're on your own with that. Searching for "DirectML PyTorch install" would probably point you in the right direction
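The DirectML route might roughly look like this (a sketch, untested here; torch-directml is Microsoft's PyTorch backend package, and generate.py would still need its device selection pointed at the DirectML device):

```shell
# Rough DirectML sketch (Windows or WSL, untested here):
pip install torch-directml

# Quick check that the backend sees a device:
python -c "import torch_directml; print(torch_directml.device())"
```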

Finally, you might want to either use Conda or python -m venv venv plus source ./venv/bin/activate to create an isolated environment, so the things you install don't cause issues in the future (on Arch Linux this is all but mandatory unless you override it). Do this before following the above steps, and source the file each time you want to use this AI
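The venv route on Linux, for example, might look like:

```shell
# One-time setup: create an isolated environment in ./venv
python3 -m venv venv

# Activate it (re-run this in every new shell before using the AI):
source ./venv/bin/activate

# ...then run the install steps above inside this environment.
```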

If you have any issues doing that, I don't mind helping if you explain in depth what confuses you or what issues you hit. If you want, I could perhaps make a video explaining it, but I think that should be enough. Regardless, best of luck :)