jeffreylm opened 1 year ago
All you need to do is enable the setting to switch to integrated python and install the ROCm version of PyTorch manually through a terminal. However, this is currently only supported on Linux, and only on a small percentage of AMD cards, though apparently they're working on making it available for Windows as well.
Also, please don't give me chatgpt advice in a GitHub issue... It told you that using tensorflow was a way to run pytorch which is quite frankly hilarious.
Also, AMD users typically should be using NCNN in chaiNNer instead of pytorch. If you can get ROCm working, that's the best way for sure. But otherwise, just use NCNN. You can convert most of the supported pytorch models to NCNN and run them the same way.
Is there a step-by-step guide I can read on making that conversion? I just discovered chaiNNer yesterday, so that gives you an idea of how new I am to the software. I'm on Mac.
I am using Upscayl, which is blazing fast, but chaiNNer will offer me options to do things I can't with Upscayl.
If I remember correctly, Upscayl is using NCNN model conversions under the hood, which explains why it's fast.
As for how to convert the models: install all the dependencies from the dependency manager, then use the Convert To NCNN node under the PyTorch category to convert the model to NCNN. So your chain would be PyTorch Load Model -> Convert To NCNN -> NCNN Save Model. Then after you have that saved (I do recommend saving them rather than converting each time you want to use them), use the NCNN Load Model node in a separate chain to load the NCNN models. From there, use the NCNN Upscale Image node and whatever other nodes you want.
I am watching a tutorial on grid splitting https://www.youtube.com/watch?v=s1Fhs98cTjI This application is insanely powerful and insanely well-thought-out. Hats off to the creator, and if it is you, hats off to you.
And thank you for the explanation of how to convert.
On a different note, I am teaching myself Python, and I want to learn how to make a GUI that uses nodes the way chaiNNer, Blender, and others do.
Where do I start? I have searched the internet, but I don't know what terms to use to find information on how to build a GUI that uses nodes like chaiNNer.
Thank you in advance
chaiNNer doesn't use Python for the UI. However, I do know of this library you could try using, though I have never used it myself: https://github.com/jchanvfx/NodeGraphQt
Thank you for that link, that is exactly what I needed.
One thing I wanted to know: does chaiNNer do any kind of 72dpi-to-300dpi conversion? I have found that no one really talks about the DPI aspect of scaling. I want to take Midjourney images (72dpi) and convert them to 300dpi.
We don't support saving that kind of metadata. Dpi isn't really something intrinsic to the image, but rather extra metadata on top, and we don't handle metadata.
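For what it's worth, since DPI is just a metadata tag, it can be set outside chaiNNer after upscaling. A minimal sketch with Pillow (assuming it is installed; the filenames are made up). The pixel data is untouched, only the tag changes:

```python
from PIL import Image

# A stand-in for an upscaled result; in practice you would open
# the file chaiNNer wrote, e.g. Image.open("upscaled.png").
img = Image.new("RGB", (64, 64), "white")

# Re-save with a 300 DPI tag. The pixels are not resampled;
# only the metadata changes.
img.save("upscaled_300dpi.png", dpi=(300, 300))

# PNG stores resolution in pixels per metre, so the round-trip
# value may be off by a tiny fraction from exactly 300.
dpi = Image.open("upscaled_300dpi.png").info.get("dpi")
print(dpi)
```

This is the whole trick services charging for "300 DPI conversion" perform: same pixels, different tag.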
Hello Joey,
So I did the steps you said to convert the PyTorch model to NCNN. That went well. Then I took that saved model and set up the NCNN nodes to do an upscale with it. I did it four times, and every time it locked up the Mac. Is there a log file I can send?
Kind regards
Help > open logs folder
Try setting the tile size for the NCNN upscales to 256.
Hello Joey,
I did what you suggested, and the conversion went well. But using the converted NCNN model causes my computer to freeze 100% of the time, and I need a hard reboot to get the computer back. I used two converted PyTorch models, UltraSharp and Real-ESRGAN, with the same end result.
Is there a log of this freeze? I'm on a Mac (Monterey 12.6.5).
See the image below [Screenshot 2023-05-18 at 06.57.29.png]
Kind regards, Jeffrey
I changed the tile size from Auto to 128, and now it works. Can you explain what that feature does? I have a Sapphire AMD RX 580 with 8 GB of VRAM and 32 GB of system memory.
The tile size feature splits the image up into smaller tiles (with some overlap), upscales them individually, and stitches them back together. This is often necessary on cards with really low VRAM. Unfortunately, NCNN kinda sucks and tends to crash the video driver when you run out of memory rather than throwing a catchable error, so that's probably what you were seeing. With it set to 128, you aren't going OOM, and therefore you aren't running into that issue anymore.
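To illustrate the idea, here is a simplified tiling sketch (the names are made up, and a real implementation like chaiNNer's blends the overlap region to hide seams instead of just overwriting it):

```python
import numpy as np

def upscale_tiled(img, upscale, tile=128, overlap=8, scale=4):
    """Split img into overlapping tiles, upscale each, stitch back.

    `upscale` is any function that enlarges an HxWxC array by `scale`.
    This sketch simply overwrites the overlap instead of blending it.
    """
    h, w, c = img.shape
    out = np.zeros((h * scale, w * scale, c), img.dtype)
    step = tile - overlap
    for y in range(0, h, step):
        for x in range(0, w, step):
            patch = img[y:y + tile, x:x + tile]
            up = upscale(patch)  # only `tile`-sized patches hit VRAM
            out[y * scale:y * scale + up.shape[0],
                x * scale:x * scale + up.shape[1]] = up
    return out

# Nearest-neighbour enlargement as a stand-in for a real model:
nearest = lambda p: p.repeat(4, axis=0).repeat(4, axis=1)
img = np.random.rand(200, 300, 3).astype(np.float32)
assert np.array_equal(upscale_tiled(img, nearest), nearest(img))
```

Peak memory now depends on the tile size rather than the full image size, which is why dropping the setting to 128 stopped the crashes.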
I have a Sapphire AMD RX580 8GB RAM
Just saw this. I have no clue why you needed to do that. Just NCNN things, I guess...
I have the tile size at 192 now, and that also works. When set to 512, it freezes the system up, so I am staying at 192 for now. I noticed you have an auto size. Maybe you could have it degrade/lower the tile size after each freeze it experiences and give the user a notice. Maybe even a warning message when someone pulls in the node: if you experience freezes, try lowering the tile size to 128 and test gradually upwards.
Does tiling speed up anything?
Also thank you for the input and fast response.
Maybe you could have it setup to degrade/lower tiles by each freeze it experiences
This is what it's supposed to do, and actually what it does do for me. It seems that most AMD cards don't properly handle the error it throws, and it explodes for the user. There really isn't anything I can do about that with my current knowledge of C++ and Vulkan (I would have to fix that on the NCNN side, and I don't even know if that's possible).
Using smaller tile sizes is slower, but it's better than it not working at all.
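The auto-degrade behaviour described above can be sketched as a simple retry loop (hypothetical names; the catch is that on the affected AMD cards the OOM surfaces as a driver crash rather than the catchable error this loop relies on):

```python
def upscale_with_fallback(img, upscale_at, start_tile=512, floor=32):
    """Try ever-smaller tile sizes until one fits in VRAM.

    `upscale_at(img, tile)` is a stand-in for a real tiled upscale.
    It must raise a catchable error on OOM for this to work, which
    is exactly what NCNN fails to do on the affected cards.
    """
    tile = start_tile
    while tile >= floor:
        try:
            return upscale_at(img, tile)
        except MemoryError:
            tile //= 2  # halve the tile size and retry
    raise RuntimeError("out of memory even at the smallest tile size")

# Simulate a card that can only handle tiles of 128 or less:
def fake_upscale(img, tile):
    if tile > 128:
        raise MemoryError
    return (img, tile)

print(upscale_with_fallback("image", fake_upscale))  # ('image', 128)
```

When the failure is a hard driver crash instead of a raised exception, the loop never gets a chance to run, which is why the manual setting is the workaround.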
When trying to use GPU mode with ROCm (which otherwise works; I'm currently training a model with it), chaiNNer says "TypeError: Failed to fetch". I can't get around this issue; all the dependencies seem to be installed.
Chainner says "TypeError: Failed to fetch".
This simply means that the UI couldn't connect to the backend. So the backend likely crashed, because chaiNNer doesn't support ROCm.
I think ROCm works; someone once said they did get it to work with chaiNNer. But I don't have enough experience with it to know if there's anything special you need to do besides using system Python and installing the right version.
I got it working using this method; it's rough, but it works.
I've got a slightly better approach than the above which works on this end.
NOTE: This works with the current version of chaiNNer as of writing, which requires Torch 2.1.2. Future versions might need a newer version of Torch, and as such the URL needed may change; check the required version of Torch chaiNNer needs, and consult this list to find the correct URL to get that version for ROCm.
Open the settings, go to Advanced, and change Installation Mode to Manual / Copy. Then install the ROCm build of PyTorch from https://download.pytorch.org/whl/rocm5.6
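The exact command isn't given in the thread, but put together from the version and URL mentioned above, the manual install would look something like this (a sketch; the torch version and ROCm index URL will change with future chaiNNer and PyTorch releases):

```shell
# Install the ROCm 5.6 build of PyTorch 2.1.2 into the system Python
# that chaiNNer is pointed at (Manual / Copy installation mode).
pip install torch==2.1.2 --index-url https://download.pytorch.org/whl/rocm5.6
```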
Motivation: macOS as a whole doesn't get to use PyTorch, and your application seems really focused on PyTorch.
Description: I would like you to implement software that allows PyTorch to use AMD GPUs. I found this: https://stackoverflow.com/questions/70258079/set-pytorch-to-run-on-amd-gpu
There must be someone out there who has already solved this issue.
I also asked ChatGPT, and this is what it said:
Running PyTorch on AMD GPUs requires a bit of extra setup as PyTorch was originally designed to work with NVIDIA GPUs. However, there are now several options for running PyTorch on AMD GPUs:
It's worth noting that not all PyTorch features are fully supported on AMD GPUs, and performance may vary depending on the specific AMD GPU and system configuration. Therefore, it's important to check the documentation and system requirements before choosing an option.