mjtorn opened this issue 2 years ago
For what it's worth, I advanced a bit, and realized those models are old. https://github.com/xinntao/BasicSR/blob/master/docs/ModelZoo.md has a link that leads to https://drive.google.com/drive/folders/11EDB_YuHQmcbUFm2GE0xcekvZ2b7YOT- (which will probably be an invalid link at some point, if the pre-trained models are updated).
Unexpected key(s) in state_dict: "body.6.rdb1.conv1.weight", "body.6.rdb1.conv1.bias", ...
is the new issue, but I think there's something about that to be found online, so unless you have a quick solution, I'll try to find a fix.
I just hope I'm not completely off the mark here.
You should use the anime-specific model, that's what I use as well: https://github.com/xinntao/Real-ESRGAN/blob/master/docs/anime_model.md. Or, if you want to use the command line:
wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth
However, any model should work. Thanks for bringing it up, I'll try to find out what's going wrong.
Currently this only works with the anime-specific model found at https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth
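For anyone hitting the same state_dict mismatch: the "Unexpected key(s)" error above is what you get when the number of RRDB blocks in the network doesn't match the checkpoint. A minimal loading sketch, assuming the basicsr RRDBNet architecture that realesrgan builds on (num_block=6 matches the anime model; the general x4plus model has 23 blocks):

```python
import torch
from basicsr.archs.rrdbnet_arch import RRDBNet

# RealESRGAN_x4plus_anime_6B is a 6-block RRDBNet; loading a 23-block
# checkpoint into it is exactly what raises "Unexpected key(s): body.6...".
model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64,
                num_block=6, num_grow_ch=32, scale=4)

ckpt = torch.load('RealESRGAN_x4plus_anime_6B.pth', map_location='cpu')
# Real-ESRGAN release checkpoints wrap the weights under 'params_ema' or
# 'params'; fall back to the raw dict for old-style bare state dicts.
state_dict = ckpt.get('params_ema', ckpt.get('params', ckpt))
model.load_state_dict(state_dict, strict=True)
```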
It works!
The machine I'm running it on is slow as balls at this, which makes me think maybe I should actually check whether my laptop has GPU superpowers and would be less slow, though I'm also not in a real hurry.
I wonder what the results will look like, because it's not strictly anime I'm upscaling. It's this kids'/family cartoon show (drawn in Japan!) Alfred J. Kwak, which probably no one recognizes (unless they're Dutch or Finnish and of a specific age). The DVDs are highly collectible despite the near-unwatchable mastering quality. HandbrakeCLI -5 -8 does a decent job of trying to clean it up, combined with some options I use on trust and never really investigated.
It's this stuff https://gitlab.com/mjtorn/dvdwhine/-/blob/master/dvdwhine.py
Anyway, thanks for the help, and it would be nice to be able to try out other models if this one doesn't succeed :) I think anything with AI will be better, though Anime4k didn't really know what to do with it.
Thanks!
I need to figure out how the models work lol, cuz I just used the real-esrgan library from pip. I'm guessing the model from the other repo doesn't match what the library requires, not exactly sure though. And doing this without a GPU is quite unrealistic; even with an RTX 3080 it took a solid 10 hrs to upscale a 25 min anime from 360p to 1440p. With a CPU it would have taken days.
It would be interesting to hear your findings :)
And, yeah, of course CPU is a no-go beyond this experiment.
Still, my big problem is that the video turned out to be not so much a video as a black screen. That's likely not due to using the CPU, as the individual upscaled frames are actual pictures.
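For what it's worth, a common cause of exactly this symptom with OpenCV is the recombination step rather than the upscaling itself: cv2.VideoWriter silently drops frames whose size doesn't match the size it was constructed with, giving a black or empty file. A sketch of that check, with hypothetical paths, assuming the frames are stitched back together roughly like this:

```python
import cv2
import glob

# Hypothetical frame directory produced by the upscale step.
frames = sorted(glob.glob('upscaled_frames/*.png'))
h, w = cv2.imread(frames[0]).shape[:2]

# If (w, h) doesn't match every single frame, the writer drops them
# silently and the resulting video plays as a black screen.
out = cv2.VideoWriter('out.mp4', cv2.VideoWriter_fourcc(*'mp4v'), 25.0, (w, h))
for f in frames:
    img = cv2.imread(f)
    assert img.shape[:2] == (h, w), f'size mismatch in {f}'
    out.write(img)
out.release()
```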
The price and availability of RTXes (or pretty much anything else) being what they are, if I first got to see my output video and decided to pursue upscaling this obscure kids'/family show from ages ago, I might just run anime_upscaler on some rented system.
It'll take a while for me to have time to investigate, but I'll let you know if I find anything on why the video didn't work out, and please drop a line here if you get something on the models!
PS. Some of these target resolutions are crazy. I'd be perfectly content with full HD, but maybe downscaling after the fact can also serve as noise reduction. Maybe. Maybe we'll see.
Cheers!
Is there GPU support for Apple silicon?
Depends on your PyTorch version; I believe if torch.cuda.is_available() is True, it should work.
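For reference, recent PyTorch exposes a separate check for Apple's Metal backend (added in PyTorch 1.12), so you can see at a glance which one you actually have:

```python
import torch

print(torch.cuda.is_available())          # NVIDIA/CUDA; always False on Apple silicon
print(torch.backends.mps.is_available())  # Metal (MPS) backend, PyTorch >= 1.12
```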
Seems like it's not exactly CUDA. I can look into making it compatible with Apple silicon; there seems to be documentation here: https://developer.apple.com/metal/pytorch/
Thanks for the quick reply. I've tried the steps at that link, but it's still not working correctly. For me there's a bottleneck at the upscale step, specifically in upscale_slice. I left a 20 min SD file converting overnight on an M1 MacBook Pro and it only got to 4%.
With a 3080 Ti it takes like 8 hrs for a 20 min anime episode; if it's using the CPU it'll take super long (if I remember correctly, upscaling a 5-second clip on CPU took almost an hour, with 8 cores / 16 threads).
I'll have to look into implementing Apple silicon support; gotta find a laptop with it first and try.
I'm willing to help benchmark the scripts on Apple silicon. I don't have a strong base in Python, but I've worked with OpenCV before.
OK, so I made a branch that checks for the mps device as well and should use that if it's available (I think). Give it a look and let me know if it works. The branch name is macos_gpu.
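Roughly it does something like this (a sketch from memory; the branch may differ in the details):

```python
import torch

def pick_device() -> torch.device:
    """Prefer CUDA, then Apple's Metal (MPS) backend, then fall back to CPU."""
    if torch.cuda.is_available():
        return torch.device('cuda')
    if getattr(torch.backends, 'mps', None) and torch.backends.mps.is_available():
        return torch.device('mps')
    return torch.device('cpu')

print(pick_device())  # the model and input tensors get moved there with .to(...)
```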
First off, I'm completely new to all of this, just playing around, so bear with me.
I was trying
python anime_upscaler.py -m ~/src/ESRGAN_models/RRDB_ESRGAN_x4.pth [and the rest]
, which I took to be one of the pre-trained models from https://github.com/xinntao/BasicSR/issues/253 - or https://drive.google.com/drive/folders/17VYV_SoZZesU6mbxz2dMAIccSSlqLecY as linked in the BasicSR issue. Is this at all correct? I doubt it is, because it complains about missing params, so I wonder if this script requires something in a completely different format?
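For what it's worth, my guess is that the older ESRGAN checkpoints from that Drive folder are saved as a bare state dict, while BasicSR / Real-ESRGAN checkpoints wrap the weights under a 'params' (or 'params_ema') key, which would explain the complaint about missing params. A quick way to check what a given .pth actually contains:

```python
import torch

ckpt = torch.load('RRDB_ESRGAN_x4.pth', map_location='cpu')
# A wrapped checkpoint prints something like ['params'] or ['params_ema'];
# a bare state dict prints layer names instead.
print(list(ckpt.keys())[:5])
```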