jantic / DeOldify

A Deep Learning based project for colorizing and restoring old images (and video!)
MIT License
17.97k stars · 2.56k forks

Been trying to get this to work for a few months now.. Always get some error attempting to run. Followed instructions. #468

Closed jdc4429 closed 1 year ago

jdc4429 commented 1 year ago

I've been trying to get this to work for a few months now. I always get some error when attempting to run it, despite following the instructions. I created a new Conda DeOldify environment per the instructions.

Please see the attached screenshot of the error: UnpicklingError: invalid load key, '<'.

Deoldify Error

jdc4429 commented 1 year ago

Sorry, I saw the comment about Windows not being supported. I resolved my issue by installing on Ubuntu 22.04. Also, a note for people on Ubuntu: I found the GPU did not work outside the notebook. It would run on the CPU and take around 18 hours! If I ran JupyterLab, then the GPU was recognized. It takes about 62-65 minutes for the same 15-minute video on an RTX 2070.

jantic commented 1 year ago

Thanks for the update. I am curious, though: can you post the script you ran where it used the CPU instead of the GPU? I suspect I may be able to identify the problem quite readily.

jdc4429 commented 1 year ago

I believe it was this script. I made a few. Since I was copying from the notebook, I'm thinking I just missed something, or the notebook does some initialization you don't get from the terminal. Quite frankly, I was just happy I finally got it working. :) Thanks for the great tool. If you're curious as to my use: right now I am colorizing all The Three Stooges episodes and some old B&W movies for my YouTube channel.

You can check it out on my channel at: https://www.youtube.com/channel/UC_KcaNqHfYGedpSdsEmi6tw

The Three Stooges playlist for colorized episodes: https://www.youtube.com/playlist?list=PLJ3OakWWbaTEU193JRdSi2DWKLSrMvSYf

Script attached as yt.txt. I'm curious why it doesn't work on Windows. I did get some Windows error after executing, once I got everything set up correctly. It seems the library is not functioning correctly under Windows?

yt.txt

jantic commented 1 year ago

Awesome that you're doing that project!

Well, I can tell you that you'll definitely need to change the ordering of the imports. This is admittedly tricky. Here are the imports, corrected. Having them ordered incorrectly may be what causes it to run on the CPU instead of the GPU.

from deoldify import device
from deoldify.device_id import DeviceId
#choices:  CPU, GPU0...GPU7
device.set(device=DeviceId.GPU0)
import fastai
from deoldify.visualize import *
import torch
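For context on why the ordering matters, here is a sketch of the underlying mechanism (an assumption about how device.set works, not DeOldify's exact code): selecting a GPU this way comes down to setting the CUDA_VISIBLE_DEVICES environment variable, and CUDA only reads that variable when it is first initialized, which in practice means it must be set before the torch/fastai imports.

```python
import os

def select_gpu(index: int) -> None:
    # Restrict CUDA to one device. This has no effect if CUDA has already
    # been initialized by an earlier torch/fastai import, hence the
    # required import ordering.
    os.environ["CUDA_VISIBLE_DEVICES"] = str(index)

select_gpu(0)
```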

As far as whether it'll work on Windows: it can work, but it's not "officially" supported, meaning enter at your own risk. I've used it on Windows myself as-is.

jdc4429 commented 1 year ago

I would consider it a benefit, not a bug. It takes forever, but someone with just a CPU and no GPU can at least try it if they wish. :) Maybe add a comment to the code letting people know they can run on the CPU. Single-image conversion would be no biggie time-wise, even on a CPU.

Hmm, when I tried to run on Windows, I got an error window popping up from Windows itself, like from a library, not from Python... Maybe I will investigate a little more. I have almost everything working under Ubuntu now anyway, since I was sort of forced to run Ubuntu to do the conversions. If I could just figure out the permissions to get Jellyfin to see my external drives, I would be running everything via Win 10. :)

jdc4429 commented 1 year ago

I have one more question about the code you provided above:

from deoldify import device
from deoldify.device_id import DeviceId
#choices:  CPU, GPU0...GPU7
device.set(device=DeviceId.GPU0)
import fastai
from deoldify.visualize import *
import torch

Can you set multiple GPUs? I actually just ordered a K80 to go along with my RTX 2070.

jdc4429 commented 1 year ago

Hey Jason,

I can't seem to find any documentation on whether DeOldify can run on multiple GPUs. Is it possible to set up DeOldify to run on more than one GPU? Possibly using Lightning?

I also noticed the image conversions seem to have better quality than the video conversions. At times during the videos, a face, the top of a head with a hat, or a hand still shows in gray through some of the frames. Is there anything else I can do to get better video quality? I'm not really seeing any difference when changing from the default setting of 21. It takes quite a bit of time to convert 200 videos, so I would like to get the best speed and quality possible before I spend another 200 hours converting Three Stooges videos.

Regards,

Jeff


jdc4429 commented 1 year ago

Hey Jason,

I ran into another issue. For some reason I can't colorize from a local file instead of a YouTube URL. It runs and creates the colorized version, but then, right when it's done (or sometime after, since the no-audio file is created anyway), it deletes the file and gives an error message stating the file does not exist! It's really annoying, because I have to wait several hours for the colorization before it gives me the error. Looking through the code, I'm not seeing where it gets deleted. I believe the issue is in visualize.py: it checks for the file and then states it's not there, which it isn't, because it was deleted. There are no messages stating the encode failed.

If you need more information, please let me know. I would have to wait for it to fail again to get a copy of the message. Basically, though, it appears to be deleting the file.

Regards,

Jeff


jantic commented 1 year ago

Can you set multiple GPU's? I actually just ordered a K80 to go along with my RTX2070.

I didn't set up the code to do this, unfortunately. You could do it in theory, but it would require extra work and research on your part.

I also noticed the image conversions seem to have better quality than the video conversions.

Yes, this is generally going to be the case due to the design of the models. The image models are going to be more colorful, and the video model is going to be more conservative (a trade-off of interestingness of colors vs. stability).

Not really seeing any difference with changing from the default setting of 21.

There are two ranges to try here, depending on what you're looking for. If you go down to 12-15, you may see better colors; if you go to about 28-30, you'll probably get more stable results.
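To make those ranges concrete, here is a tiny, hypothetical helper (the function and goal names are made up; only the render_factor values come from the advice above):

```python
def suggest_render_factor(goal: str = "default") -> int:
    """Map a quality goal to a render_factor, per the ranges above."""
    ranges = {
        "colorful": (12, 15),  # richer colors, less temporal stability
        "default": (21, 21),   # DeOldify's default
        "stable": (28, 30),    # steadier video, more conservative colors
    }
    low, high = ranges[goal]
    return (low + high) // 2
```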

For some reason I can't colorize from a file locally instead of using a YouTube URL.

You'll want to run the VideoColorizer.ipynb notebook for this in Jupyter with a local installation. That has the ability to point to local files. Specifically, you set source_url to None and follow the convention described in the comment:

#NOTE:  Make source_url None to just read from file at ./video/source/[file_name] directly without modification
source_url=None
file_name = 'DogShy1926'
jdc4429 commented 1 year ago

Thank you for your reply. I kind of figured that would be the case regarding the GPU. I may look into Lightning for multiple GPUs; it's said to be easy to set up. But there are only so many hours in a day, and so many amazing AI projects now. Basically, any black-and-white show that is allowed on YouTube, I'm going to colorize. There are a bunch! It's going to take me a lot of time. Once I get the K80 installed, I will be able to do 3 conversions at a time, so that will speed things up even if I can't run them in parallel.

I have also been downloading some of the image upscalers. Upscayl shows some amazing results. It actually fixes some of the blurriness in the colorized videos (as well as 4x-ing the resolution), but it was only made to process individual frames. Since DeOldify already splits the video into images, I may try adding Upscayl to process each frame after colorization. I did a test on a single frame and it was impressive. Thanks again for such a great project. Unfortunately, it seems I can't attach pictures here to show the difference.