Here is an image that shows the error.
I was able to partially resolve this by placing the models in the models directory under the root DeOldify folder (a quick check of that layout is sketched below).
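Roughly what I used to confirm the files are where the loader expects them; the layout and the stable weights file name are my assumptions based on the repo's conventions:

```python
from pathlib import Path

# Assumed layout: weight files live in ./models under the DeOldify repo root.
root = Path('.')  # path to the DeOldify checkout
for name in ('ColorizeArtistic_gen.pth', 'ColorizeStable_gen.pth'):
    weights = root / 'models' / name
    if weights.exists():
        print(f'{weights}: {weights.stat().st_size / 1e6:.0f} MB')
    else:
        print(f'{weights}: missing')
```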
But now I am facing another issue when running the colorizer = get_image_colorizer(artistic=True) cell:
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-9-d41e8163fe4e> in <module>
----> 1 colorizer = get_image_colorizer(artistic=True)
/mnt/_500GB/Work/Personal/DeOldify/deoldify/visualize.py in get_image_colorizer(root_folder, render_factor, artistic)
395 ) -> ModelImageVisualizer:
396 if artistic:
--> 397 return get_artistic_image_colorizer(root_folder=root_folder, render_factor=render_factor)
398 else:
399 return get_stable_image_colorizer(root_folder=root_folder, render_factor=render_factor)
/mnt/_500GB/Work/Personal/DeOldify/deoldify/visualize.py in get_artistic_image_colorizer(root_folder, weights_name, results_dir, render_factor)
418 render_factor: int = 35
419 ) -> ModelImageVisualizer:
--> 420 learn = gen_inference_deep(root_folder=root_folder, weights_name=weights_name)
421 filtr = MasterFilter([ColorizerFilter(learn=learn)], render_factor=render_factor)
422 vis = ModelImageVisualizer(filtr, results_dir=results_dir)
/mnt/_500GB/Work/Personal/DeOldify/deoldify/generators.py in gen_inference_deep(root_folder, weights_name, arch, nf_factor)
86 )
87 learn.path = root_folder
---> 88 learn.load(weights_name)
89 learn.model.eval()
90 return learn
~/miniconda3/envs/deoldify/lib/python3.7/site-packages/fastai/basic_train.py in load(self, file, device, strict, with_opt, purge, remove_module)
275 if with_opt: warn("Saved filed doesn't contain an optimizer state.")
276 if remove_module: state = remove_module_load(state)
--> 277 get_model(self.model).load_state_dict(state, strict=strict)
278 del state
279 gc.collect()
~/miniconda3/envs/deoldify/lib/python3.7/site-packages/torch/nn/modules/module.py in load_state_dict(self, state_dict, strict)
752 load(child, prefix + name + '.')
753
--> 754 load(self)
755
756 if strict:
~/miniconda3/envs/deoldify/lib/python3.7/site-packages/torch/nn/modules/module.py in load(module, prefix)
750 for name, child in module._modules.items():
751 if child is not None:
--> 752 load(child, prefix + name + '.')
753
754 load(self)
~/miniconda3/envs/deoldify/lib/python3.7/site-packages/torch/nn/modules/module.py in load(module, prefix)
750 for name, child in module._modules.items():
751 if child is not None:
--> 752 load(child, prefix + name + '.')
753
754 load(self)
~/miniconda3/envs/deoldify/lib/python3.7/site-packages/torch/nn/modules/module.py in load(module, prefix)
750 for name, child in module._modules.items():
751 if child is not None:
--> 752 load(child, prefix + name + '.')
753
754 load(self)
~/miniconda3/envs/deoldify/lib/python3.7/site-packages/torch/nn/modules/module.py in load(module, prefix)
750 for name, child in module._modules.items():
751 if child is not None:
--> 752 load(child, prefix + name + '.')
753
754 load(self)
~/miniconda3/envs/deoldify/lib/python3.7/site-packages/torch/nn/modules/module.py in load(module, prefix)
747 local_metadata = {} if metadata is None else metadata.get(prefix[:-1], {})
748 module._load_from_state_dict(
--> 749 state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys, error_msgs)
750 for name, child in module._modules.items():
751 if child is not None:
~/miniconda3/envs/deoldify/lib/python3.7/site-packages/torch/nn/modules/module.py in _load_from_state_dict(self, state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys, error_msgs)
679 """
680 for hook in self._load_state_dict_pre_hooks.values():
--> 681 hook(state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys, error_msgs)
682
683 local_name_params = itertools.chain(self._parameters.items(), self._buffers.items())
~/miniconda3/envs/deoldify/lib/python3.7/site-packages/torch/nn/utils/spectral_norm.py in __call__(self, state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys, error_msgs)
164 if version is None or version < 1:
165 with torch.no_grad():
--> 166 weight_orig = state_dict[prefix + fn.name + '_orig']
167 weight = state_dict.pop(prefix + fn.name)
168 sigma = (weight_orig / weight).mean()
KeyError: 'layers.3.0.0.weight_orig'
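From poking around, the KeyError seems to mean the .pth file on disk is not the checkpoint the loader expects. This is roughly how I inspected what is actually inside the file (the path and the 'model'-key handling are assumptions on my part, not the project's official loading code):

```python
import torch

# Hypothetical path; adjust to wherever your models folder is.
ckpt = torch.load('models/ColorizeArtistic_gen.pth', map_location='cpu')

# fastai checkpoints are either a plain state dict or a dict holding one
# under a 'model' key; handle both.
state = ckpt['model'] if isinstance(ckpt, dict) and 'model' in ckpt else ckpt
print(type(ckpt), 'with', len(state), 'entries')

# The loader is looking for spectral-norm keys like 'layers.3.0.0.weight_orig'.
print([k for k in state if k.endswith('weight_orig')][:5])
```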
Have you encountered this error too?
It looks like JupyterLab is downloading the wrong/old files: the file it downloaded is ~83 MB, but the file downloaded in Colab from https://data.deepai.org/deoldify/ColorizeArtistic_gen.pth is ~243 MB.
Downloading ColorizeArtistic_gen.pth manually and placing it in the models folder fixed the errors. Everything is working now.
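For anyone else hitting this, a rough sketch of that manual download and size check (URL as posted above; the ~243 MB figure is just what I observed, not an official checksum):

```python
import urllib.request
from pathlib import Path

url = 'https://data.deepai.org/deoldify/ColorizeArtistic_gen.pth'
dest = Path('models') / 'ColorizeArtistic_gen.pth'  # under the DeOldify repo root
dest.parent.mkdir(exist_ok=True)

urllib.request.urlretrieve(url, dest)

# The good download was roughly 243 MB; the bad one was only ~83 MB.
size_mb = dest.stat().st_size / 1e6
print(f'{dest}: {size_mb:.0f} MB')
assert size_mb > 200, 'file looks too small, probably the wrong/old weights'
```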
EDIT: Just realised this thread is about the Colab with artistic=False. I got the same error as the OP in JupyterLab. Maybe the Colab version is downloading the wrong files for the Stable model?
It would be a lot easier to have a graphical UI version. Since I would like to run this on an offline server, a UI would make that much simpler for me. I would actually pay if there were a UI version of this cool AI. For now I'm using the online version, but I hope to be able to use it offline as standalone software.
@Archviz360 That's simply way beyond the scope of what we're willing to provide for free in this open source project. There is the MyHeritage version, which is paid and much more advanced, and more generally speaking, any of our efforts to bring this to users in a friendly way are going to be in that direction.
For convenience, I'll repost what we have in the readme at the bottom concerning our stance:
A Statement on Open Source Support
We believe that open source has done a lot of good for the world. After all, DeOldify simply wouldn't exist without it. But we also believe that there needs to be boundaries on just how much is reasonable to be expected from an open source project maintained by just two developers.
Our stance is that we're providing the code and documentation on research that we believe is beneficial to the world. What we have provided are novel takes on colorization, GANs, and video that are hopefully somewhat friendly for developers and researchers to learn from and adopt. This is the culmination of well over a year of continuous work, free for you. What wasn't free was shouldered by us, the developers. We left our jobs, bought expensive GPUs, and had huge electric bills as a result of dedicating ourselves to this.
What we haven't provided here is a ready to use free "product" or "app", and we don't ever intend on providing that. It's going to remain a Linux based project without Windows support, coded in Python, and requiring people to have some extra technical background to be comfortable using it. Others have stepped in with their own apps made with DeOldify, some paid and some free, which is what we want! We're instead focusing on what we believe we can do best- making better commercial models that people will pay for. Does that mean you're not getting the very best for free? Of course. We simply don't believe that we're obligated to provide that, nor is it feasible! We compete on research and sell that. Not a GUI or web service that wraps said research- that part isn't something we're going to be great at anyways. We're not about to shoot ourselves in the foot by giving away our actual competitive advantage for free, quite frankly.
We're also not willing to go down the rabbit hole of providing endless, open ended and personalized support on this open source project. Our position is this: If you have the proper background and resources, the project provides more than enough to get you started. We know this because we've seen plenty of people using it and making money off of their own projects with it.
Thus, if you have an issue come up and it happens to be an actual bug that having it be fixed will benefit users generally, then great- that's something we'll be happy to look into.
In contrast, if you're asking about something that really amounts to asking for personalized and time consuming support that won't benefit anybody else, we're not going to help. It's simply not in our interest to do that. We have bills to pay, after all. And if you're asking for help on something that can already be derived from the documentation or code? That's simply annoying, and we're not going to pretend to be ok with that.
As far as using the Colab for artistic=False, that's the "stable" model and there's a separate Colab for that called ImageColorizerColabStable. Link to Colab
I'm a bit confused about what you're trying to do by running it locally, but you can just use that link and run it on Google's servers. They do have the option to run it locally too via "connect to local runtime", but I honestly haven't tried that (never needed to).
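For reference, running the stable model locally is roughly the same few cells as the notebook; a sketch along these lines (the input path is just a placeholder, so double-check the calls against your checkout):

```python
# Mirrors the first cells of the DeOldify notebooks; the device must be set
# before the other deoldify imports.
from deoldify import device
from deoldify.device_id import DeviceId
device.set(device=DeviceId.GPU0)  # DeviceId.CPU if there is no GPU

from deoldify.visualize import get_image_colorizer

colorizer = get_image_colorizer(artistic=False)  # the "stable" model
# Placeholder input path; 35 is the default render_factor in get_image_colorizer.
colorizer.plot_transformed_image('test_images/example.jpg', render_factor=35)
```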
I have tried to configure my Windows machine so I can run Colab connected to it, but when I run the final setup step, colorizer = get_image_colorizer(artistic=False), I get this error:
Traceback (most recent call last)