My12123 opened 1 year ago
@My12123 I think I found a possible solution: add --no_dropout to the manga colorization. The question is how... if you manage that, you avoid the problems with the architecture.
If you get the above errors when loading the generator during test time, you probably have used different network configurations for training and test. There are a few things to check: (1) the network architecture --netG: you will get an error if you use --netG unet256 during training and --netG resnet_6blocks during test. Make sure that the flag is the same. (2) the normalization parameters --norm: we use different default --norm parameters for --model cycle_gan, --model pix2pix, and --model test. They might be different from the one you used at training time. Make sure that you add the --norm flag in your test command. (3) If you use dropout during training, make sure that you use the same dropout setting in your test. Check the flag --no_dropout.

Note that we use different default generators, normalization, and dropout options for different models. The model file can overwrite the default arguments and add new arguments. For example, this line adds and changes default arguments for pix2pix. For CycleGAN, the default is --netG resnet_9blocks --no_dropout --norm instance --dataset_mode unaligned. For pix2pix, the default is --netG unet_256 --norm batch --dataset_mode aligned. For model testing with single direction (--model test), the default is --netG resnet_9blocks --norm instance --dataset_mode single. To make sure that your training and test follow the same setting, you are encouraged to explicitly specify --netG, --norm, --dataset_mode, and --no_dropout (or not) in your script.
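A minimal way to double-check which generator settings a saved checkpoint expects is to look at its parameter names and shapes before running the test. This is only a sketch: it assumes PyTorch and uses a placeholder checkpoint path, which you would replace with your own run.

```python
# Minimal sketch, assuming PyTorch and a saved generator checkpoint;
# the path below is a placeholder, not one from this thread.
import torch

state_dict = torch.load("checkpoints/my_experiment/latest_net_G.pth",
                        map_location="cpu")

# Print a few parameter names and shapes. Comparing them against the keys of
# the network built at test time (with your --netG / --norm / --no_dropout
# choices) shows immediately whether the settings match the training run.
for name, tensor in list(state_dict.items())[:15]:
    print(name, tuple(tensor.shape))
```

If the printed keys or shapes don't line up with the test-time network, that mismatch is exactly what produces the loading errors described above.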
I managed to train 1 epoch and save the model; now I'm actually testing it with 800 epochs (with a dataset of 1406 color images from One Piece), and if it goes well and looks good I'll share with you how I did it. @My12123
@Keiser04 Okay, thanks
Do you know what this means?
hmmmmmmmmmmmmmmmmmmmmmmmm
@Keiser04 Did you train it on manga pages? If yes, then you need to colorize manga pages, not art. You can share the model you got, then I will be able to test it more. Your image only has a contour; ControlNet will cope with that better. There should be shades of gray; at the bottom are the color photo and the black-and-white one. (images: the original, the sketch, the result)
I didn't understand the ControlNet thing. Maybe I'm taking too long because I'm specifying how the dataset has to be structured and how to use the scripts I created to speed up building the dataset. Anyway, what I am doing is converting the images to black and white with Python, since I don't know how to make them look like manga; well, some of them do look like the original panels.
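A minimal sketch of that kind of black-and-white conversion, assuming Pillow is installed; the folder names are placeholders, not from this thread:

```python
# Minimal sketch: batch-convert color images to grayscale with Pillow.
# "color" and "gray" are placeholder folder names.
from pathlib import Path
from PIL import Image

src = Path("color")
dst = Path("gray")
dst.mkdir(exist_ok=True)

for path in src.glob("*.png"):
    Image.open(path).convert("L").save(dst / path.name)  # "L" = 8-bit grayscale
```

A plain grayscale conversion keeps every intermediate shade of gray, which is part of why the results don't look exactly like scanned manga line art.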
Do you know how to use Kaggle? If so, I'll pass you the notebook, but I haven't made a tutorial or anything else yet.
My biggest problem is that I don't know what you mean by dfm images.
I didn't understand the ControlNet thing
I mean https://github.com/lllyasviel/ControlNet and https://github.com/Mikubill/sd-webui-controlnet
Only the contour that is filled with gray is colored.
Well, wish me luck and we'll see how it turns out: 18k images in total counting the black-and-white ones, and 100 epochs, which in theory should take 10 hours.
Only the contour that is filled with gray is colored.
Does ControlNet do that? Or does it just convert it to grayscale?
@Keiser04 I don't know for sure; I know that only the contour that is filled with gray will be colored, with the exception of faces.
Results in ControlNet
The training mode is useless
I can only assume that the model he gave us was trained with another type of AI @My12123
Do you know Python? I think the problem is that v1 doesn't have colorizartion.py while v2 has one. The thing is that the models were modified, I think, making the state dict static or something like that; if only we could fix it. @My12123
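One way to see what actually changed between the two model versions, without touching the architecture code, is to diff the saved state dicts. This is only a sketch assuming PyTorch; the two file names are placeholders for the v1 and v2 generator checkpoints.

```python
# Minimal sketch: compare two saved state dicts to see how the v1 and v2
# models differ. The file names are placeholders, not from this thread.
# (If a checkpoint wraps the weights, e.g. {"model": ...}, unwrap it first.)
import torch

sd_v1 = torch.load("generator_v1.pth", map_location="cpu")
sd_v2 = torch.load("generator_v2.pth", map_location="cpu")

keys_v1, keys_v2 = set(sd_v1), set(sd_v2)
print("only in v1:", sorted(keys_v1 - keys_v2)[:10])
print("only in v2:", sorted(keys_v2 - keys_v1)[:10])

# Keys present in both but with different tensor shapes point at a real
# architecture change rather than a simple renaming of parameters.
for k in sorted(keys_v1 & keys_v2):
    if sd_v1[k].shape != sd_v2[k].shape:
        print("shape mismatch:", k, tuple(sd_v1[k].shape), tuple(sd_v2[k].shape))
```

If the keys turn out to match but loading still fails, calling load_state_dict(state_dict, strict=False) on the rebuilt network will report the missing and unexpected keys instead of raising, which narrows down where the two versions diverge.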
@My12123 this is my model, 30 images... 10 h of training...
https://github.com/zyddnys/manga-image-translator/issues/378