Closed Eric07110904 closed 2 years ago
Thank you for the report.

The trained `reference_adain` model usually fails to colorize if the poses of the input and the reference are different. Judging from the validation results, the training has gone well, so I think that your trained model would succeed at colorization if the poses of the input and the reference are similar.

If you would like to colorize difficult cases, I recommend using `reference_scft` instead of `reference_adain`, because `reference_scft` is able to colorize cases that `reference_adain` fails on, as in the results. I am sorry for the inconvenience.
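For context on why pose mismatch hurts, `reference_adain` injects the reference through adaptive instance normalization, which transfers only per-channel feature statistics and discards spatial correspondence. A minimal NumPy sketch of AdaIN (an illustration, not the repository's actual implementation):

```python
import numpy as np

def adain(content: np.ndarray, style: np.ndarray, eps: float = 1e-5) -> np.ndarray:
    """Adaptive Instance Normalization over feature maps shaped (C, H, W):
    rescale content features to match the per-channel mean/std of the
    style (reference) features. Only global statistics are transferred,
    so spatial layout (pose) of the reference is ignored."""
    c_mean = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True) + eps
    s_mean = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True) + eps
    # Normalize content, then re-apply the reference's statistics
    return s_std * (content - c_mean) / c_std + s_mean
```

SCFT, by contrast, computes attention between input and reference features, which is why it copes better when the poses differ.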
@SerialLain3170
Thank you very much for the response.
Actually, I already trained a `reference_scft` model using the same dataset (14000 pairs), but the result looks strange. I only changed `param.yaml`, and the following picture is the validation result.
```yaml
train:
  epoch: 1000
  snapshot_interval: 10000
  batchsize: 3
  validsize: 3
dataset:
  extension: ".png"
  train_size: 256
  valid_size: 384
  color_space: "rgb"
  line_space: "rgb"
  line_method: ["xdog", "pencil", "blend"]
  src_perturbation: 0.5
  tgt_perturbation: 0.2
```
The validation results all seem to have the same color style; I don't know whether this is normal. Do you have any opinion about this? Thank you!
I think that the problem may come from the scale in `random_crop`. If you set `dataset.train_size` to 256, I recommend setting the scale range to 288-384 instead of 384-512. Since the scale is hard-coded, I am sorry for the inconvenience.
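For reference, a minimal sketch of what such a resize-then-crop augmentation does, with the scale range adjusted to 288-384 for `train_size: 256`. This is a hypothetical illustration; the repository's actual `_random_crop` may differ (e.g. it likely resizes with OpenCV):

```python
import numpy as np

def random_crop(line: np.ndarray, color: np.ndarray, size: int):
    """Pick a random scale, resize both images to scale x scale, then take
    the same random size x size crop from each (hypothetical sketch)."""
    # Suggested range when dataset.train_size is 256 (instead of 384-512)
    scale = np.random.randint(288, 385)
    # Nearest-neighbor resize via index sampling (real code likely uses cv2.resize)
    ys = np.linspace(0, line.shape[0] - 1, scale).astype(int)
    xs = np.linspace(0, line.shape[1] - 1, scale).astype(int)
    line_r, color_r = line[ys][:, xs], color[ys][:, xs]
    # Same crop window for line art and color so the pair stays aligned
    top = np.random.randint(0, scale - size + 1)
    left = np.random.randint(0, scale - size + 1)
    return (line_r[top:top + size, left:left + size],
            color_r[top:top + size, left:left + size])
```

The key point is that the lower bound of the scale range must stay above `train_size`, but not so far above it that the crop only ever sees a small region of the image.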
Thank you for your reply.
I am now using the default configuration in `param.yaml`, but the problem still exists.
```yaml
train:
  epoch: 1000
  snapshot_interval: 2000
  batchsize: 2
  validsize: 3
dataset:
  extension: ".png"
  train_size: 384
  valid_size: 512
  color_space: "rgb"
  line_space: "rgb"
  line_method: ["xdog", "pencil", "blend"]
  src_perturbation: 0.5
  tgt_perturbation: 0.2
```
```python
@staticmethod
def _random_crop(line: np.ndarray,
                 color: np.ndarray,
                 size: int) -> tuple[np.ndarray, np.ndarray]:
    scale = np.random.randint(396, 512)
```
My dataset: https://www.kaggle.com/datasets/ktaebum/anime-sketch-colorization-pair
Image: 3 x 512 x 512
Sketch: 3 x 512 x 512
Validation result (14000 iterations)
Is the validation result normal after only a few epochs?
It may depend on the degrees of perturbation or the batch size. How about setting `src_perturbation` to 0.2 and `tgt_perturbation` to 0.05? This change corresponds to alleviating the perturbations applied to reference images.
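In `param.yaml` terms, the suggested change would look like this (assuming the keys shown earlier in this thread):

```yaml
dataset:
  src_perturbation: 0.2   # was 0.5
  tgt_perturbation: 0.05  # was 0.2
```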
My validation result is the figure below at 40000 iterations. If the training goes well, the result looks like this.
Thank you!! The validation result looks good after tuning the degrees of perturbation and the batch size.
@SerialLain3170 Sorry, my English is poor. I trained a `reference_adain` model (15 epochs) and tested it using the Adeliene GUI. However, at the testing stage, the generated results look very strange. What should I do to get a better result? (Should I keep training to 100 epochs?)