Open woxue opened 4 years ago
Hi,
Unfortunately, I cannot share the trained model at this time. Instead, I can give you a guide to training the model.
First, you need to prepare the dataset. You can apply pilgram filters to the COCO dataset, then pair each filtered image with its original COCO counterpart.
If you train the model without any uncertainty, it will converge after 6 to 9 epochs with a training loss of 0.05~0.08. If you use this model in the full filter-transfer pipeline, validation PSNR will be around 24~25 dB. At that point you can apply uncertainties and dataset augmentation by imposing arbitrary color/brightness/contrast modifications; then you can reach 25~26 dB or even more. Note that the training loss does not need to go lower, and the model cannot exactly reproduce the original image. The purpose of the DL model is to output a mean-colored image.
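One way to implement the color/brightness/contrast augmentation mentioned above is with Pillow's `ImageEnhance`; this is a sketch, and the `jitter` helper name and the +/-20% perturbation range are assumptions, not values from this thread:

```python
# Randomly perturb color, brightness, and contrast of an input image so the
# model trains under uncertainty instead of learning an exact mapping.
import random

from PIL import Image, ImageEnhance


def jitter(im: Image.Image, max_delta: float = 0.2) -> Image.Image:
    """Apply independent random color/brightness/contrast factors in
    [1 - max_delta, 1 + max_delta]."""
    for enhancer in (ImageEnhance.Color,
                     ImageEnhance.Brightness,
                     ImageEnhance.Contrast):
        factor = 1.0 + random.uniform(-max_delta, max_delta)
        im = enhancer(im).enhance(factor)
    return im
```

Applying this jitter to training inputs is one plausible reading of "posing arbitrary color/brightness/contrast modification" for the augmented setting.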
On Wed, Oct 14, 2020, 14:51, woxue notifications@github.com wrote:
Thanks for your great work and code sharing. Would you share the pretrained model to test on directly?
@jonghwa-yim Hi, thanks for sharing. When making stylized train and test sets with MS COCO and pilgram, should I apply all filters to each image, or just randomly pick one filter?
Hi, I applied all filters to each image, resulting in 26 images per original image.
@jonghwa-yim Thanks. So the format of the trainSet text file can be like this?
000000.png 000000-filter-0.png
000000.png 000000-filter-1.png
...
000000.png 000000-filter-25.png
000001.png 000001-filter-0.png
...
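The pair-list format above can be generated with a short script; this is a sketch assuming the `<stem>-filter-<i>.png` naming convention and 26 filters per image, with the function name and paths being illustrative:

```python
# Write a trainSet text file of "<original> <filtered>" pairs, one per line,
# covering every filter index for every original image.
import os


def write_pair_list(originals_dir: str, out_path: str,
                    n_filters: int = 26) -> None:
    """List each original image alongside each of its filtered variants."""
    with open(out_path, "w") as f:
        for fname in sorted(os.listdir(originals_dir)):
            stem, _ = os.path.splitext(fname)
            for i in range(n_filters):
                f.write(f"{fname} {stem}-filter-{i}.png\n")
```

With 26 filters, each original contributes 26 lines, so the file grows to 26x the number of original images.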