codaibk closed this issue 6 years ago.
As shown in the FAQ, we use the default training/validation split from Places2 and CelebA. For CelebA, the training and validation sets have no identity overlap.
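For anyone who wants to verify the no-identity-overlap property on their own split, a minimal sketch of the check is below. The identity map mimics the format of CelebA's `identity_CelebA.txt` (filename → integer identity), but the filenames, identities, and split lists here are made-up illustrative data, not the actual annotations.

```python
def overlapping_identities(train_files, val_files, identity_map):
    """Return the set of identities that appear in both splits."""
    train_ids = {identity_map[f] for f in train_files}
    val_ids = {identity_map[f] for f in val_files}
    return train_ids & val_ids

# Illustrative stand-in for identity_CelebA.txt: filename -> person id.
identity_map = {
    "000001.jpg": 2880,
    "000002.jpg": 2937,
    "000003.jpg": 8692,
    "000004.jpg": 2880,  # same person as 000001.jpg
}

train = ["000001.jpg", "000002.jpg"]
val_ok = ["000003.jpg"]                  # disjoint identities: valid split
val_bad = ["000003.jpg", "000004.jpg"]   # shares identity 2880 with train

print(overlapping_identities(train, val_ok, identity_map))   # set()
print(overlapping_identities(train, val_bad, identity_map))  # {2880}
```

An empty result means no person appears in both splits, which is what the FAQ claims for the released CelebA split.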
For your examples above, how did you crop the image? Why is the image not aligned?
You could try our demo on CelebA-HQ, where there are 2,000 validation images you can test with any mask.
Hi, sorry for cropping the image like that. That cropped image was only for posting here to ask the question. This is the actual aligned image I used in testing:
Output:
@codaibk The HQ demo model and the train/val split are also released, so you can try any image you like; there is no need to upload anything, since you can test it with our released model directly. Also, in this case the inpainting model tries to hallucinate eyeglasses, which lie in a rather rare corner of the CelebA distribution. Hope this clears up your confusion.
@JiahuiYu My original image has no eyeglasses, so that is not an issue with the CelebA dataset. I don't need to try the online HQ demo because it offers no way to test a new image that is not in your dataset, so it doesn't answer my question. I am doubtful about your model: you said it can generate the image from the masked input, but in my test case it did not give a good result. This is my original image, which gives a bad result (FYI, it is not only this image; the other images also return bad results).
Hi, is the test image you use in this project part of the training set? You take images from the CelebA dataset that the model was trained on to test the model, which is why the results look good; but if I test your model with a new image, the result does not look good. Example: This is the result with your test image, which is inside the training dataset: Masked input: Result: But if I use an image that is not in your training dataset, the result does not look good: Masked input: Result: Ground truth:
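One common cause of poor results on images from outside the dataset is a preprocessing mismatch: the model expects inputs cropped and resized the same way as the training data, with the mask marking the region to fill. The sketch below shows that preprocessing under assumed conventions (256x256 input, white rectangular mask, NumPy arrays); it is not the repo's actual pipeline, just an illustration of the steps.

```python
import numpy as np

def prepare_input(image, mask_box, size=256):
    """Center-crop to a square, resize to size x size, and white-out
    the masked box (the region the model should inpaint)."""
    h, w, _ = image.shape
    s = min(h, w)
    top, left = (h - s) // 2, (w - s) // 2
    crop = image[top:top + s, left:left + s]
    # Nearest-neighbor resize via index sampling (keeps the sketch
    # dependency-free; a real pipeline would use a proper resampler).
    idx = np.arange(size) * s // size
    resized = crop[idx][:, idx]
    y0, x0, y1, x1 = mask_box
    masked = resized.copy()
    masked[y0:y1, x0:x1] = 255  # white mask region, as in many inpainting demos
    mask = np.zeros((size, size), dtype=np.uint8)
    mask[y0:y1, x0:x1] = 255
    return masked, mask

img = np.random.randint(0, 255, (300, 400, 3), dtype=np.uint8)
masked, mask = prepare_input(img, mask_box=(96, 96, 160, 160))
print(masked.shape, mask.shape)  # (256, 256, 3) (256, 256)
```

If a new image is fed in at a different resolution, without face alignment, or with a mask convention the model never saw during training, degraded output is expected regardless of how well the model generalizes.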