Closed: Return-vo1d closed this issue 2 months ago
We use the code from DeepFill. You can use that GitHub repository to build the basic environment. The attacks in the paper can be found in RobustBench and torchattacks.
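For anyone setting this up, here is a minimal sketch of how attacks can be generated with RobustBench and torchattacks; the model name (`Standard`) and the attack hyperparameters below are placeholders, not the exact settings from the paper.

```python
import torchattacks
from robustbench.data import load_cifar10
from robustbench.utils import load_model

# Load a RobustBench classifier and a small CIFAR-10 test batch.
# 'Standard' is a placeholder model name; substitute the model used in your experiments.
model = load_model(model_name='Standard', dataset='cifar10', threat_model='Linf')
x_test, y_test = load_cifar10(n_examples=16)

# Build an L_inf PGD attack with torchattacks.
# eps/alpha/steps here are common defaults, not necessarily the paper's settings.
attack = torchattacks.PGD(model, eps=8/255, alpha=2/255, steps=10)
adv_images = attack(x_test, y_test)  # adversarial examples, same shape as x_test
```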
Thank you very much for your previous response. I have one more question. I noticed that you're using a pre-trained model from DeepFill, but since DeepFill is based on TensorFlow and your training code is in PyTorch, do I need to convert the DeepFill pre-trained model to a format that PyTorch can recognize? Alternatively, could you kindly provide the pre-trained model you're using? I would greatly appreciate it!
Sorry for the oversight in verifying this information. We have now updated it: we use the code from deepfillv2-pytorch, a PyTorch reimplementation of DeepFillv2 based on the original TensorFlow implementation.
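In case it helps, here is a minimal sketch of loading a pretrained generator, assuming the deepfillv2-pytorch layout (a `Generator` class in `model/networks.py` and a checkpoint that stores the generator weights under a `'G'` key, e.g. the converted Places2 weights distributed with that repo); the module path, constructor arguments, and checkpoint file name may differ in your copy of the code.

```python
import torch
from model.networks import Generator  # assumed module path in deepfillv2-pytorch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Load a checkpoint that already ships in PyTorch format (no TF conversion needed);
# the file name and the 'G' key are assumptions based on that repo's test script.
state = torch.load('pretrained/states_tf_places2.pth', map_location=device)
generator = Generator(cnum_in=5, cnum=48, return_flow=False).to(device)
generator.load_state_dict(state['G'])
generator.eval()
```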
Hello, I used the DeepFill pre-trained model you provided and the config file that came with the project to train for 250 epochs on an RTX 3090 GPU. Although g_loss has generally been decreasing, the generated images throughout training are very poor, and the masked areas are mostly filled with repetitive patterns. Have you encountered a similar problem before?
Thanks for your continued interest in our work. To answer your question: yes, the generated images (i.e., the purified examples) may look visually poor.
Could you please provide the environment information for the model?