yaseryacoob opened this issue 1 year ago
You may be using the wrong testing data. Our model's input is the DIRE image, not the original image.
Thanks, I was indeed using the real images and not the DIRE images. How do I run inference on real images? I would have expected the reconstruction of the real image and the computation of the DIRE image to be part of general inference. Thanks for the clarifications.
It is hard to tell from the code, but testing it on a set of images is a two-step process: first compute the DIRE images, then run the detector on them. I described the steps in this issue.
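For context, here is a minimal sketch of what the two steps amount to, based on the DIRE formulation (DIRE = |x - x'|, with x' the diffusion-model reconstruction of x). The `reconstruct` and `detector` arguments are hypothetical placeholders, not the repo's actual API; in the repo, step 1 corresponds to guided-diffusion/compute_dire.py and step 2 to the released classifier.

```python
# Minimal sketch of the two-step DIRE pipeline. "reconstruct" and
# "detector" are hypothetical placeholders, not the repo's actual API.
import numpy as np

def compute_dire(x: np.ndarray, reconstruct) -> np.ndarray:
    """Step 1: DIRE = |x - x'|, where x' is the diffusion-model
    reconstruction of x (DDIM inversion, then reconstruction)."""
    x_rec = reconstruct(x)  # placeholder for the guided-diffusion step
    return np.abs(x - x_rec)

def predict(dire: np.ndarray, detector) -> float:
    """Step 2: the detector sees the DIRE map, never the raw image."""
    return detector(dire)

# Dummy stand-ins so the sketch runs end to end:
x = np.random.rand(256, 256, 3).astype(np.float32)       # "loaded image"
dire = compute_dire(x, reconstruct=lambda im: im * 0.99)  # toy reconstruction
prob = predict(dire, detector=lambda d: float(d.mean() > 0.1))
print(f"Prob of being synthetic: {prob:.4f}")
```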
Thanks for the guidance; I looked at #7 and the repo again. In https://github.com/ZhendongWang6/DIRE/tree/main/guided-diffusion and https://github.com/ZhendongWang6/DIRE/blob/main/guided-diffusion/model-card.md there is no face model. Did you use an ImageNet model, and if so, which one? Otherwise, is there a face-specific model you have? Can you also give a sense of the first step's computational requirements for a single image (is a single GPU enough, and how long does the computation take)?
Thanks for sharing your work; it is much appreciated.
> You may be using the wrong testing data. Our model's input is the DIRE image, not the original image.
I used the outputs in dire_test for testing, but the predictions are still all 1.0000. The testing data is DiffusionForensics/images/test/imagenet/real.tar.gz.
If I use the DIRE results provided in DiffusionForensics/dire/test/imagenet/imagenet/real.tar.gz directly, the predictions are correct.
It seems that compute_dire.py did not produce the correct DIRE results.
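If compute_dire.py is suspected, one way to narrow it down is to compare a locally computed DIRE image against the corresponding reference file shipped in DiffusionForensics. A minimal sketch, assuming matching file names (the two paths below are placeholders, not real file names):

```python
# Compare a locally computed DIRE image with the reference one from
# DiffusionForensics. The two paths are placeholders, not real file names.
import numpy as np
from PIL import Image

mine = np.asarray(Image.open("my_dire/sample.png"), dtype=np.float32)
ref = np.asarray(Image.open("reference_dire/sample.png"), dtype=np.float32)

# A large mean absolute difference suggests compute_dire.py was run with
# the wrong checkpoint or DDIM settings, rather than the detector being at fault.
print("mean |mine - ref| =", np.abs(mine - ref).mean())
```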
I am running simple tests with your data, something like:
Every single run is returning 'Prob of being synthetic: 1.0000'
Can you explain?
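Since every run reports 1.0000, one quick sanity check is whether the detector is actually seeing DIRE maps. DIRE is an absolute reconstruction error, so for real images a valid DIRE input is concentrated near zero and looks mostly dark, while a raw photograph does not. A minimal heuristic sketch, assuming a PNG input; the 0.3 threshold is an arbitrary illustration, not a value from the repo:

```python
# Heuristic input check: DIRE maps are absolute reconstruction errors, so a
# valid DIRE input for a real image is mostly near zero (dark). A bright
# input usually means a raw image was passed to the detector by mistake.
import numpy as np
from PIL import Image

x = np.asarray(Image.open("input.png").convert("L"), dtype=np.float32) / 255.0
print("mean intensity:", x.mean())
if x.mean() > 0.3:  # arbitrary illustrative threshold, not from the repo
    print("Warning: this looks like a raw image, not a DIRE map.")
```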