Open Codefmeister opened 3 years ago
1, we convert the input image from BGR to grayscale, so the input channel count is 1.
And 2, following Constrained convolutional neural networks: A new approach towards general purpose image manipulation detection, the center element of the constrained conv (aka BayarConv) kernel is fixed at -1, so only kernel_size**2 - 1 = 24 parameters per kernel are trainable.
That is how we get [1, 3, 24].
Implementation details of these modules will be included in the code to be released.
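To make the shape bookkeeping concrete, here is a minimal numpy sketch of how a [3, 24] free-parameter array could be expanded into three 5x5 BayarConv kernels with the center fixed at -1. The function name `build_bayar_kernels` and the sum-to-one normalization of the off-center weights (from Bayar & Stamm's constraint, which makes each kernel sum to zero) are my assumptions, not the repo's actual implementation:

```python
import numpy as np

def build_bayar_kernels(free_params, ksize=5):
    """Expand [n_kernels, ksize**2 - 1] free parameters into
    [n_kernels, ksize, ksize] kernels whose center element is fixed at -1.

    Sketch only: following Bayar & Stamm's constraint, the off-center
    weights are normalized to sum to 1, so each kernel sums to 0
    (a high-pass, prediction-error-style filter). A real training loop
    would guard against the sum being near zero.
    """
    n, m = free_params.shape
    assert m == ksize * ksize - 1
    center = (ksize * ksize) // 2  # flat index of the kernel center
    norm = free_params / free_params.sum(axis=1, keepdims=True)
    # insert the fixed -1 at the center slot of each flattened kernel
    kernels = np.insert(norm, center, -1.0, axis=1)
    return kernels.reshape(n, ksize, ksize)

# Example matching the checkpoint shape [1, 3, 24]: 3 kernels, 24 free params each
params = np.random.rand(3, 24) + 0.1  # kept positive so the row sums are non-zero
k = build_bayar_kernels(params)
print(k.shape)      # (3, 5, 5)
print(k[0, 2, 2])   # -1.0 (fixed center)
```

So the stored parameter tensor only needs the 24 trainable values per kernel; the -1 center is re-inserted when the actual conv weight is built.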
After my own training, the constrained convolution produces a different picture from the one produced by your released constrained-convolution weight file. May I ask what the reason for this is?
Sorry, I didn't get it. Could you explain concretely how you got the two pictures? Are they from the same original image?
I loaded both the weights obtained from my own training and your pre-trained weights into the model, and captured the constrained-convolution outputs while debugging. The resulting images, shown below, are different. yours: mine:
I cannot figure it out either. The constrained CNN is used to extract noise-pattern inconsistencies; its visualization mostly serves to show that "there is indeed something strange compared with the context, and such clues may be further mined by the CNN".
One possible reason is that the output of the constrained CNN is not restricted to [0, 1] (or [0, 255]), and matplotlib tends to auto-adjust the color map according to the distribution and data type of the input values. So I suggest applying a sigmoid and comparing the relative differences rather than the absolute values. (BTW, the color map of the constrained CNN's visualization in CONSTRAINED R-CNN: A GENERAL IMAGE MANIPULATION DETECTION MODEL is also different from both of ours.)
It's just my conjecture; hope it helps.
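The sigmoid suggestion above can be sketched as follows. This is a minimal illustration with synthetic feature maps (the arrays `a` and `b` are made-up stand-ins for the two checkpoints' outputs, not real model activations); squashing into (0, 1) lets both maps be plotted on the same fixed scale instead of letting matplotlib pick a different vmin/vmax per image:

```python
import numpy as np

def sigmoid_normalize(feat):
    """Squash an unbounded feature map into (0, 1) so two runs can be
    compared on a common fixed color scale, instead of relying on
    matplotlib's per-image auto-scaling."""
    return 1.0 / (1.0 + np.exp(-feat))

# hypothetical constrained-conv outputs from two different checkpoints
a = np.random.randn(256, 256) * 5.0
b = a + np.random.randn(256, 256) * 0.1

va, vb = sigmoid_normalize(a), sigmoid_normalize(b)
# e.g. plt.imshow(va, cmap='gray', vmin=0, vmax=1) then uses the same
# scale for both maps, so only genuine relative differences show up
print(va.min(), va.max())  # both strictly inside (0, 1)
```

With a shared `vmin`/`vmax`, any remaining visual difference reflects the activations themselves rather than color-map rescaling.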
Thank you for your answer; I will try it. Actually, I was wondering whether the data-augmentation method could make the constrained-convolution training produce relatively large differences in the output image. If possible, could you release the training data-augmentation code so that I can have a try? Finally, I really appreciate your patience in answering these questions.
Hi! Sorry to disturb you again. I downloaded the pretrained model from your link and found that the constrained conv layer has only one parameter tensor, of size [1, 3, 24]. But I thought it would be [3, 1, 5, 5] according to your figure. Could you please point out where I went wrong? Thanks sincerely.
Does it mean the center element of the conv kernel is always -1, and you implemented the trainable part as an array that is then filled into the kernel matrix?