Elon-VVV opened this issue 1 year ago
In other words, could you please provide a detailed list of the 512 types of editing history?
Hello! We plan to make the training code available, but we don't have a release date for it yet.
As for the dataset, we are still checking the licenses of the images used for Noiseprint++ training before we can distribute it, especially since the DPreview website is shutting down. As soon as the dataset is ready for distribution, we will provide it.
In the meantime, you may find it useful to explore the code of the previous version of Noiseprint, since it follows a similar methodology: https://grip-unina.github.io/noiseprint/
The 512 editing histories are a combination of the following:

```python
list_scale = [8/8, 2/8, 3/8, 4/8, 5/8, 6/8, 7/8, 9/8]

list_adjust = [
    ( 0.0, 1.0, 1.0),  # identity
    ( 0.0, 1.0, 0.8),  # gamma
    ( 0.3, 1.0, 1.0),  # brightness
    ( 0.0, 0.7, 1.0),  # contrast
    ( 0.0, 1.4, 1.0),  # contrast
    (-0.3, 1.0, 1.0),  # brightness
    ( 0.0, 1.0, 1.2),  # gamma
    ( 0.0, 0.7, 1.2),  # contrast & gamma
]

list_jpeg = [0, 90, 85, 80, 75, 70, 65, 60]
```
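The three lists give 8 × 8 × 8 = 512 combinations. A minimal sketch of how they could be enumerated (the explicit enumeration is my illustration, not the authors' code):

```python
from itertools import product

list_scale = [8/8, 2/8, 3/8, 4/8, 5/8, 6/8, 7/8, 9/8]
list_adjust = [
    ( 0.0, 1.0, 1.0), ( 0.0, 1.0, 0.8), ( 0.3, 1.0, 1.0), ( 0.0, 0.7, 1.0),
    ( 0.0, 1.4, 1.0), (-0.3, 1.0, 1.0), ( 0.0, 1.0, 1.2), ( 0.0, 0.7, 1.2),
]
list_jpeg = [0, 90, 85, 80, 75, 70, 65, 60]

# every editing history is one (scale, adjust_factors, jpeg_quality) triple
histories = list(product(list_scale, list_adjust, list_jpeg))
print(len(histories))  # 512
```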
```python
from io import BytesIO

import cv2
import numpy as np

def cv2_adjust(img, factors):
    # apply brightness (beta), contrast (alpha) and gamma via a 256-entry look-up table
    beta, alpha, gamma = factors
    lut = np.arange(0, 256) / 255.0
    lut = (alpha * (lut - 0.5) + beta + 0.5) ** gamma
    lut = np.clip(255 * lut, 0, 255).astype(np.uint8)
    return cv2.LUT(np.array(img), lut)

def cv2_scale(img, scale):
    # rescale by the given factor with bicubic interpolation
    return cv2.resize(img, None, fx=scale, fy=scale, interpolation=cv2.INTER_CUBIC)

def cv2_jpeg(img, quality):
    # JPEG-compress in memory at the given quality, then decode back
    encode_param = [int(cv2.IMWRITE_JPEG_QUALITY), quality]
    is_success, buffer = cv2.imencode(".jpg", img, encode_param)
    io_buf = BytesIO(buffer)
    return cv2.imdecode(np.frombuffer(io_buf.getbuffer(), np.uint8), -1)
```
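As a quick sanity check of the adjustment transfer function, the table can be rebuilt in pure NumPy (no cv2 needed); the ±1 tolerance below accounts for the final truncation to uint8. The helper name `adjust_lut` is mine, not part of the original code:

```python
import numpy as np

def adjust_lut(factors):
    # same transfer function as cv2_adjust, returned as a 256-entry table
    beta, alpha, gamma = factors
    lut = np.arange(0, 256) / 255.0
    lut = (alpha * (lut - 0.5) + beta + 0.5) ** gamma
    return np.clip(255 * lut, 0, 255).astype(np.uint8)

identity = adjust_lut((0.0, 1.0, 1.0))  # leaves pixel values (almost) unchanged
gamma = adjust_lut((0.0, 1.0, 0.8))     # gamma < 1 brightens mid-tones, keeps endpoints
```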
Thank you very much for your detailed reply. If I understand correctly, the sequence of the three operations is: first adjustment, then rescaling, and lastly JPEG compression. Is that correct?
The order is rescaling, then adjustment, then JPEG.
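Putting that order together, one editing history could be applied as below. This is a minimal sketch with pure-NumPy stand-ins (nearest-neighbour resizing instead of cv2's bicubic, and the JPEG step omitted); the helper names and the reading of quality 0 as "no JPEG pass" are my assumptions:

```python
import numpy as np

def nn_resize(img, scale):
    # nearest-neighbour stand-in for cv2.resize(..., interpolation=cv2.INTER_CUBIC)
    h, w = img.shape[:2]
    rows = (np.arange(int(round(h * scale))) / scale).astype(int).clip(0, h - 1)
    cols = (np.arange(int(round(w * scale))) / scale).astype(int).clip(0, w - 1)
    return img[rows][:, cols]

def adjust(img, factors):
    # same look-up-table adjustment as cv2_adjust, applied by indexing
    beta, alpha, gamma = factors
    lut = np.arange(0, 256) / 255.0
    lut = (alpha * (lut - 0.5) + beta + 0.5) ** gamma
    lut = np.clip(255 * lut, 0, 255).astype(np.uint8)
    return lut[img]

def apply_history(img, scale, factors, quality):
    # order as stated above: rescale -> adjust -> JPEG
    out = adjust(nn_resize(img, scale), factors)
    # quality == 0 is assumed to mean "no JPEG pass"; the JPEG step itself
    # is omitted in this stand-in (the real code uses cv2_jpeg)
    return out

img = np.random.randint(0, 256, (16, 16), dtype=np.uint8)
out = apply_history(img, 4/8, (0.0, 1.0, 1.0), 0)
```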
Thank you once again for your help. There are still a few points I find confusing:
Your answer to Q.1 helped me understand Q.2, namely that the same-patches restriction is needed for copy-move detection with Noiseprint++. Regarding Noiseprint++: I understand you found it may not perform as well on images that have undergone double JPEG compression. In light of this, I wonder whether it might be beneficial to extend the 512 editing histories with additional double-compression histories, rather than applying a single extra JPEG transform after the 512 editing histories for each image during contrastive learning?
Hello, your work on manipulation detection is very impressive, and I am highly interested in your paper. However, I am having some difficulty understanding the training steps of the proposed Noiseprint++ and have encountered several obstacles. Do you have any plans to publish the dataset and training code for Noiseprint++?