
invisible-watermark

invisible-watermark is a Python library and command line tool for creating invisible watermarks on images (a.k.a. blind image watermarks, digital image watermarks). The algorithms are blind: decoding does not require the original image.

Note that this library is still experimental and does not support GPU acceleration, so deploy it to production with care. The default method, dwtDct (a frequency-domain method), is fast enough for on-the-fly embedding; the other methods are too slow in a CPU-only environment.

Supported Algorithms

  • dwtDct: DWT + DCT transform; embeds each watermark bit into the largest non-trivial coefficient of a block's DCT coefficients (sketched below)

  • dwtDctSvd: DWT + DCT transform, then an SVD decomposition of each block; embeds each watermark bit into the block's singular values
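
A minimal sketch of the shared frequency-domain idea, for intuition only (this is not the library's implementation; the coefficient position and the scale value are illustrative): quantize one block-DCT coefficient so that the parity of its quantization level carries one watermark bit.

import numpy as np
import cv2

def embed_bit(block, bit, scale=36.0):
    # Illustrative: quantize one mid-band DCT coefficient so the parity
    # of its quantization level encodes a single watermark bit.
    coeffs = cv2.dct(block.astype(np.float32))
    sign = 1.0 if coeffs[2, 2] >= 0 else -1.0
    q = int(abs(coeffs[2, 2]) / scale)
    if q % 2 != bit:
        q += 1                        # flip parity to match the bit
    coeffs[2, 2] = sign * q * scale
    return cv2.idct(coeffs)

def extract_bit(block, scale=36.0):
    coeffs = cv2.dct(block.astype(np.float32))
    return int(round(abs(coeffs[2, 2]) / scale)) % 2

block = np.random.default_rng(0).integers(0, 256, (8, 8)).astype(np.float32)
assert extract_bit(embed_bit(block, 1)) == 1
assert extract_bit(embed_bit(block, 0)) == 0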


How to install

pip install invisible-watermark
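
The package installs under the module name imwatermark (not invisible_watermark), so a quick import confirms the setup:

import imwatermark
from imwatermark import WatermarkEncoder, WatermarkDecoder  # classes used below
print(imwatermark.__file__)  # path of the installed package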

Library API

Embed watermark

import cv2
from imwatermark import WatermarkEncoder

bgr = cv2.imread('test.png')                 # BGR array of the cover image
wm = 'test'                                  # 4-byte payload = 32 bits

encoder = WatermarkEncoder()
encoder.set_watermark('bytes', wm.encode('utf-8'))
bgr_encoded = encoder.encode(bgr, 'dwtDct')  # embed with the default method

cv2.imwrite('test_wm.png', bgr_encoded)

Decode watermark

import cv2
from imwatermark import WatermarkDecoder

bgr = cv2.imread('test_wm.png')

decoder = WatermarkDecoder('bytes', 32)   # 32 = watermark length in bits (4 bytes * 8)
watermark = decoder.decode(bgr, 'dwtDct')
print(watermark.decode('utf-8'))          # prints 'test'
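
The two snippets compose into a round-trip check; note that the decoder takes the watermark length in bits, which for a bytes watermark is 8 times the byte length:

import cv2
from imwatermark import WatermarkEncoder, WatermarkDecoder

wm = b'test'
bgr = cv2.imread('test.png')

encoder = WatermarkEncoder()
encoder.set_watermark('bytes', wm)
encoded = encoder.encode(bgr, 'dwtDct')

decoder = WatermarkDecoder('bytes', len(wm) * 8)  # 4 bytes -> 32 bits
recovered = decoder.decode(encoded, 'dwtDct')
assert recovered == wm, (recovered, wm)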

CLI Usage

embed watermark:  ./invisible-watermark -v -a encode -t bytes -m dwtDct -w 'hello' -o ./test_vectors/wm.png ./test_vectors/original.jpg

decode watermark: ./invisible-watermark -v -a decode -t bytes -m dwtDct -l 40 ./test_vectors/wm.png

positional arguments:
  input                 The path of input

optional arguments:
  -h, --help            show this help message and exit
  -a ACTION, --action ACTION
                        encode|decode (default: None)
  -t TYPE, --type TYPE  bytes|b16|bits|uuid|ipv4 (default: bits)
  -m METHOD, --method METHOD
                        dwtDct|dwtDctSvd|rivaGan (default: maxDct)
  -w WATERMARK, --watermark WATERMARK
                        embedded string (default: )
  -l LENGTH, --length LENGTH
                        watermark bits length, required for bytes|b16|bits
                        watermark (default: 0)
  -o OUTPUT, --output OUTPUT
                        The path of output (default: None)
  -v, --verbose         print info (default: False)
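
The flags compose the same way for the other methods; for example, a round trip with dwtDctSvd (file paths are illustrative):

embed watermark:  ./invisible-watermark -v -a encode -t bytes -m dwtDctSvd -w 'hello' -o ./test_vectors/wm_svd.png ./test_vectors/original.jpg

decode watermark: ./invisible-watermark -v -a decode -t bytes -m dwtDctSvd -l 40 ./test_vectors/wm_svd.png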

Test Result

For readability, all images on this page are compressed; the tests were run on the original 1920x1080 images.

The methods are not robust to resizing or to crops that change the aspect ratio, but they are robust to noise, color filters, brightness changes, and JPG compression.

rivaGan outperforms the default method against crop attacks.

Only the default method is fast enough for on-the-fly embedding.
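
You can reproduce an attack test yourself with a few lines of OpenCV; this sketch assumes the test_wm.png produced by the embedding example above and simulates the JPG-compress attack:

import cv2
from imwatermark import WatermarkDecoder

bgr = cv2.imread('test_wm.png')

# Simulate a JPG-compress attack by round-tripping through JPEG at quality 80
ok, buf = cv2.imencode('.jpg', bgr, [cv2.IMWRITE_JPEG_QUALITY, 80])
attacked = cv2.imdecode(buf, cv2.IMREAD_COLOR)

decoder = WatermarkDecoder('bytes', 32)
print(decoder.decode(attacked, 'dwtDct'))  # b'test' if the watermark survived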

Input

  • Input Image: 1920x1080 image
  • Watermark:
    • For the frequency methods, a 64-bit watermark, the string "qingquan"
    • For the rivaGan method, a 32-bit watermark, the string "qing"
  • Parameters: only the U channel is modified, to preserve image quality; scale=36

Attack Performance

Watermarked Image

[image: the watermarked 1920x1080 test image]

Attack              Freq Method   rivaGan
JPG compress        Pass          Pass
Noise               Pass          Pass
Brightness          Pass          Pass
Overlay             Pass          Pass
Mask                Pass          Pass
Crop 7x5            Fail          Pass
Resize 50%          Fail          Fail
Rotate 30 degrees   Fail          Fail

Running Speed (CPU Only)

Image       Method      Encoding      Decoding
1920x1080   dwtDct      300-350 ms    150-200 ms
1920x1080   dwtDctSvd   1.5-2 s       ~1 s
1920x1080   rivaGan     ~5 s          4-5 s
600x600     dwtDct      70 ms         60 ms
600x600     dwtDctSvd   185 ms        320 ms
600x600     rivaGan     1 s           600 ms
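
These figures are easy to reproduce on your own hardware with a minimal timing loop (a sketch; no warm-up or averaging):

import time
import cv2
from imwatermark import WatermarkEncoder, WatermarkDecoder

bgr = cv2.imread('test.png')

encoder = WatermarkEncoder()
encoder.set_watermark('bytes', b'test')

t0 = time.perf_counter()
encoded = encoder.encode(bgr, 'dwtDct')
print(f'encode: {(time.perf_counter() - t0) * 1000:.0f} ms')

decoder = WatermarkDecoder('bytes', 32)
t0 = time.perf_counter()
decoder.decode(encoded, 'dwtDct')
print(f'decode: {(time.perf_counter() - t0) * 1000:.0f} ms')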

RivaGAN Experimental

Next, we plan to deliver a 64-bit rivaGan model and test its performance in a GPU environment.

Detail: https://github.com/DAI-Lab/RivaGAN

Zhang, Kevin Alex; Xu, Lei; Cuesta-Infante, Alfredo; Veeramachaneni, Kalyan. Robust Invisible Video Watermarking with Attention. MIT EECS, September 2019.