Watermarking & Security
This page presents our research and results on digital watermarking at the IRCCyN lab. Within the IVC team, we mostly focus on the perceptual aspects of digital watermarking. For a given algorithm, our main goal is to accurately track the visibility threshold, and hence to jointly optimize the visibility and robustness requirements.
Our lab is equipped with standardized subjective test rooms. Within the IVC team, we regularly conduct subjective experiments on watermarking or selective encryption algorithms (still images or videos). Our subjective setup strictly follows the VQEG and ITU recommendations. We also test various objective quality metrics for watermarking applications; moreover, some quality metrics have been designed with digital watermarking applications in mind.
The goal of this page is to distribute various resources on image / video watermarking and subjective protocols.
This code targets Octave and will hopefully also work with Matlab.
A brief explanation:
The function signature is: MarkedImg = Octave_WavJND(Eqn_coef, level, orientation, filename, a_val, b_val)
* "Eqn_coef" is either 0.5, 1, or 2; it corresponds to the power parameter (c) in: y = x + (a·|x|^c + b)·w
* "level" and "orientation" are the wavelet sub-band positions (only horizontal and vertical sub-bands have been tested during the subjective experiment, with a 3-level wavelet decomposition)
* "filename" is the image name (without any extension)
* "a_val" & "b_val" are the strength parameters in the equation above.
  - If "a_val" & "b_val" are omitted, the code will grab the coefficients provided by the observers (in the files "a_and_b_...txt").
  - If, for the given input image, no "a_val" & "b_val" are available, the user will be asked for "a" & "b" values.
  - If "a_val" & "b_val" are both set to 0, a polynomial fitting (Levenberg-Marquardt nonlinear regression) will be performed based on the observers' optimal curves.
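To make the embedding rule concrete, here is a minimal pure-Python sketch of the additive equation y = x + (a·|x|^c + b)·w applied to a flat list of coefficients (the function name and the sample values are hypothetical; Octave_WavJND operates on a full wavelet sub-band):

```python
import math

def embed_subband(coeffs, watermark, a, b, c):
    """Apply y = x + (a*|x|^c + b) * w to each coefficient x,
    with w a +/-1 watermark sample. Illustrative sketch only."""
    return [x + (a * abs(x) ** c + b) * w
            for x, w in zip(coeffs, watermark)]

# Example: c = 0.5 (the "sqrt" equation) with hypothetical strengths
marked = embed_subband([10.0, -4.0, 0.0], [1, -1, 1], a=0.2, b=0.2, c=0.5)
```

Note that "a" scales with the coefficient magnitude while "b" adds a constant offset, so even zero-valued coefficients receive a small watermark component.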
Type "help Octave_WavJND" in Octave for further details.
A few examples:
MarkedLighthouse_3_3 = Octave_WavJND(0.5, 3, 3, 'lighthouse');
This will embed a watermark in SB [3,3] for the "lighthouse" image, using the sqrt equation ("a" and "b" will be grabbed from the file "a_and_b_sqrt.txt").
MarkedMotorbikes_1_3 = Octave_WavJND(2, 1, 3, 'motorbikes');
This will embed a watermark in SB [1,3] for the "motorbikes" image, using the ^2 equation (as "a" and "b" are not in the file "a_and_b_pow.txt", the user will be asked to provide values; give it a try with 0.2 and 0.2).
MarkedCapsOptim_2_1 = Octave_WavJND(x, 2, 1, 'caps', 0, 0);
This will embed a watermark in SB [2,1] for the "caps" image, using a least-squares polynomial fitting to model the three different equations. You'll have to set "DispPlots" to 1 in "Octave_WavJND.m" if you want the plots displayed.
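As a rough illustration of what the fitting step does: for a fixed exponent c, the model y = a·x^c + b is linear in a and b, so ordinary least squares already recovers both parameters (the script itself uses Levenberg-Marquardt nonlinear regression on the observers' curves). The function name and data below are hypothetical:

```python
def fit_a_b(xs, ys, c):
    """Least-squares fit of y = a * x**c + b for a fixed exponent c.
    Linear in (a, b), so the 2x2 normal equations solve it directly.
    Illustrative sketch; Octave_WavJND uses Levenberg-Marquardt."""
    ts = [x ** c for x in xs]
    n = len(xs)
    st, sy = sum(ts), sum(ys)
    stt = sum(t * t for t in ts)
    sty = sum(t * y for t, y in zip(ts, ys))
    a = (n * sty - st * sy) / (n * stt - st * st)
    b = (sy - a * st) / n
    return a, b

# Synthetic observer points generated with a = 0.3, b = 0.1, c = 0.5
a, b = fit_a_b([1.0, 4.0, 9.0, 16.0], [0.4, 0.7, 1.0, 1.3], c=0.5)
```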
More details (and C code) are provided on this page.
This H.264 video watermarking demo is available right here.
You can download a DSIS (Double Stimulus Impairment Scale) subjective experiment right here. The original and distorted images are successively displayed in the center of the screen for about 5 seconds each; the observer then votes using the F1 to F5 keys.
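The DSIS timeline above can be sketched as a simple presentation schedule; the five grades correspond to the ITU-R BT.500 impairment scale (the function and data structure below are hypothetical, and the downloadable code handles the actual display and key handling):

```python
DSIS_SCALE = {  # ITU-R BT.500 five-grade impairment scale
    5: "imperceptible",
    4: "perceptible, but not annoying",
    3: "slightly annoying",
    2: "annoying",
    1: "very annoying",
}

def dsis_trial(reference, distorted, display_s=5.0):
    """Return the presentation schedule for one DSIS trial:
    reference then distorted, each shown for display_s seconds,
    followed by a voting phase (F1-F5 in the actual experiment).
    Illustrative sketch only."""
    return [
        ("show", reference, display_s),
        ("show", distorted, display_s),
        ("vote", None, None),
    ]
```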
The source code for a "Pair Comparison" experiment is there. This experiment displays two images side by side on a monitor: the reference image location is known (left-hand side of the screen), and the distorted image is displayed on the right-hand side. The observers are asked to evaluate the quality of the distorted image using the F1 to F5 keys on the keyboard (please read the README.txt file included in the zip archive for more details).
Some source code for a "2AFC" experiment is there. 2AFC stands for "Two Alternative Forced Choice": an original image is displayed on the upper part of the screen, and two images are displayed in the lower half. Of these two images, one is the original and one is a distorted version. The observer is asked to select the image they believe is the distorted one. The selection is done with the right/left arrow keys (pointing towards the distorted image).
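In a 2AFC task, an observer who cannot see the watermark is at the 50% chance level, so the watermark is deemed visible only when the proportion of correct identifications is reliably above chance. A minimal sketch (hypothetical helper names; in practice one fits a psychometric function rather than a single significance check):

```python
def proportion_correct(responses):
    """Fraction of trials where the observer picked the distorted image.
    responses: list of booleans (True = correct identification)."""
    return sum(responses) / len(responses)

def above_chance(p, n, z=1.96):
    """Rough check that a 2AFC proportion p over n trials exceeds the
    50% chance level, using a normal approximation to the binomial.
    Under chance, p has standard error sqrt(0.5 * 0.5 / n)."""
    se = (0.25 / n) ** 0.5
    return p > 0.5 + z * se
```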
DWT_vs_DTCWT Subjective comparison between DWT and DTCWT
120 distorted images were generated from 12 original grayscale images and 5 embedding strengths, either in the wavelet domain (DWT) or using the Dual-Tree Complex Wavelet Transform (DT-CWT). The DSIS method was used with 14 observers to obtain subjective scores for the images. Read more.
H264_Watermarking Structure preserving H.264 watermarking
The goal of this database is to evaluate the quality of an H.264/CAVLC video watermarking method. One of the main objectives was to determine whether the video suffered any quality loss when watermarks were embedded (compared to the coding-only scenario). Read more.
Perceptually_Optimized_Watermarking Track the visibility threshold of an image watermarking method.
9 original color images were used. 243 distorted images were generated: 3 different embedding algorithms and 9 embedding strengths for each image. The Two Alternative Forced Choice method was used with 37 observers to obtain subjective scores for the images. Read more.
Selective_Encryption Subjective evaluation (Pair Comparison) of 5 different encryption techniques
8 original color images were used; 200 distorted images were generated from 5 different encryption techniques with 5 parameters each. Subjective evaluations were made using a Pair Comparison method with 21 observers. Read more.
Watermarking_Quality_Benchmark 10 watermarking algorithms with their subjective evaluations (DSIS)
5 original grayscale images were used; 100 distorted images were generated from 10 watermarking algorithms with 2 embedding strengths. Subjective evaluations were made using the DSIS method with 16 observers. Read more.