A Novel Blind Digital Watermarking Based on SVD and Extreme Learning Machine

Modification and illegal reproduction of media is a big problem nowadays because of the free availability of digital content. Protecting and securing digital data is a challenge. An Integer Wavelet Transform (IWT) domain based robust watermarking scheme with Singular Value Decomposition (SVD) and Extreme Learning Machine (ELM) has been proposed and tested on different images. In the proposed scheme, a watermark or logo is embedded in the IWT domain as ownership information using SVD, and an ELM is trained to learn the relationship between the original coefficients and the watermarked ones. This trained ELM is used in the extraction process to extract the embedded logo from the image. Experimental results show that the proposed watermarking scheme is robust against various image attacks such as blurring, noise, cropping, rotation and sharpening. The performance of the proposed watermarking scheme is measured with Peak Signal-to-Noise Ratio (PSNR) and Bit Error Rate (BER).


INTRODUCTION
With the invention and expansion of the internet, data in digital form is distributed and copied easily worldwide. But along with distribution, the protection and security of data are equally important. Watermarking is an emerging technique for the security of data. Watermarking is the process that embeds data, called a watermark, tag or label, into a multimedia object such that the watermark can be detected or extracted later to prove ownership 1. Its applications include broadcast monitoring, data authentication, protection of ownership, etc. 1 Over the past years, many singular value decomposition (SVD) based watermarking schemes have been proposed 2,3,4, in which the three factor matrices are modified slightly to embed the watermark. Later, these SVD-based watermarking algorithms were extended to embed the watermark in wavelet domains to provide better robustness 5. We propose a method combining SVD and Extreme Learning Machine (ELM) in the Integer Wavelet Transform (IWT) domain. ELM is an algorithm for single-hidden-layer feedforward neural networks in which parameters such as the input weights and biases are randomly selected. The training time of ELM is very short, since the weights and biases are not adjusted by the gradient-descent method 6. Gradient-descent methods suffer from problems such as a slow learning rate and local minima. The IWT domain reduces signal loss during the inverse transform.
The rest of the paper is organized as follows. Section 2 gives background on IWT, SVD and ELM. Section 3 describes the proposed watermarking scheme: watermark embedding, ELM training and watermark extraction. Experimental results are discussed in Section 4, followed by conclusions in Section 5.

Literature Survey of IWT, SVD and ELM

Integer Wavelet Transform (IWT)
To increase robustness, watermarks should be embedded in the wavelet domain instead of the spatial domain. The image is divided into low- and high-resolution bands (LL, HL, LH, HH). In the discrete wavelet transform, data is hidden in floating-point coefficients, so during the inverse transform any truncation of the floating-point values leads to a loss of information. IWT transforms an integer data set into another integer data set 7. Hence no information is lost during the forward and inverse transforms, which yields a very close copy of the original image 8. Lifting schemes are used to perform IWT. The IWT process is divided into three steps 9.

1. Split: partition the data set b_j into even and odd samples: Split(b_j) = {even(b_j), odd(b_j)} = {l_j, g_j}.

2. Predict: predict the odd elements g_j from the even elements l_j; the prediction error forms the detail (high-frequency) signal.

3. Update: update the even set l_j with the detail signal to form the approximation (low-frequency) signal.
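The three lifting steps can be sketched for the integer Haar wavelet (the S-transform). This is a minimal one-dimensional illustration with integer arithmetic only, not necessarily the exact filter used in the paper:

```python
import numpy as np

def iwt_haar_1d(b):
    """One lifting step of the integer Haar (S) transform:
    split -> predict -> update, all in integer arithmetic."""
    b = np.asarray(b, dtype=np.int64)
    even, odd = b[0::2], b[1::2]   # split into l_j (even) and g_j (odd)
    d = odd - even                 # predict: detail (high-frequency) signal
    s = even + (d // 2)            # update: approximation (low-frequency) signal
    return s, d

def inverse_iwt_haar_1d(s, d):
    """Exact integer inverse of the lifting step (no truncation loss)."""
    even = s - (d // 2)
    odd = even + d
    out = np.empty(even.size + odd.size, dtype=np.int64)
    out[0::2], out[1::2] = even, odd
    return out

x = np.array([10, 12, 14, 11, 9, 8, 250, 255])
s, d = iwt_haar_1d(x)
assert np.array_equal(inverse_iwt_haar_1d(s, d), x)  # lossless round trip
```

The round-trip assertion demonstrates the key property used by the scheme: unlike a floating-point DWT, the integer lifting transform inverts exactly.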

Singular Value Decomposition (SVD)
In linear algebra, SVD is a technique to factorize a rectangular matrix into three matrices: I_{m×n} = U_{m×m} D_{m×n} V^t_{n×n}

Extreme Learning Machine (ELM)
The ELM algorithm was proposed by Huang et al. 12 and is based on SLFNs. ELM overcomes pitfalls of conventional artificial neural networks such as local minima and a slow learning rate. Consider an SLFN with N input nodes, M output nodes and L hidden neurons, and take Ñ samples (x_j, t_j), where x_j is an N-dimensional input vector and t_j is an M-dimensional output vector. Two parameter sets are selected randomly: the bias b_i and the weight vector w_i connecting the input layer to the i-th hidden neuron. The output function with activation function g is

t_j = Σ_{i=1}^{L} β_i g(w_i, b_i, x_j), j = 1, ..., Ñ ... (5)

where β_i is the weight vector connecting the i-th hidden node to the output nodes and b_i is the threshold of the i-th hidden node. For additive hidden nodes the activation function is defined as

g(w_i, b_i, x) = g(w_i · x + b_i) ... (6)

where w_i · x denotes the inner product of the vectors w_i and x. For an RBF hidden node the activation function is given by

g(w_i, b_i, x) = g(b_i ||x − w_i||) ... (7)

where w_i and b_i are the center and the impact factor of the i-th RBF node. Equation (5) can be rewritten compactly as

Hβ = T ... (8)

where H is the hidden-layer output matrix and T is the target matrix. The output weights are estimated as

β = H†T ... (9)

where H† is the Moore-Penrose generalized pseudo-inverse of H 13.
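The training procedure above can be sketched in a few lines: the hidden layer is drawn at random and only the output weights β are solved for in closed form via the pseudo-inverse. A sigmoid activation and the toy target y = x1 + x2 are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, T, L):
    """Fit output weights beta = H^+ T for a random hidden layer.
    X: (n_samples, n_features), T: (n_samples, n_outputs), L: hidden neurons."""
    n_features = X.shape[1]
    W = rng.standard_normal((n_features, L))   # random input weights (never tuned)
    b = rng.standard_normal(L)                 # random biases (never tuned)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))     # sigmoid hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T               # Moore-Penrose least-squares solution
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# Toy regression check: learn y = x1 + x2 on 200 random samples.
X = rng.uniform(-1, 1, (200, 2))
T = X.sum(axis=1, keepdims=True)
W, b, beta = elm_train(X, T, L=40)
err = np.max(np.abs(elm_predict(X, W, b, beta) - T))
```

Because no gradient descent is involved, training reduces to a single matrix factorization, which is the source of ELM's speed advantage cited in the introduction.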

Essence of ELM
The basic essence of ELM is:
1) No iterative tuning is required in the SLFN; the parameters of the hidden layer are randomly chosen 12,14.

2) The least-squares method is used to calculate the output weights β between the hidden layer and the output layer 16,6.

Algorithm
The ELM algorithm is as follows. For Ñ training samples (x_i, t_i) ∈ R^N × R^M, L hidden neurons and an activation function g(x):
1) Randomly generate the input weights w_i and biases b_i, i = 1, ..., L.
2) Calculate H, the hidden-neuron output matrix.

Let us assume a host image H of size N × N and a binary watermark image w of size m × m; as it is a binary image, w_ij ∈ {0, 1}. In our case, the image H is of size 512 × 512 and the watermark w is of size 32 × 32. The embedding algorithm is as follows:

1.
The host image H is transformed through a 1-level IWT to decompose it into (LL, LH, HL, HH) sub-bands.

2.
Out of these four sub-bands (LL, LH, HL, HH), LL (lowest level) has been selected for watermark embedding as it contains maximum energy. The LL sub-band is then partitioned into 4 × 4 non-overlapping coefficient blocks.

3.
Apply SVD on each block to get the three components U_ij, D_ij and V_ij, and perform inverse SVD to get the modified coefficient block.

4.
Apply IWT on the watermarked image H' to get the LL' sub-band. By using the well-trained ELM, get the predicted label label'_ij corresponding to each D'_ij.

5.
The watermark can be extracted by using the predicted labels.
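The block-wise SVD embedding and extraction steps can be sketched as follows. The source does not give the exact modification rule applied to the singular values, so this sketch assumes a simple quantization of the largest singular value (even multiples of a step q encode bit 0, odd multiples encode bit 1); the function names and the step size q are illustrative, not the paper's:

```python
import numpy as np

def embed_bit(block, bit, q=20.0):
    """Embed one watermark bit in a 4x4 block by quantizing the largest
    singular value (an illustrative rule, not the paper's exact one)."""
    U, d, Vt = np.linalg.svd(block)
    k = int(np.floor(d[0] / q))
    if k % 2 != bit:           # force the parity of the quantization level
        k += 1
    d[0] = k * q + q / 2.0     # centre of the chosen quantization bin
    return U @ np.diag(d) @ Vt # inverse SVD: modified coefficient block

def extract_bit(block, q=20.0):
    """Read the bit back from the parity of the quantization level."""
    d = np.linalg.svd(block, compute_uv=False)
    return int(np.floor(d[0] / q)) % 2

rng = np.random.default_rng(1)
blk = rng.uniform(0, 255, (4, 4))   # stands in for one LL coefficient block
for bit in (0, 1):
    wm = embed_bit(blk, bit)
    assert extract_bit(wm) == bit
```

In the paper the extraction side is handled by the trained ELM predicting label'_ij from D'_ij rather than by re-applying a fixed rule; the sketch only shows why a small, structured change to the singular values is recoverable after inverse SVD.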

Experimental Results For Robustness Of The Proposed Watermarking Scheme
In this paper, experiments are performed on host images like Lena, Baboon, Pepper, Elaine and Jet of size 512 × 512, and a watermark logo of size 32 × 32 is used. The value of α is taken as 0.3. The performance of the watermarking algorithm is evaluated on the basis of two parameters: imperceptibility and robustness. PSNR is used to measure the quality of the watermarked image against the original host image; the higher the PSNR, the better the quality of the watermarked image. The PSNR between the original image H and the watermarked image H' is 17:
MSE = (1 / (M × N)) Σ_{i=1}^{M} Σ_{j=1}^{N} (H(i, j) − H'(i, j))² ... (13)

PSNR = 10 log10(255² / MSE) ... (14)

and BER is evaluated as

BER = (1 / (p × q)) Σ_t (w_t ⊕ w'_t) ... (15)

where w_t is the original watermark, w'_t is the extracted watermark and p × q is the size of the watermark.
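The two quality measures can be computed directly; this sketch assumes 8-bit images (peak value 255) and binary watermarks, matching the experimental setup:

```python
import numpy as np

def psnr(H, H2):
    """Peak signal-to-noise ratio in dB for 8-bit images (Eqs. 13-14)."""
    mse = np.mean((H.astype(np.float64) - H2.astype(np.float64)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)

def ber(w, w2):
    """Bit error rate between binary watermarks (Eq. 15):
    fraction of differing bits over the p x q watermark."""
    return float(np.mean(w.astype(bool) ^ w2.astype(bool)))

a = np.full((8, 8), 100, dtype=np.uint8)
b = a.copy()
b[0, 0] = 110                      # one pixel off by 10: MSE = 100/64

w  = np.array([[1, 0], [0, 1]])
w2 = np.array([[1, 0], [1, 1]])    # one of four bits wrong
print(psnr(a, b))                  # ≈ 46.19 dB
print(ber(w, w2))                  # 0.25
```

Higher PSNR means the watermarked image is closer to the host; lower BER means the extracted watermark is closer to the original.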

CONCLUSIONS
In this paper, we proposed a novel combination of IWT, SVD and ELM for authentication of ownership. In the proposed scheme, the host image is transformed into the IWT domain and the singular values of the LL sub-band are taken, where the required numerical operations are performed to embed the watermark. Watermark extraction is a two-step process: first the training of the ELM, and second the actual watermark extraction for proof of ownership. As shown in the experimental results, the proposed method is robust against various attacks, and the extracted watermark used to prove ownership is very similar to the original watermark, i.e., it has a low BER value.

2) By multiplying U, D and V^t, we can get the matrix I back ... (3). The diagonal entries d_1, d_2, …, d_n of the diagonal matrix D are called the singular values. They are related to the image luminance, while U and V, the horizontal and vertical details of the image, determine the "geometry" of the image 10. SVD is a popular method for image watermarking since the singular values are robust against various common image-processing operations and geometric transformations like scaling, rotation, translation, etc. 11
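Both properties above, exact reconstruction from U, D and V^t and the stability of the singular values under small perturbations, can be checked with a small numpy sketch (the perturbation size is an illustrative choice):

```python
import numpy as np

A = np.array([[4.0, 0.0],
              [3.0, -5.0]])
U, d, Vt = np.linalg.svd(A)        # d holds singular values d1 >= d2 >= 0
D = np.diag(d)
assert np.allclose(U @ D @ Vt, A)  # multiplying U, D, V^t recovers the matrix

# Singular values move by at most the spectral norm of the perturbation,
# which is why data embedded in them survives mild image processing.
noisy = A + 0.01 * np.ones_like(A)
d_noisy = np.linalg.svd(noisy, compute_uv=False)
assert np.max(np.abs(d_noisy - d)) < 0.1
```

The second assertion is the practical basis of SVD watermarking: mild distortions of the image perturb D only slightly, so an embedded mark in the singular values remains detectable.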

3) Calculate β, the output weight, using β = H†T.

Proposed Watermarking Scheme

This algorithm consists of three parts: watermark embedding, ELM training and watermark extraction.

Watermark Embedding

The block diagram of watermark embedding is shown in Fig. 1.

Fig. 1: Proposed watermark embedding process.
Fig. 2: ELM training process.

6. Train the ELM with the training set t k ij = (D ij, label ij), k = 1, …, 6, with i ≤ N and j ≤ N.

Watermark Extraction

The block diagram to extract the watermark is shown in Fig. 3.

Fig. 3: Proposed watermark extraction process.

The process is as follows:
1.