Pixel-based Medical Image Fusion Techniques using Discrete Wavelet Transform and Stationary Wavelet Transform

Background: Image fusion combines significant information from two or more images into a single output without artifacts or information loss, so that all the information required for accurate diagnosis is perceived and high spatial resolution is obtained together with functional and anatomical information. Methods: DWT and SWT are used with the biorthogonal 2.6 and db1 wavelets. The average fusion rule is applied to the low-frequency coefficients and the region-energy rule to the high-frequency coefficients. Results: Eight sets of real-time medical images are used for the analysis. Comparing the two methods, the stationary wavelet transform performs better than the discrete wavelet transform, since the information loss caused by down-sampling in each DWT sub-band is avoided in the SWT. Application: Easier diagnosis for physicians in the biomedical field.


Introduction
The process of combining relevant information, or some features, from two or more images into a single image is termed image fusion. The aim is to merge complementary information from images of the same scene or body part, so as to obtain an image that is more suitable for human visualization, machine perception, or further image processing and analysis. A single medical imaging modality cannot give accurate and comprehensive information, and hence medical image fusion is the main focus. The data acquired may offer either better functional information (such as PET) or higher spatial resolution (such as MRI) [1]. An MRI image depicts brain tissue anatomy with good spatial resolution but without functional information; CT is used for tumor and anatomical detection; SPECT provides functional and metabolic information; and PET shows good functional information about brain function but has low spatial resolution. Hence, image fusion is carried out to enhance the spatial resolution of the functional images by combining them with a high-resolution anatomical image.
Of late, image fusion has emerged as a promising research field [2], and it is used in many applications, such as object detection, computer vision, automatic target recognition, battlefield surveillance, remote sensing and smart buildings. It is further used in robotics, guidance and control of autonomous vehicles, monitoring of complex machinery, meteorological imaging, military applications and medical diagnosis.
With rapid advances in technology, it is possible to combine information from multi-source images to produce a high-quality fused image with both spectral and spatial information. Image fusion can be classified as 1. region based [3], 2. pixel based, and 3. decision based [5]; in this paper, pixel-based image fusion is used. Most image fusion applications adopt pixel-based methods because the images retain all the original information; furthermore, the algorithms are rather easy to implement and are time efficient. Generally, pixel-level techniques can be grouped into spatial-domain and transform-domain techniques. Since pixel-based fusion is sensitive to noise, some researchers have proposed region-based fusion methods, which group image pixels to form continuous regions. Probabilistic methods incorporate a large amount of floating-point computation to select an appropriate fusion operator and consume more time and memory [4,5].

Wavelet Transform
One of the most common forms of transform-domain image fusion is wavelet transform fusion. It provides a framework in which an image is decomposed into a series of coarse-resolution sub-bands and multilevel finer-resolution sub-bands [6]. Li et al. [7] proposed a traditional discrete-wavelet-based image fusion. However, the traditional wavelet transform has a shift-variance drawback, which led Rockinger [8] to employ a shift-invariant wavelet transform. Li et al. [9] proposed a method based on the redundant wavelet transform to overcome the shift-variance problem and to increase the reliability of fusion results. Singh et al. [10] used the same technique to fuse medical images from different sources. In order to obtain optimal fusion results, various wavelet-based fusion schemes have been tested by many researchers. Because of its simplicity and its ability to preserve both the time and frequency details of an image, the wavelet transform is the most commonly used tool in image fusion. It is defined formally by considering the wavelet transforms ω of the registered input images I1(x, y) and I2(x, y) together with a fusion rule φ, as given in Equation (1) below. The inverse wavelet transform ω^-1 is then computed and the fused image I(x, y) is reconstructed:

I(x, y) = ω^-1(φ(ω(I1(x, y)), ω(I2(x, y))))  (1)

Discrete Wavelet Transform
The discrete two-dimensional wavelet transform is computed by the recursive application of low-pass and high-pass filters in each direction of the input image (i.e., rows and columns), followed by sub-sampling. It is a transformation in which the wavelets are discretely sampled; the spatial resolution is small in the low-frequency bands but large in the high-frequency bands. In image fusion, coefficients of the same level are fused, so that fused multi-scale coefficients are obtained [11]. The 1-D wavelet decomposition is easily extended to 2-D by introducing separable 2-D scaling and wavelet functions as the tensor products of their 1-D counterparts [12]:

Φ^LL(x, y) = φ(x)φ(y)  (2)
Ψ^LH(x, y) = φ(x)ψ(y)  (3)
Ψ^HL(x, y) = ψ(x)φ(y)  (4)
Ψ^HH(x, y) = ψ(x)ψ(y)  (5)

Filtering along the rows and columns with the low-pass and high-pass filters thus yields four sets of coefficients: LL, LH, HL and HH. Finally, the low- and high-frequency coefficients are fused to obtain the resultant output. Since the DWT suffers from certain disadvantages, such as loss of edge information due to down-sampling, blurring effects and high storage cost, the stationary wavelet transform (SWT) is also applied in this paper to overcome them.
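As an illustration, one analysis level of the separable 2-D DWT and its inverse can be sketched for the Haar (db1) wavelet, whose low-pass and high-pass filters reduce to pairwise averages and differences. The function names are illustrative, and the image sides are assumed even.

```python
import numpy as np

def haar_dwt2(img):
    """One level of the separable 2-D Haar DWT: filter the rows,
    then the columns, down-sampling by 2 in each direction."""
    a = img.astype(float)
    # rows: low = pairwise average, high = pairwise difference
    lo = (a[:, ::2] + a[:, 1::2]) / 2.0
    hi = (a[:, ::2] - a[:, 1::2]) / 2.0
    # columns: split each row-filtered band again
    LL = (lo[::2] + lo[1::2]) / 2.0
    LH = (lo[::2] - lo[1::2]) / 2.0
    HL = (hi[::2] + hi[1::2]) / 2.0
    HH = (hi[::2] - hi[1::2]) / 2.0
    return LL, LH, HL, HH

def haar_idwt2(LL, LH, HL, HH):
    """Inverse of haar_dwt2 (perfect reconstruction)."""
    m, n = LL.shape
    lo = np.empty((2 * m, n)); hi = np.empty((2 * m, n))
    lo[::2], lo[1::2] = LL + LH, LL - LH
    hi[::2], hi[1::2] = HL + HH, HL - HH
    out = np.empty((2 * m, 2 * n))
    out[:, ::2], out[:, 1::2] = lo + hi, lo - hi
    return out
```

Each sub-band is a quarter of the input size, which is the down-sampling the SWT later avoids.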

Stationary Wavelet Transform
The discrete wavelet transform is not time invariant, and one way to restore translation invariance is to average several slightly shifted DWTs, i.e., to define the undecimated or Stationary Wavelet Transform (SWT). The algorithm is very simple, close to the DWT, and time invariant; it was presented in [13], and a clear description and analysis of the method is given in [14]. Time invariance is desired in many applications, such as change detection, de-noising and pattern recognition, and to overcome the limitations of the traditional wavelet transform a multi-layer SWT is used in our approach. The SWT decomposition of a two-dimensional image is given in Figure 1. In Figure 1, Lj and Hj represent the low-pass and high-pass filters at scale j, obtained by interleaved zero padding of the filters L(j-1) and H(j-1) for j > 1. Here LL0 is the original image, and the output of scale j, LLj, is the input of scale j+1. LL(j+1) denotes the low-frequency (LF) estimation after the stationary wavelet decomposition, while LH(j+1), HL(j+1) and HH(j+1) denote the high-frequency (HF) detail information along the horizontal, vertical and diagonal directions, respectively [15]. These sub-band images have the same size as the original image, because no down-sampling is performed during the transform: the down-sampling step of the decimated algorithm is suppressed, and instead the filters are up-sampled by inserting zeros between the filter coefficients.
Figure 2 is the block diagram of the multi-scale decomposition and local-texture-analysis based multi-modality DWT/SWT medical image fusion. Medical images have both low-frequency and high-frequency components. For many signals, the low-frequency content is the significant part, as it carries the identity of the signal; it is also known as the visible component. Initially the images are decomposed into low-frequency and high-frequency components using a multi-scale decomposition technique. After decomposition, the low-frequency components are fused by the averaging method and the high-frequency components by the maximization method. The inverse transform is then applied to reconstruct the fused image.
The high-frequency content gives the edge details of the signal. In wavelet analysis, the approximation and detail coefficients are obtained after filtering: the approximations are the high-scale, low-frequency components, whereas the details are the low-scale, high-frequency components.
The Daubechies wavelets are a family of orthonormal, compactly supported scaling and wavelet functions. A Daubechies wavelet has maximal regularity for a given length of the support of its quadrature mirror filters and is used mainly in de-noising applications. Because the transform uses overlapping windows, the high-frequency coefficient spectrum portrays all high-frequency changes. Consequently, these wavelets are effective for compression and noise removal in audio signal processing; a Daubechies wavelet with N vanishing moments encodes polynomials of degree up to N-1 exactly. This encoding ability is, however, subject to scale leakage and to the lack of shift invariance that originates from the discrete shifting operation in the application of the transform.

Biorthogonal Wavelet
A biorthogonal wavelet is one whose associated wavelet transform is invertible but not necessarily orthogonal. Designing biorthogonal wavelets allows more degrees of freedom than orthogonal wavelets; one additional degree of freedom is the possibility of constructing symmetric wavelet functions. With scaling sequences a and ã of length M, the biorthogonality condition is given by [11]

Σn a(n) ã(n + 2m) = 2 δ(m, 0).

Then the wavelet sequences can be given (in one common convention) as

b(n) = (-1)^n ã(M - 1 - n),  b̃(n) = (-1)^n a(M - 1 - n).

Fusion of Low Frequency Coefficients
Considering the input images, the approximate information is carried by the low-frequency coefficients, so the average rule is chosen for them. If BF(x, y) is the fused low-frequency coefficient at location (x, y), it is given by

BF(x, y) = (B1(x, y) + B2(x, y)) / 2,

where B1(x, y) and B2(x, y) denote the low-frequency coefficients of the source images.

Fusion of High Frequency Coefficients
High-frequency coefficients always carry edge and texture features. The region energy is defined as the sum of the squared coefficients in a local window. If C_l^k(x, y) is the high-frequency coefficient at location (x, y) in the sub-band of the k-th direction at the l-th decomposition level, the region energy is characterized as

E_l^k(x, y) = Σ(m, n) ∈ W [C_l^k(x + m, y + n)]^2,

where W is the local window centred at (x, y). For each location, the coefficient with the larger region energy is selected for the fused sub-band.
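The region-energy rule can be sketched as follows. The function names and the 3×3 window are illustrative choices, and reflective padding at the borders is an assumption, not something the paper specifies.

```python
import numpy as np

def region_energy(c, win=3):
    """Sum of squared coefficients over a win x win neighbourhood,
    using reflective padding at the image borders."""
    p = win // 2
    sq = np.pad(c.astype(float) ** 2, p, mode='reflect')
    e = np.zeros(c.shape, dtype=float)
    for dy in range(win):          # accumulate the shifted squared maps
        for dx in range(win):
            e += sq[dy:dy + c.shape[0], dx:dx + c.shape[1]]
    return e

def fuse_high(c1, c2, win=3):
    """Per pixel, keep the coefficient whose sub-band has the larger
    local region energy."""
    return np.where(region_energy(c1, win) >= region_energy(c2, win), c1, c2)
```

A coefficient surrounded by strong edges thus wins even if a single competing coefficient is slightly larger, which is the point of using a regional rather than a pixel-wise maximum.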

Algorithm
Step 1: Read the two input images, image I and image II, to be fused.

Objective Evaluation Metrics
Several computational image fusion quality assessment metrics have been proposed in recent years [16,17]. Although subjective visual evaluation can give intuitive comparisons, it cannot be relied upon alone, as many factors such as eyesight, mental state and even mood may influence the subjective results [18].

Root Mean Square Error
Root mean square error is one of the most widely used objective quality metrics; despite its well-known limitations, it can be helpful if used carefully [19]. It is given by

RMSE = sqrt( (1 / (M N)) Σi Σj [R(i, j) - F(i, j)]^2 ),

where R is the reference image, F is the fused image, and M × N is the image size.

Peak Signal to Noise Ratio
Human perception is the fastest way to judge the quality of an image, but the results differ from person to person. As an objective criterion for digital image quality, the Peak Signal to Noise Ratio (PSNR) is defined as

PSNR = 10 log10(255^2 / MSE),  (13)

where MSE stands for the mean-square error. The larger the PSNR, the higher the image quality; a smaller PSNR value means greater distortion between the input and the fused image.

Entropy
Information Entropy (IE) reflects the amount of information in the fused output image [20,21]. It measures the significant information of the image based on the probability of the pixel values:

IE = - Σi P(i) log2 P(i),  (14)

where P(i) is the probability of pixel value i. A higher entropy value after fusion indicates increased information content.

Correlation Coefficient
The correlation coefficient measures the similarity of the small structures between the original and reconstructed images; a higher value means more information is preserved. It lies between (0, 1) and is given by

CC = Σ(x, y) (F_ideal(x, y) - μ_ideal)(F_fused(x, y) - μ_fused) / sqrt( Σ(x, y) (F_ideal(x, y) - μ_ideal)^2 · Σ(x, y) (F_fused(x, y) - μ_fused)^2 ),

where F_ideal is the ideal (reference) image, F_fused is the fused image, and μ denotes the respective mean. The fusion results of the PET and CT images, the performance measures, and the analysis of DWT and SWT are given below; Table 1 lists the performance measures for the PET and CT images.
Figure 4 shows graphical representations of the root mean square error, peak signal to noise ratio, correlation coefficient and entropy. The X axis denotes the eight sets of images, while the Y axis gives the numerical values of the corresponding performance measures.
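The four evaluation metrics above can be sketched together; the function names are illustrative, 8-bit images are assumed for the PSNR and entropy, and identical images give RMSE 0, infinite PSNR and correlation 1.

```python
import numpy as np

def rmse(ref, fused):
    """Root mean square error between reference and fused images."""
    d = ref.astype(float) - fused.astype(float)
    return float(np.sqrt(np.mean(d ** 2)))

def psnr(ref, fused):
    """Peak signal to noise ratio, assuming 8-bit (peak 255) images."""
    m = np.mean((ref.astype(float) - fused.astype(float)) ** 2)
    return float('inf') if m == 0 else float(10 * np.log10(255.0 ** 2 / m))

def entropy(img):
    """Information entropy from the normalized 256-bin histogram."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                       # log2 is undefined at zero
    return float(-np.sum(p * np.log2(p)))

def corr_coef(ref, fused):
    """Correlation coefficient between the two images."""
    a = ref.astype(float).ravel(); b = fused.astype(float).ravel()
    a = a - a.mean(); b = b - b.mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a * a) * np.sum(b * b)))
```

An image whose 256 gray levels are all equally likely reaches the maximum entropy of 8 bits, which is a convenient sanity check.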

Result and Discussion
In this paper, fusion methods based on the discrete wavelet transform and the stationary wavelet transform are proposed and have been tested on eight sets of CT and PET image pairs obtained from Bharat Scans. The results are compared using four objective metrics for both DWT and SWT. The source images are first decomposed by the discrete or stationary wavelet transform; their low-frequency coefficients are then fused with the average fusion rule and the high-frequency coefficients with the region-energy method, and the final fused image is obtained by taking the inverse transform. The wavelet bases are Daubechies 1 and biorthogonal 2.6, and the decomposition level is two. Experimental results show that the stationary wavelet transform performs better than the discrete wavelet transform, since the information loss caused by down-sampling in each of the DWT sub-bands is avoided in the SWT.

Conclusion
Different numbers of decomposition levels for the wavelet families, i.e., bior (1.1, 1.3, 2.2) and db (3, 6), were also used for qualitative and quantitative measurement. The results are best for bior 2.6 and db1. Visual analysis shows that the stationary wavelet transform method appears better than the discrete wavelet transform. After validating the experimental results using entropy, correlation coefficient, root mean square error and peak signal to noise ratio, the statistical tools show that the stationary wavelet transform is superior to the discrete wavelet transform. It has also been observed that short filters perform better than long filters.

Step 2: Perform independent multi-scale decomposition of the two images to acquire the approximation (LL) and detail (LH, HL, and HH) coefficients.
Step 3: Apply the pixel-based rule to the approximations, i.e., compute the average of the pixel values of source images I and II.
Step 4: Concatenate the fused approximations and details to form the new coefficient matrix.
Step 5: Apply the inverse Discrete/Stationary Wavelet Transform to reconstruct the fused image.
Step 6: Display the fused output and evaluate the performance metrics.
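The steps above can be sketched end to end for a single Haar (db1) decomposition level. The names are illustrative, the image sides are assumed even, and the details are fused here with the per-pixel maximum-magnitude rule described for the block diagram; the region-energy rule from the earlier section can be swapped in.

```python
import numpy as np

def dwt2(a):
    """Step 2: one Haar analysis level (rows then columns, decimated)."""
    lo = (a[:, ::2] + a[:, 1::2]) / 2.0
    hi = (a[:, ::2] - a[:, 1::2]) / 2.0
    return ((lo[::2] + lo[1::2]) / 2.0, (lo[::2] - lo[1::2]) / 2.0,
            (hi[::2] + hi[1::2]) / 2.0, (hi[::2] - hi[1::2]) / 2.0)

def idwt2(LL, LH, HL, HH):
    """Step 5: inverse Haar transform (perfect reconstruction)."""
    m, n = LL.shape
    lo = np.empty((2 * m, n)); hi = np.empty((2 * m, n))
    lo[::2], lo[1::2] = LL + LH, LL - LH
    hi[::2], hi[1::2] = HL + HH, HL - HH
    out = np.empty((2 * m, 2 * n))
    out[:, ::2], out[:, 1::2] = lo + hi, lo - hi
    return out

def fuse(img1, img2):
    """Steps 2-5: decompose both images, average the approximations,
    keep the larger-magnitude detail, then inverse transform."""
    c1, c2 = dwt2(img1.astype(float)), dwt2(img2.astype(float))
    LL = (c1[0] + c2[0]) / 2.0                          # step 3: average rule
    details = [np.where(np.abs(d1) >= np.abs(d2), d1, d2)
               for d1, d2 in zip(c1[1:], c2[1:])]       # step 4: max rule
    return idwt2(LL, *details)
```

Fusing an image with itself returns the image unchanged, which is a quick consistency check for steps 2-5.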

Figure 3. Fusion of PET and CT Images.

Figure 4. Plot Analysis of DWT and SWT.
In Figure 3, input images A1-H1 are the CT images and A2-H2 the PET images; the fused images are A3-H3 (DWT db1), A4-H4 (DWT bior 2.6), A5-H5 (SWT db1) and A6-H6 (SWT bior 2.6). In Figure 4, the root mean square error (left) and peak signal to noise ratio (right) are plotted first, followed by the correlation coefficient (left) and entropy (right).
Amidst the transform-domain techniques, multi-scale transforms are the most frequently used, where image fusion is performed at a number of different scales and orientations. The Discrete Wavelet Transform, Undecimated Wavelet Transform, Pyramid Transform, Dual-Tree Complex Wavelet Transform, Curvelet Transform and Contourlet Transform are the multi-scale transforms usually employed.

Table 1. Performance Measures of PET and CT Images