Hybrid Fusion Approach for Alzheimer’s Disease Progression Employing IHS and Wavelet Transform

Abstract — Image fusion has become a commonly utilized technology for enhancing the medical information in brain images. Magnetic resonance imaging (MRI) depicts the morphology of brain tissue; it has great spatial resolution but lacks functional information. Positron emission tomography (PET) displays brain function well but has low spatial resolution. Hence, a fusion of the two imaging techniques will help the neurologist to accurately identify Alzheimer's disease progression. In this paper, a new fusion method that combines two transformation approaches, triangular intensity-hue-saturation (IHS) and discrete wavelet transform (DWT), is introduced. DWT is applied to the intensity component of the PET image and the smoothed version of the MRI image. The wavelet coefficients are fused using a specific fusion rule for the low- and high-frequency bands. The inverse DWT is applied to obtain a new intensity component, and the gray version is subtracted from the new intensity. The fused image is obtained by applying the inverse triangular IHS. For evaluation, quantitative measurements and statistical analyses are performed. The proposed method achieved discrepancy, average gradient, mutual information, and overall fusion performance of 7.0529, 5.3879, 0.6550, and 1.6651, respectively. The final results reveal that the proposed method achieved the highest performance compared with existing methods.

Doaa Y. Hussein, Mostafa Y. Makkey, and Shimaa A. Abdelrahman

I. INTRODUCTION

Image fusion is an approach that combines information from two imaging techniques into a single fused image [1]. In medical applications, it provides a very promising diagnostic tool for a variety of diseases. Medical images come in different forms, and each has a particular use. High-resolution anatomical images are produced by magnetic resonance imaging (MRI) and computed tomography (CT). Functional imaging techniques such as positron emission tomography (PET) are also available, but they provide fewer anatomical details and lower resolution.
To create an image that is more informative and better suited for diagnosis, information from the two modalities is combined by image fusion [2]. For Alzheimer's disease, MRI and PET are two powerful imaging techniques that provide complementary information about the brain: PET images convey information about brain function, while MRI images show the internal structure of the brain.
IHS and a retina-inspired model (RIM) were integrated to improve the functional and spatial information content [3]. Images were decomposed using non-subsampled contourlet transform (NSCT), and the resultant two images were combined using different fusion rules in [4]. This method employed a maximal energy rule to combine low-frequency band coefficients, and a maximal variance rule to combine high-frequency band coefficients. Features were extracted from PET and MRI images using a convolutional neural network [5], and the resultant weights were employed to construct a fused image. An advanced wavelet transform-based method was introduced in [6] that employed morphological processing with PCA. Discrete wavelet transform (DWT) based methods were presented to obtain the fused image in [7][8].
Existing fusion techniques [2][3][4][5][6][7][8] are studied in this paper, including pixel average, the IHS cylindrical model, Brovey, DWT, and the à-trous wavelet transform. The study reveals that some of these methods provide a high spatial intensity fused image but reduce the correlation between the original image and the fused one. Additionally, the fused image loses some important spectral color information and suffers from inaccurate color representation, artifacts, and noise. Hence, a hybrid method employing IHS and wavelet transform is proposed in this paper to improve the functional and spatial information content. IHS introduces a high spatial intensity and DWT minimizes the spectral distortion of the resultant image. The proposed method successfully preserves the original functional information with no spatial distortion compared with the existing methods. Statistical analysis and quantitative measurement of the fused image using mutual information, discrepancy, average gradient, and overall performance are utilized for results evaluation.
The rest of this paper is organized as follows. Section II describes the IHS triangular model. DWT and the fusion rules are introduced in section III. Section IV illustrates the dataset utilized to apply and evaluate the proposed method. Section V describes the methodology of the proposed hybrid IHS and DWT fusion approach. Section VI presents the results and evaluations. Finally, section VII concludes this paper.

II. IHS TRIANGULAR MODEL
The IHS triangular model [9][10][11][12] is a color space transformation that converts a red-green-blue (RGB) image into an IHS image, as shown in Fig. 1. The PET image contains both the intensity and the color information (hue and saturation). Hence, the IHS model is employed in the proposed method to separate the intensity information from the color information. This separation allows for the manipulation of the intensity channel independently of the color channels, which is useful in image fusion. The intensity, hue, and saturation components and the inverse transformation of these components can be calculated as in (1-16), [2], [9].
The intensity component is computed as the mean of the three color components, I_C = (R_C + G_C + B_C) / 3, where R_C, G_C, and B_C are the red, green, and blue color components, respectively. The hue component H_C and the saturation component S_C are then computed piecewise, with the formula selected by whichever color component has the minimum value (for example, the red case applies when R_C < G_C and R_C < B_C, and the green case when G_C < R_C and G_C < B_C). The range of I_C, H_C, and S_C is from 0 to 1.
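As an illustration, a numpy-only sketch of the forward triangular IHS transform is given below. This follows one common formulation of the triangle model (intensity as the mean, saturation from the minimum component, hue piecewise by sector scaled into [0, 1]); the exact piecewise equations (1)-(16) of the paper may differ in detail, so treat the hue/saturation branches as assumptions.

```python
import numpy as np

def rgb_to_ihs_triangular(rgb):
    """Convert an RGB image (floats in [0, 1]) to triangular IHS.

    A common formulation of the triangle model; the piecewise hue and
    saturation formulas here are assumptions, not the paper's exact
    equations (1)-(16).
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    i = (r + g + b) / 3.0                       # intensity: mean of R, G, B
    eps = 1e-12                                 # guards divisions by zero
    mn = np.minimum(np.minimum(r, g), b)        # minimum color component
    s = 1.0 - mn / (i + eps)                    # saturation in [0, 1]
    # Hue: piecewise by which component is the minimum, scaled to [0, 1].
    h = np.zeros_like(i)
    bmin = (b <= r) & (b <= g)                  # blue is minimum  -> sector 0
    rmin = (r < b) & (r <= g)                   # red is minimum   -> sector 1
    gmin = ~bmin & ~rmin                        # green is minimum -> sector 2
    denom = 3.0 * (i - mn) + eps
    h[bmin] = ((g - b)[bmin] / denom[bmin]) / 3.0
    h[rmin] = ((b - r)[rmin] / denom[rmin] + 1.0) / 3.0
    h[gmin] = ((r - g)[gmin] / denom[gmin] + 2.0) / 3.0
    return i, h, s
```

With this convention, pure red maps to hue 0, pure green to 1/3, and pure blue to 2/3, so all three components stay in the stated [0, 1] range.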

The inverse IHS transform maps I_C, H_C, and S_C back to R_C, G_C, and B_C, again using a piecewise formulation determined by which color component has the minimum value (blue, red, or green).

III. DWT AND FUSION RULES

The DWT-based image fusion approach [8] fuses the MRI image with the intensity component of the PET image. Fusion of the DWT coefficients is obtained by applying certain image fusion rules, including the maximum, minimum, average, and weighted average rules. These rules determine which coefficients to retain in the new intensity image based on their magnitudes. All of these fusion rules are studied, and the final results reveal that the maximum and weighted average rules are the most appropriate ones for the proposed method. At each transformation scale, the detail coefficient with the highest absolute value is selected. This is followed by a local morphological procedure, which confirms the chosen pixels through a filling and cleaning operation as shown in Fig. 2. This operation either fills or eliminates isolated pixels locally to enhance the uniformity of coefficient selection, thereby minimizing distortion in the new intensity image. For our purpose, the shaded pixel is taken from the MRI image, and the white pixel is taken from the intensity of the PET image. The maximum level of DWT decomposition, denoted as L_Decom, depends on the size of the input image, as expressed in (17), [8].
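The absolute-maximum selection with a local consistency cleanup can be sketched in numpy as follows. The 3×3 majority filter below is a simple stand-in for the paper's filling-and-cleaning operation of Fig. 2, not the exact morphological procedure.

```python
import numpy as np

def fuse_details_max(c_mri, c_pet):
    """Fuse two same-size detail subbands by absolute-maximum selection,
    then clean the binary decision map with a 3x3 majority filter
    (a stand-in for the paper's filling/cleaning operation)."""
    take_mri = np.abs(c_mri) >= np.abs(c_pet)   # initial decision map
    # Majority vote over each 3x3 neighborhood (edge-padded):
    padded = np.pad(take_mri.astype(int), 1, mode='edge')
    votes = np.zeros(take_mri.shape, dtype=int)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            votes += padded[1 + dy:padded.shape[0] - 1 + dy,
                            1 + dx:padded.shape[1] - 1 + dx]
    cleaned = votes >= 5                        # majority of the 9 votes
    return np.where(cleaned, c_mri, c_pet)
```

An isolated pixel whose neighborhood disagrees with it is overruled by the vote, which is exactly the "fill or eliminate isolated pixels" behavior the text describes.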
L_Decom = log2( min(M, N) / min(m_o, n_o) ),    (17)

where the dimensions of the input image are represented by M and N, while m_o and n_o denote the dimensions of the image transformed by DWT at the highest scale, and the term 'min' selects the smaller value.
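Assuming the reconstruction of (17) reads L_Decom = log2(min(M, N) / min(m_o, n_o)), the level count is simply how many times the smaller image dimension can be halved before reaching the coarsest-scale size:

```python
import math

def max_dwt_levels(M, N, m_o, n_o):
    """Maximum DWT decomposition depth per the reconstructed eq. (17):
    L_Decom = log2(min(M, N) / min(m_o, n_o))."""
    return int(math.log2(min(M, N) / min(m_o, n_o)))
```

For example, decomposing a 256 x 256 image down to a 16 x 16 coarsest scale gives 4 levels.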

IV. DATASET
In this paper, the utilized dataset consists of 24 color PET images and 24 high-resolution MRI brain images that are registered together; all images are downloaded from the Harvard University website [10]. This dataset is divided into four categories: normal coronal, normal sagittal, normal transaxial, and Alzheimer's disease images. The PET images, whose three RGB bands reflect metabolic processes in the brain, are resized to 256 × 256 pixels to maintain uniform conditions, while the MRI images are high-resolution grayscale images. Fig. 3 displays a sample of the utilized dataset. The dataset is divided into four groups: dataset 1 for normal axial, dataset 2 for normal coronal, dataset 3 for normal sagittal, and dataset 4 for Alzheimer's disease brain images.

V. METHODOLOGY
The proposed approach applies a DWT to the intensity component of the PET image and to the smoothed version of the MRI image to acquire the wavelet coefficients. These coefficients are fused using a distinct fusion rule for the low- and high-frequency bands. An inverse DWT is then performed to obtain the new intensity image, which is enhanced by subtracting the grayscale MRI image from it; this step helps to greatly improve the spectral color information. Ultimately, the final image is produced by applying the inverse triangular IHS model to the new intensity component along with the hue and saturation components of the PET image. The main steps of the proposed method are shown in Fig. 4.

A. Preprocessing
A preprocessing step is required for accurate fusion, which consequently enhances the identification of Alzheimer's disease progression. The primary region of interest in MRI and PET images is the medial temporal lobe, which contains the hippocampus and the entorhinal cortex. Therefore, a preprocessing step is proposed to remove the outer framework (the bones and layers surrounding the brain), as shown in Fig. 5.

B. Hybrid Fusion
A hybrid fusion method is proposed by combining IHS and DWT. DWT is applied to the preprocessed MRI image to obtain the low- and high-frequency bands. On the other side, the resized PET image is converted from the RGB model to the IHS triangular model to obtain the three components I, H, and S individually. The intensity component is also passed through the wavelet transform to obtain its low- and high-frequency bands. The low-frequency bands from the MRI and PET images are combined using a weighted average fusion rule, as illustrated in (18), [8]:

C_F = a_1 C_Intensity + a_2 C_NEW MRI,    (18)

where C_F represents the fused coefficients, and C_Intensity and C_NEW MRI are the low-frequency bands from the input images. The effect of the parameters a_1 and a_2 on the dataset has been studied. The results reveal that if a large weight is given to the MRI image, more spatial resolution is preserved in the new intensity image.
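The hybrid rule can be sketched end to end with a one-level Haar DWT written directly in numpy: the weighted average of (18) on the low band, and absolute-maximum selection (section III) on the high bands. This is an illustrative sketch assuming even image dimensions, not the paper's exact wavelet or weight choice.

```python
import numpy as np

def haar_dwt2(x):
    """One-level 2-D Haar DWT (image dimensions assumed even)."""
    a = (x[0::2] + x[1::2]) / 2.0               # row averages
    d = (x[0::2] - x[1::2]) / 2.0               # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0        # low-low band
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Exact inverse of haar_dwt2."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2], x[1::2] = a + d, a - d
    return x

def fuse_intensity(i_pet, mri, a1=0.5, a2=0.5):
    """Weighted average on the low band per eq. (18), absolute-maximum
    selection on the high bands, then inverse DWT."""
    ll_p, lh_p, hl_p, hh_p = haar_dwt2(i_pet)
    ll_m, lh_m, hl_m, hh_m = haar_dwt2(mri)
    ll_f = a1 * ll_p + a2 * ll_m                # eq. (18), low band
    pick = lambda p, m: np.where(np.abs(m) >= np.abs(p), m, p)
    return haar_idwt2(ll_f, pick(lh_p, lh_m),
                      pick(hl_p, hl_m), pick(hh_p, hh_m))
```

With a1 = a2 = 0.5, the sketch matches the "two approximately equal weights" choice discussed in the text; raising a2 keeps more MRI spatial detail, raising a1 keeps more PET spectral content.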
On the other hand, if a large weight is assigned to the intensity of the PET image, more spectral color information is obtained. Hence, two approximately equal weights are assigned to both images. Additionally, these weights are more significant for Alzheimer's disease images than for normal brain images. Maximum selection is applied to the high-frequency bands, and an inverse discrete wavelet transform is then applied to obtain the new intensity image. After that, the inverse IHS triangular model is applied to the new intensity image together with the hue and saturation components of the PET image.

VI. RESULTS AND EVALUATION

For evaluation, two criteria, statistical and visual analysis, are utilized to quantitatively measure the fusion performance. The proposed method is compared with the existing methods, including pixel average, the IHS cylindrical model, Brovey, DWT, and the à-trous wavelet transform, as shown in Fig. 7. It is obvious that the proposed hybrid method has the least distorted color information and clearer spatial details compared with the existing fusion techniques. For statistical analysis, metrics including average gradient, discrepancy, mutual information, and overall fusion performance [11] are determined.

A. Discrepancy
Discrepancy is an essential metric for assessing the quality of fused images produced by image fusion algorithms. It calculates the difference in pixel values between the original images and the fused result, as in (19), [3]:

D_i = (1/N) Σ_(x,y) | F_i(x, y) − O_i(x, y) |,    (19)

where D_i is the discrepancy for the i-th color component (i = R_C, G_C, or B_C), N refers to the total number of pixels in the input images, F_i(x, y) refers to the pixel values of the fused image, and O_i(x, y) represents the pixel values of the original image (PET or MRI). A lower discrepancy value indicates a better-quality fused image, meaning that the similarity between the fused image and the input images is high.
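The reconstructed form of (19), the mean absolute pixel difference per channel, is a one-liner:

```python
import numpy as np

def discrepancy(fused, original):
    """Mean absolute pixel difference between a fused channel and the
    corresponding original channel, per the reconstructed eq. (19)."""
    fused = np.asarray(fused, dtype=float)
    original = np.asarray(original, dtype=float)
    return np.abs(fused - original).mean()
```

A perfect fusion against one source gives 0; each unit of average intensity error raises the value by one.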

B. Average Gradient
The average gradient indicates the sharpness of the fused image. It is calculated as the mean of the gradient magnitudes of the fused image, AG_i = mean of sqrt((G_X^2 + G_Y^2) / 2); a higher average gradient value indicates sharper edges and better preservation of the spatial details in the fused image. The gradient components in the x and y directions (G_X and G_Y) are computed as in (20-26), [3], [11], where AG_i refers to the average gradient of the fused image, G_X is the gradient in the x direction, and G_Y is the gradient in the y direction. G_X and G_Y are calculated by applying the Sobel operator to F(x, y), the pixel value at position (x, y) in the fused image.
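A numpy sketch of the metric follows, with the Sobel responses written out as array slices over the interior pixels. The sqrt((G_X^2 + G_Y^2)/2) normalization is the common convention assumed here; the paper's exact equations (20)-(26) are not recoverable from the text.

```python
import numpy as np

def average_gradient(img):
    """Average gradient via Sobel derivatives (interior pixels only):
    AG = mean of sqrt((Gx^2 + Gy^2) / 2)."""
    f = np.asarray(img, dtype=float)
    # Sobel x response: right column minus left column, center-weighted.
    gx = (f[:-2, 2:] + 2 * f[1:-1, 2:] + f[2:, 2:]
          - f[:-2, :-2] - 2 * f[1:-1, :-2] - f[2:, :-2])
    # Sobel y response: bottom row minus top row, center-weighted.
    gy = (f[2:, :-2] + 2 * f[2:, 1:-1] + f[2:, 2:]
          - f[:-2, :-2] - 2 * f[:-2, 1:-1] - f[:-2, 2:])
    return np.sqrt((gx ** 2 + gy ** 2) / 2.0).mean()
```

A constant image scores 0, and sharper edges push the score up, matching the interpretation given in the text.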

C. Mutual Information
Mutual information evaluates the quality of fused images by measuring the information that two images share with one another, such as the PET and MRI images. A higher mutual information value indicates a better fusion result, as it means that the fused image contains more information from both original images, as in (27-29), [3].
Here MI(F, MRI) is the mutual information between the fused image F and the MRI image, and MI(F, PET) is defined analogously.
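The definition MI(F, O) = Σ P(F, O) log2( P(F, O) / (P(F) P(O)) ) can be computed from a joint histogram of the two images. The bin count below is an assumed free parameter, not one specified by the paper.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """MI between two grayscale images from their joint histogram:
    MI(F, O) = sum P(F, O) * log2( P(F, O) / (P(F) P(O)) )."""
    joint, _, _ = np.histogram2d(np.ravel(a), np.ravel(b), bins=bins)
    p_ab = joint / joint.sum()                  # joint distribution
    p_a = p_ab.sum(axis=1, keepdims=True)       # marginal of a (column)
    p_b = p_ab.sum(axis=0, keepdims=True)       # marginal of b (row)
    nz = p_ab > 0                               # avoid log(0)
    return float((p_ab[nz] * np.log2(p_ab[nz] / (p_a @ p_b)[nz])).sum())
```

The overall score used for evaluation combines the two pairs, MI(F, PET) and MI(F, MRI), as referenced in (28) and (29).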

D. Overall Image Fusion Performance
The overall performance is measured based on the discrepancy D_i and the average gradient AG_i. If the fusion technique produces a small overall performance value (OP), then the fused image has greater overall fusion quality. It can be described as in (30), [3]. A comparison between the proposed fusion method and the existing methods employing four different datasets is summarized in Table I

As a first step, the MRI and PET images are resized to 256×256 pixels. The main steps of the proposed preprocessing, shown in Fig. 5, include:
1) Converting the PET image into a binary image.
2) Filling the holes of the PET binary image to obtain a mask.
3) Applying morphological operations to clean up the mask.
4) Multiplying the mask by the MRI image to obtain the segmented MRI with the original pixel values.
5) Applying a Gaussian filter to obtain the smoothed MRI, as shown in Fig. 6.
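The steps above can be sketched in numpy as follows. The threshold is an assumed binarization level, hole filling is done with an iterative border flood fill, and a 3×3 box blur stands in for the paper's Gaussian filter; the morphological cleanup of step 3 is folded into the hole filling for brevity.

```python
import numpy as np

def preprocess_mri(pet_gray, mri, thresh=0.05):
    """Sketch of the preprocessing: binarize the PET image, fill holes
    in the mask, apply the mask to the MRI, and smooth the result.
    thresh and the box blur are assumptions, not the paper's values."""
    binary = pet_gray > thresh                  # step 1: binary PET
    bg = ~binary
    # Step 2: hole filling -- background reachable from the border
    # stays background; enclosed background becomes foreground.
    outside = np.zeros_like(binary)
    outside[0, :], outside[-1, :] = bg[0, :], bg[-1, :]
    outside[:, 0], outside[:, -1] = bg[:, 0], bg[:, -1]
    while True:                                 # iterative flood fill
        grown = outside.copy()
        grown[1:, :] |= outside[:-1, :]
        grown[:-1, :] |= outside[1:, :]
        grown[:, 1:] |= outside[:, :-1]
        grown[:, :-1] |= outside[:, 1:]
        grown &= bg
        if (grown == outside).all():
            break
        outside = grown
    mask = ~outside                             # steps 2-3: filled mask
    masked = mri * mask                         # step 4: segmented MRI
    # Step 5: smooth with a 3x3 box blur (edge-padded).
    p = np.pad(masked, 1, mode='edge')
    smoothed = sum(p[1 + dy:p.shape[0] - 1 + dy, 1 + dx:p.shape[1] - 1 + dx]
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
    return smoothed, mask
```

A hole fully enclosed by the brain region is kept inside the mask, while everything connected to the image border is removed, which is the behavior steps 2-4 describe.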
MI(F, O) is the mutual information between images F and O, computed as MI(F, O) = Σ P(F, O) log2( P(F, O) / (P(F) P(O)) ), where P(F, O) is the joint probability distribution of the pixel intensities in images F and O, and P(F) and P(O) are the marginal probability distributions of the pixel intensities in images F and O, respectively. To calculate the MI between the fused image F and the PET and MRI images, the MI values for both pairs (F, PET) and (F, MRI) are computed as in (28) and (29).

to Table IV. It is obvious from the results that the proposed method successfully fused the MRI and PET images, achieving the lowest mean D_i, the highest mean AG_i, the lowest OP, and the highest mean MI.

TABLE I: THE FUSION METHODS FOR ALZHEIMER'S DISEASE DATASET 1

TABLE II: THE FUSION METHODS FOR CORONAL NORMAL BRAIN DATASET 2

TABLE III: THE FUSION METHODS FOR AXIAL NORMAL BRAIN DATASET 3

TABLE IV: THE FUSION METHODS FOR SAGITTAL NORMAL BRAIN DATASET 4