1. Introduction
Scientists have been studying the brain and nervous system for many years. Advances in our understanding of the brain contribute to improved health, save lives, and lower medical costs. Despite significant progress, much about the brain remains unknown [1] (pp. 4–21). Neuroscience is divided into various subfields to facilitate the study of the brain. One important area is neuroimaging, which employs different imaging techniques to explore the structure and function of the nervous system in a non-invasive manner. Each technique reveals distinct aspects of the nervous system [2] (pp. 459–469).
Neuroimaging techniques include structural and functional imaging. Structural imaging provides a visual representation of the brain’s anatomy, helping doctors and researchers examine brain tissues, fluids, fat, lesions, and more. Common examples of structural imaging include Computed Tomography (CT) and Magnetic Resonance Imaging (MRI). MRI can be utilized in various forms, such as T1-weighted, T2-weighted, Proton Density (PD), Fluid-Attenuated Inversion Recovery (FLAIR), Diffusion-Weighted Imaging (DWI), and Diffusion Tensor Imaging (DTI) [3] (pp. 411–417). In contrast, functional imaging reveals how the brain operates. It tracks brain activity, blood flow, metabolism, and other changes that occur in response to specific tasks or during periods of rest. Common examples of functional imaging include Functional Magnetic Resonance Imaging (fMRI), Positron Emission Tomography (PET), and Single-Photon Emission Computed Tomography (SPECT) [4] (pp. 486–488).
Neuroimaging techniques are employed in a process called brain mapping, which aids in studying the structure and function of the nervous system. Brain mapping is analogous to creating geographical maps. Just as a map of a city helps us understand its layout and organization, a brain map allows us to comprehend the arrangement of the brain. To create these maps, scientists utilize systems such as coordinate frameworks, naming hierarchies, and various imaging modalities to represent the brain from different perspectives. One of the primary objectives of brain mapping is to establish standard templates that define the outlines of different brain regions [5].
Brain templates provide a standardized three-dimensional (3D) framework for analyzing brain data. These templates are constructed from one or more individual brains and can represent both structural and functional characteristics. By creating templates from a group of brains, researchers can uncover details that may be obscured in a single brain due to noise or individual variability [6]. These templates serve as a common reference space, allowing researchers to spatially normalize individual scans for group comparisons and statistical analyses [6,7]. Additionally, they facilitate brain tissue segmentation and the labeling of regions of interest. Figure 1 illustrates examples of brain templates created from various imaging modalities, including T1- and T2-weighted MRI, PD [8], PET [9], FLAIR, and DTI [10].
One of the earliest brain templates was the Talairach and Tournoux atlas, created in 1988. This atlas was based on a set of hand-drawn images of the right hemisphere derived from postmortem sections of a 60-year-old French female’s brain [11]. While it played a foundational role in brain mapping, its limitations—such as being based on a single brain and lacking digital precision—reduced its generalizability.
One of the first widely adopted digital brain templates was developed by the Montreal Neurological Institute (MNI) in 1993, identified as MNI-305. This template was constructed by averaging MRI scans from 305 young, healthy, right-handed Caucasian subjects [12]. Following the development of the MNI-305, the International Consortium for Brain Mapping (ICBM) developed the ICBM-152 template in 2001, using MRI scans from 152 Caucasian adults [13]. In 2003, the ICBM-452 template was introduced, created from a larger and more ethnically diverse sample, which improved its signal-to-noise ratio (SNR) [14]. However, since the majority of the subjects were Caucasian, there are ongoing concerns about the generalizability of these templates to non-Western populations.
To address the limitations of general-purpose brain templates, several population-specific templates have been developed to enhance the accuracy of brain mapping. For instance, the Chinese56 template was created in 2010 using scans from 56 young Chinese males. This template exhibited morphological differences compared with the ICBM-152 template, resulting in reduced deformation during registration [15]. Similarly, the Indian-157 template was established in 2018 from scans of 157 Indian participants and demonstrated better alignment for Indian scans compared with population-mismatched templates [16]. Other noteworthy templates include BRAHMA, which is based on T1- and T2-weighted MRI and FLAIR scans from 113 Indian subjects [17] and showed accurate segmentation. In 2020, two additional templates were introduced: one based on Caucasian data (US200) and another based on Chinese data (CN200). Both templates showed improved tissue segmentation and registration accuracy when used with population-matched scans [18]. Furthermore, the Chinese-PET templates were developed in 2021 using 116 PET scans of healthy Chinese participants. This template enhanced brain function analysis by minimizing deformation during registration [19].
Templates have become increasingly representative of specific populations when considering factors such as gender and age. The more tailored the population, the more representative the template is. For example, the Indian brain template (IBA100) takes into account both gender and nationality [20]. Several templates also incorporate age along with nationality. Notable examples include the Chinese-2020 template [21], the Chinese-children template [22], the Korean Normal Elderly template (KNE96) [23], the Indian Brain Template (IBT) [24], the Oxford-MultiModal-1 (OMM-1) template [25], the Chinese-babies template [26], and the preterm and term-born brain templates [27]. Additionally, some templates take into account nationality, gender, and age during their construction. Examples include the Korean template [28], the Chinese-1000 template [29], and the Chinese-pediatric template (CHN-PD) [30].
We have observed that numerous brain templates have been constructed for various populations; however, to the best of our knowledge, none have been specifically developed for the Saudi population. Therefore, the aim of this work is to construct a structural brain template for Saudis using T1-weighted MRI scans. To guide our selection of an appropriate methodological approach, we first review relevant studies from a computational perspective. This includes an analysis that supports our methodological choices, followed by a statement of our contributions, which is presented in Section 2, Related Work. Section 3, Materials and Methods, describes the experimental setup and procedures. It is organized into subsections detailing the Dataset (Section 3.1), the Preprocessing steps (Section 3.2), the Methodology for Template Construction (Section 3.3), and the Evaluation Methods (Section 3.4) that we used to assess our approach. Section 4, Results, presents our experimental findings, followed by Section 5, Discussion, which interprets these findings in relation to existing literature and highlights their implications. We also discuss the limitations of our study and propose directions for future research. Finally, Section 6, Conclusions, summarizes the key contributions of this work, and a list of symbols is provided at the end for reference.
2. Related Work
Numerous studies have utilized various methods to create templates, each driven by specific objectives. However, these methods share some common goals: achieving unbiasedness, ensuring high-quality images with sharpness, contrast, and robustness to outliers, and maintaining computational efficiency. Achieving unbiasedness means that the templates do not overly resemble any individual, whether in shape (structure), appearance (intensity), or both. This ensures that the template accurately represents the general population rather than being skewed toward specific individuals. Creating high-quality template images is crucial for accurate image registration, segmentation, and subsequent analysis. Key considerations include sharpness, which defines the clarity of edges and fine details, and contrast, which refers to the intensity differences between tissues. Equally important is robustness to outliers, which minimizes the impact of intensity variations caused by registration errors, normalization inaccuracies, or other factors. In the context of fusing aligned images to create a template, this robustness ensures that such variations do not disproportionately influence the resulting image. Finally, enhancing the computational efficiency of template construction is vital to improving the practicality of this process, particularly in large-scale studies. In the following section, we will review these studies, highlighting the specific techniques they employed to achieve their respective objectives. Additionally, Table 1 summarizes these studies and the techniques they utilized.
In 2003, Rueckert et al. [31] constructed unbiased templates from 25 T1-weighted MRI scans using statistical deformation models (SDMs) and non-rigid registration techniques. These methods ensured that the average anatomical representation reflected population variability. In 2004, Jongen et al. [32] developed an average brain image from 96 CT scans through a two-step process. First, they created a temporary average based on a subset of images. Then, they performed iterative registration of all images to this temporary average until convergence. Also in 2004, Joshi et al. [33] created unbiased templates from T1-weighted MRI scans of 50 subjects by iteratively minimizing the dissimilarity of both deformation and intensity between the population images and the average. In 2006, Christensen et al. [34] proposed a method that employs inverse consistent image registration to minimize correspondence errors, ultimately producing unbiased population average estimates from 22 T1-weighted MRI scans. Instead of merely averaging the intensities of population images mapped into a single reference space, they enhanced template sharpness by transforming the reference into the population space and averaging the resulting transformations. In 2008, Noblet et al. [35] introduced a symmetric non-rigid image registration method for constructing an average image template using 15 T1-weighted MRI scans. Their approach involved performing pairwise registrations and centering the resulting template by ensuring that the sum of all deformation fields equals zero. This method is both computationally and memory-efficient, as it relies exclusively on pairwise registrations and assumes that the deformation fields are invertible. In 2010, Avants et al. [36] applied symmetric group-wise normalization (SyGN) to T1-weighted MRI scans from 16 subjects to construct an optimal template that was unbiased in both shape and appearance within diffeomorphic space.
Also in 2010, Coupé et al. [37] improved templates constructed using 20 T1-weighted MRI scans by enhancing robustness to outliers, alongside sharpness and contrast. They replaced the simple voxel-wise averaging method with a patch-based median intensity estimation within the minimum deformation template (MDT) algorithm [38], which better tolerates incorrect data values than mean-based approaches. The MDT algorithm, made publicly available in 2011 by Fonov et al. [38], was used to construct unbiased templates from 542 T1-, T2-, and PD-weighted MRI scans. Their iterative method, building on earlier works [39,40,41], aimed to minimize the mean squared differences in deformations and intensities between the template and the population at each iteration. To enhance sharpness and preserve anatomical detail, they incorporated the Automatic Nonlinear Image Matching and Anatomical Labeling (ANIMAL) algorithm [42]. In 2014, Zhang et al. [43] proposed the Volume-based Template Estimation (VTE) method using T1-weighted MRI scans from 42 subjects. This method is based on Bayesian estimation within a diffeomorphic random orbit model, which preserves the topology of brain structures and maintains image contrast without requiring cross-subject intensity averaging. In 2017, Yang et al. [44] addressed the issue of robustness to outliers while also improving sharpness and contrast in the construction of diffusion MRI templates using data from 20 subjects. They replaced traditional voxel-wise averaging with a patch-based mean-shift algorithm in wave-vector space, commonly referred to as q-space. The mean-shift algorithm [45] seeks the mode of the data distribution, providing a more robust alternative to conventional voxel-wise averaging. In 2018, Schuh et al. [46] employed a group-wise construction method to build unbiased templates from 275 T2-weighted MRI scans.
Their approach involved global affine normalization, followed by deformable registration using the stationary velocity free-form deformation (SVFFD) algorithm. They enhanced sharpness through topology-preserving alignment, utilized fewer brain images per template, and applied a Laplacian sharpening filter as a post-processing step. Notably, their method achieved linear computational scalability, which contrasts with the quadratic scalability of other approaches. Also in 2018, Parvathaneni et al. [47] developed unbiased cortical surface templates from T1-weighted MRI scans of 41 subjects. They incorporated the covariance matrix from the feature space as prior knowledge in their weighting strategy, which effectively down-weights similar subjects. This allowed them to capture greater population variation while maintaining unbiasedness. In 2019, Dalca et al. [48] introduced a learning-based approach for constructing templates using convolutional neural networks (CNNs) trained on the MNIST dataset [49], 11 classes from the Google QuickDraw dataset [50], and 7829 T1-weighted MRI scans. Their method leveraged shared information across these datasets to generate unbiased population templates conditioned on combinations of features such as age, gender, and disease status. They achieved sharpness by learning image representations that minimize spatial deformations. Unlike traditional iterative methods, which can be expensive, their approach learned a function to generate templates on demand without requiring manual data partitioning. In 2020, Ridwan et al. [51] constructed unbiased templates from 222 T1-weighted MRI scans using the widely adopted iterative technique as outlined by Joshi et al. [33], Fonov et al. [38], and Guimond et al. [52]. They ensured template sharpness by incorporating high-quality scans and ensuring accurate spatial matching during the construction process.
Also in 2020, Wang et al. [53] proposed a symmetric model construction (SMC) approach to generate unbiased templates from four synthetic images, 20 synthetic 3D volumes, and 20 T1-weighted MRI scans. By avoiding the use of an initial reference, their method directly determined the final unbiased template structures. To enhance sharpness, they eliminated the blurring effects typically introduced by mathematical averaging and minimized differences in both intensity and gradient information between the template and the population. This approach reformulated the registration challenge into a series of pairwise registration problems, reducing the computational cost to O(N), where N is the total number of images. In 2023, Gu et al. [54] constructed templates from 646 T1-weighted MRI scans by incorporating deep learning (DL) techniques. They improved template sharpness using DL-mapping for image enhancement, employing CNNs with ResBlock modules. For computational efficiency, they utilized a fast DL-based registration method [55] to accelerate inter-subject registration during the template construction process. Finally, in 2023, Arthofer et al. [25] constructed unbiased templates from 240 multimodal MRI scans (T1, T2-FLAIR, and DTI) using an iterative unbiased approach described by Fonov et al. [38]. To avoid bias toward any initial reference, they computed an unbiased affine template by determining the mid-space across all subjects. They enhanced template sharpness and contrast by applying voxel-wise median calculations during the construction process.
Numerous studies have proposed methods for constructing templates, often aiming to achieve one or more of the following goals: unbiasedness, high image quality (including sharpness, contrast, and robustness to outliers), and computational efficiency. The SMC approach introduced by Wang et al. [53] provides a non-iterative, computationally efficient method for obtaining an unbiased structural template by leveraging the symmetry present in datasets. However, this assumption of symmetry can be problematic in real-world datasets, where population asymmetries may introduce bias. A similar bias issue was addressed by Parvathaneni et al. [47], who proposed a feature-based weighting scheme to down-weight contributions from over-represented data points when constructing cortical surface templates. This strategy inspired us to adopt a comparable weighting scheme within the SMC framework to enhance unbiasedness.
While the SMC method yields an unbiased structural template, it does not estimate the template intensities. To address this, we first align the population images to the template and then fuse their intensities to create the final template image. We observed that some studies improved template image quality through post-processing or by utilizing advanced techniques for fusing aligned population images. Previous works such as Coupé et al. [37] and Yang et al. [44] focused on enhancing template image quality (in terms of sharpness, contrast, and robustness to outliers) during the fusion of aligned population images. For instance, Coupé et al. [37] employed a patch-based median intensity estimation within the MDT algorithm [38] for T1-weighted MRI, while Yang et al. [44] applied a patch-based mean-shift algorithm in q-space to construct diffusion MRI templates. These methods inspired us to use patch-based intensity estimation, drawing on both approaches, and specifically tailored for T1-weighted MRI scans.
Furthermore, prior work by Miolane et al. [56] has highlighted the trade-off between achieving unbiasedness and preserving sharpness in a single population template. They recommended constructing multiple templates for homogeneous subgroups to mitigate this issue. In line with this recommendation, we selected a highly homogeneous subset of T1-weighted MRI scans from Saudi subjects. This choice aims to improve anatomical sharpness in the resulting template while maintaining unbiasedness.
Table 1. Summary of studies on template construction, presenting key information regarding their publication year, datasets used, and the specific approaches employed to ensure unbiasedness, image quality, and/or computational efficiency.
Year | Study | Dataset | Unbiasedness | Image Quality: Sharpness | Image Quality: Contrast | Image Quality: Robustness | Efficiency
---|---|---|---|---|---|---|---
2003 | Rueckert et al. [31] | 25 T1 MRI scans | SDMs + non-rigid registration | – | – | – | –
2004 | Jongen et al. [32] | 96 CT scans | – | – | – | – | Two-step iterative average construction
2004 | Joshi et al. [33] | T1 MRI scans of 50 subjects | Iterative minimization of deformation and intensity dissimilarity | – | – | – | –
2006 | Christensen et al. [34] | 22 T1 MRI scans | Inverse consistent image registration | Averaging reference transformations | – | – | –
2008 | Noblet et al. [35] | 15 T1 MRI scans | – | – | – | – | Symmetric pairwise non-rigid registration with invertible fields
2010 | Avants et al. [36] | T1 MRI scans of 16 subjects | SyGN method | – | – | – | –
2010 | Coupé et al. [37] | 20 T1 MRI scans | MDT algorithm | Patch-based median estimation | Patch-based median estimation | Patch-based median estimation | –
2011 | Fonov et al. [38] | 542 T1, T2, and PD MRI scans | MDT algorithm | ANIMAL registration algorithm | – | – | –
2014 | Zhang et al. [43] | T1 MRI scans of 42 subjects | – | VTE method | VTE method | – | –
2017 | Yang et al. [44] | Synthetic + diffusion MRI of 20 subjects | – | Patch-based mean-shift algorithm | Patch-based mean-shift algorithm | Patch-based mean-shift algorithm | –
2018 | Schuh et al. [46] | 275 T2 MRI scans | Group-wise method | Topology-preserving alignment, Laplacian sharpening | – | – | Linear scaling
2018 | Parvathaneni et al. [47] | T1 MRI scans of 41 subjects | Feature-space covariance weighting | – | – | – | –
2019 | Dalca et al. [48] | MNIST + QuickDraw + 7829 T1 MRI scans | Leveraging shared information | Reducing spatial deformations | – | – | Function to generate templates on demand
2020 | Ridwan et al. [51] | 222 T1 MRI scans | Unbiased iterative technique | High-quality scans and accurate spatial matching | – | – | –
2020 | Wang et al. [53] | 4 synthetic images + 20 synthetic 3D volumes + 20 T1 MRI scans | SMC approach | Minimization of intensity/gradient dissimilarity | – | – | O(N) pairwise registrations
2023 | Gu et al. [54] | 646 T1 MRI scans | – | DL-mapping sharpening | – | – | Fast DL-registration
2023 | Arthofer et al. [25] | 240 multimodal MRI scans | Unbiased iterative with mid-space affine | Voxel-wise medians | Voxel-wise medians | – | –
In this work, we construct what is, to our knowledge, the first structural brain template based on a homogeneous subset of T1-weighted MRI scans from Saudi females. Our contributions address existing gaps in the literature and aim to achieve an unbiased structural representation, high image quality (in terms of sharpness, contrast, and robustness to outliers), and computational efficiency. The main contributions of this work are as follows:
- New Population-Specific Template: We introduce a structural template derived from T1-weighted MRI scans of healthy Saudi female subjects aged 25 to 30. This template addresses a significant gap in the representation of the Saudi population in neuroimaging.
- Unbiased Template Structure with Weighting: We incorporate a covariance-based weighting scheme [47] into the SMC framework [53] to mitigate bias toward over-represented anatomical structures.
- High-Quality Intensity Estimation: We apply a patch-based intensity estimation approach, combining patch-based median estimation and the mean-shift algorithm, specifically tailored for T1-weighted MRI scans. This technique produces sharper templates with enhanced tissue contrast and robustness to outliers, outperforming traditional voxel-wise averaging.
- Computational Efficiency Enhancements: We enhance processing speed through the parallelization of independent tasks, which further improves the efficiency of the SMC framework. Additionally, we optimize matrix operations by using vectorization and filter out zero-intensity voxels during the patch-based intensity estimation process.
We expect this template to be a valuable resource for neuroimaging studies focused on the Saudi population. We also anticipate that integrating a weighting scheme within the SMC framework will reduce bias toward over-represented brain structures. Moreover, we expect that using the patch-based approach—combining median estimation with the mean-shift algorithm—will produce sharper templates with enhanced tissue contrast and robustness to outliers compared with traditional voxel-based averaging. Finally, we believe that a population-specific template, being more representative of the target group, will better preserve anatomical structures during registration.
3. Materials and Methods
This section outlines the materials and methods utilized in this study. We begin by presenting the dataset employed, followed by a description of the preprocessing steps applied to it. Next, we detail the methodology used for constructing the template. Figure 2 illustrates the overall workflow, from the raw input scans to the final brain template. Finally, we describe the evaluation metrics used to assess the quality of the constructed templates.
The implementation was carried out using Google Colaboratory [57]. To enhance computational efficiency, independent processes were executed in parallel. Additionally, vectorized operations were utilized for matrix computations to further optimize performance.
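As an illustration of these optimizations, the following Python sketch (toy data; `normalize_scan` is a hypothetical stand-in for one independent preprocessing task) runs per-scan work concurrently and uses vectorized masking instead of voxel-by-voxel loops:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def normalize_scan(scan):
    """Toy stand-in for one independent per-scan task:
    min-max scale a scan's intensities to [0, 1]."""
    lo, hi = scan.min(), scan.max()
    return (scan - lo) / (hi - lo)

rng = np.random.default_rng(0)
scans = [rng.random((8, 8, 8)) * 100 for _ in range(4)]

# Independent per-scan tasks run concurrently (heavier CPU-bound steps
# would typically use ProcessPoolExecutor instead of threads).
with ThreadPoolExecutor(max_workers=4) as pool:
    normalized = list(pool.map(normalize_scan, scans))

# Vectorized masking: drop zero-intensity (background) voxels in one
# operation rather than looping over every voxel.
volume = np.where(rng.random((8, 8, 8)) > 0.5, rng.random((8, 8, 8)), 0.0)
foreground = volume[volume > 0]   # 1D array of nonzero intensities only
```

The same pattern (map an independent function over scans, then operate on whole arrays) applies to each preprocessing stage described below.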
3.1. Dataset
To construct and evaluate the structural brain template, we utilized a dataset consisting of 11 T1-weighted MRI scans from healthy Saudi female subjects aged 25 to 30. These scans were acquired in the Neuroimaging Informatics Technology Initiative (NIfTI) file format from King Abdulaziz University Hospital. These NIfTI files contain both a header and image data within a single file. The header stores metadata, which includes descriptive information such as file details, scanner parameters, spatial orientation, coordinate system, matrix/voxel sizes, etc. The image data consists of a matrix that stores voxel intensity values. Figure 3a illustrates a simplified representation of NIfTI file storage.
The matrix size indicates the number of voxels—small 3D cubes analogous to 2D pixels—along the x-, y-, and z-axes of the scans. Voxel size refers to the physical volume of each voxel, typically measured in cubic millimeters (mm³), representing the resolution of the scanned region. The scans used in this study contain 3D data matrices, as visualized in Figure 3b. Their coordinate system follows the Right–Anterior–Superior (RAS) convention, meaning the x-axis increases from left to right, the y-axis from posterior to anterior, and the z-axis from inferior to superior, as illustrated in Figure 3c.
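The header's role can be illustrated with a short NumPy sketch: a NIfTI header stores a 4×4 affine that maps voxel indices (i, j, k) to RAS world coordinates in millimetres. The matrix values below are hypothetical, chosen only to show the mapping:

```python
import numpy as np

# Hypothetical header affine: 1 mm isotropic voxels, with the volume
# origin placed at (-90, -126, -72) mm in RAS world coordinates.
affine = np.array([
    [1.0, 0.0, 0.0,  -90.0],
    [0.0, 1.0, 0.0, -126.0],
    [0.0, 0.0, 1.0,  -72.0],
    [0.0, 0.0, 0.0,    1.0],
])

def voxel_to_world(ijk, affine):
    """Map a voxel index (i, j, k) to RAS world coordinates (x, y, z)
    using homogeneous coordinates."""
    i, j, k = ijk
    return (affine @ np.array([i, j, k, 1.0]))[:3]

corner = voxel_to_world((0, 0, 0), affine)
# Stepping one voxel along i moves +1 mm toward the subject's Right.
step = voxel_to_world((1, 0, 0), affine) - corner
```

The diagonal entries encode the voxel size, so anisotropic voxels or rotated acquisitions simply change the upper-left 3×3 block.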
The dataset was split into seven scans for template construction and four scans for evaluation. This dataset was selected due to its availability and the homogeneity of the subjects, which is beneficial for constructing an unbiased and sharp template [56].
3.2. Preprocessing
Before constructing the brain template, we preprocessed the raw MRI scans in parallel to ensure accuracy and consistency. This crucial step addressed several key challenges, including the following:
- Variability in raw data, which includes differences in matrix size, voxel size, spatial orientation, and intensity ranges.
- Scanner artifacts, such as bias fields and noise, which can affect image quality.
- Removal of irrelevant anatomical structures, such as non-brain regions, to create a brain-specific template.
To tackle these challenges, our preprocessing included several steps, each designed to standardize the data. These steps are also outlined in the pseudocode of Algorithm 1.
Algorithm 1 Preprocessing Algorithm.
1: Input: Raw scans ▹ Each scan has different image and voxel dimensions
2: Output: Preprocessed images ▹ Each in standard space with uniform image and voxel sizes
3: for each raw scan do ▹ Processing in parallel
4:  scan ← SpatialNormalization(scan)
5:  scan ← BiasFieldCorrection(scan)
6:  scan ← Denoising(scan)
7:  scan ← BrainExtraction(scan)
8:  scan ← IntensityNormalization(scan)
9: end for
3.2.1. Spatial Normalization
Spatial normalization is a preprocessing step performed prior to template construction. This process involves aligning individual scans to a standard template space, effectively removing variations in brain position, orientation, size, and shape across individuals [58]. By performing this step, we ensure that the scans are comparable and exist within a similar space, as illustrated in Figure 4.
We used the updated version of MNI152 space (ICBM 2009c Nonlinear Asymmetric template) as a standard space [38,59]. We opted for the asymmetric version because it more closely resembles realistic scans. We also selected the higher-resolution version of the template to enable resampling in a higher-resolution space. This standard space is archived in the TemplateFlow archive [8].
We employed affine registration, a linear but non-rigid transformation, to align the scans with the standard space. This method uses 12 degrees of freedom (DOF), which include rotation, translation, shearing, and scaling in the x, y, and z dimensions [60]. We performed the affine registration using FMRIB’s Linear Image Registration Tool (FLIRT, version 6.0) provided by FMRIB Software Library (FSL, version 6.0.5.2) [61,62,63]. The cost function we used was normalized cross-correlation, as it is well-suited for intramodality registration. After completing the affine registration, the images were resampled into the standard space using spline interpolation. Spline interpolation was chosen for its ability to accurately preserve anatomical details while providing smooth transformations. Due to the potential for spline interpolation to introduce negative values, these values were set to zero to maintain valid image intensities.
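As a rough illustration (not the FLIRT implementation), the 12 DOF can be composed into a single 4×4 homogeneous matrix, and the clipping of negative post-interpolation values reduces to an element-wise maximum:

```python
import numpy as np

def affine_matrix(rotation, translation, scale, shear):
    """Compose the 12 affine DOF (3 rotations, 3 translations,
    3 scales, 3 shears) into one 4x4 homogeneous matrix."""
    rx, ry, rz = rotation
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(rx), -np.sin(rx)],
                   [0, np.sin(rx),  np.cos(rx)]])
    Ry = np.array([[ np.cos(ry), 0, np.sin(ry)],
                   [0, 1, 0],
                   [-np.sin(ry), 0, np.cos(ry)]])
    Rz = np.array([[np.cos(rz), -np.sin(rz), 0],
                   [np.sin(rz),  np.cos(rz), 0],
                   [0, 0, 1]])
    Shear = np.array([[1, shear[0], shear[1]],
                      [0, 1, shear[2]],
                      [0, 0, 1]])
    A = np.eye(4)
    A[:3, :3] = Rz @ Ry @ Rx @ np.diag(scale) @ Shear
    A[:3, 3] = translation
    return A

# 90-degree rotation about z, 10 mm shift along x, mild anisotropic scaling.
A = affine_matrix((0, 0, np.pi / 2), (10, 0, 0), (1.1, 1.0, 0.9), (0, 0, 0))
mapped = (A @ np.array([1.0, 0.0, 0.0, 1.0]))[:3]

# Spline interpolation can overshoot below zero at sharp edges;
# clipping keeps the resampled intensities valid.
resampled = np.array([-0.02, 0.5, 120.0])
clipped = np.maximum(resampled, 0.0)
```

The composition order and parameterization here are one of several equivalent conventions; registration tools each fix their own.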
3.2.2. Bias Field Correction
The magnetic field within the scanner is not uniform, which can cause artifacts in the scan that alter the intensity values. This artifact is known as bias field or intensity inhomogeneity [64]. The bias field can cause the same tissue to have different intensity values, affecting subsequent image processing [65]. Therefore, it should be corrected as a preprocessing step for constructing the brain template.
We corrected the bias field by applying the N4 algorithm [66] using the Simple Insight Toolkit (SimpleITK, version 2.4.0) [67]. The N4 algorithm assumes that the corrupted image combines the true underlying image and the bias field, with negligible additional noise. It estimates these merged parts iteratively using a hierarchical optimization scheme (i.e., a multi-resolution scheme) where the image is processed at increasing levels of resolution. This iterative process effectively estimates and corrects for the bias field. Figure 5 shows an image, from the spatially normalized images, before and after using N4 and the estimated bias field.
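The multiplicative model underlying N4 can be sketched in one dimension: the observed signal is the true signal times a smoothly varying bias, so in the log domain the model becomes additive and a heavy low-pass filter approximates the log-bias. N4 itself fits B-splines over a multi-resolution pyramid; the moving average below is only a stand-in for that smoothing step:

```python
import numpy as np

# Toy 1D illustration of the bias-field model: observed = true * bias.
n = 256
x = np.linspace(0, 1, n)
true = np.where((x > 0.3) & (x < 0.7), 100.0, 50.0)   # two "tissues"
bias = 1.0 + 0.3 * np.sin(2 * np.pi * x)              # smooth bias field
observed = true * bias

# In the log domain: log v = log u + log b, so heavy smoothing of the
# (mean-centred) log image approximates log b.
log_v = np.log(observed)
kernel = np.ones(81) / 81                             # crude low-pass filter
log_b_est = np.convolve(log_v - log_v.mean(), kernel, mode="same")
corrected = observed / np.exp(log_b_est)
```

Within a single-tissue region the corrected profile is much flatter than the observed one, which is exactly the property that makes subsequent segmentation and template fusion more reliable.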
3.2.3. Denoising
Noise in MRI scans is a random variable that contributes to the detected signal. This noise arises from various sources, including thermal noise in the scanner and the lossy interactions between the scanner and the scanned body [68]. Denoising is essential to improve the SNR, which can significantly impact subsequent image processing, analyses, and quantitative measurements [69,70]. Accurate denoising is crucial for template construction, as it ensures that the template reflects true anatomical features rather than noise artifacts.
We applied the block-matching and 4D filtering (BM4D, version 4.2.4) algorithm to denoise our images. BM4D is a powerful denoising technique that exploits local and nonlocal correlations between voxels to effectively separate signal and noise while preserving sharp edges [71,72,73]. The BM4D algorithm requires an initial estimation of the noise standard deviation (SD). We estimated the noise SD from the image background where no anatomical structures were present. Figure 6 visualizes an image, from the bias field corrected images, before and after the process of denoising, along with the estimated noise.
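Estimating the noise SD from signal-free background voxels can be sketched as follows. Note that real magnitude MRI background follows a Rayleigh rather than Gaussian distribution, which in practice calls for a small correction factor omitted in this toy example:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic scan: zero background plus a bright central "brain",
# corrupted by Gaussian noise of known SD.
vol = np.zeros((32, 32, 32))
vol[8:24, 8:24, 8:24] = 100.0
true_sd = 5.0
noisy = vol + rng.normal(0.0, true_sd, vol.shape)

# Estimate the noise SD from background voxels only. Here the background
# is known from the construction; in practice it is located where no
# anatomy is present (e.g., corner regions or an intensity threshold).
background = noisy[vol == 0]
sd_est = background.std()
```

This estimate is then handed to the denoiser as its initial noise level.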
3.2.4. Brain Extraction
Brain extraction, also known as skull stripping, is the process of separating the brain from the non-brain regions, reducing unwanted information that could interfere with subsequent processes. This essential preprocessing step facilitates various image processing tasks for the brain region, including intensity normalization, registration, template construction, and tissue segmentation [74,75,76]. However, brain extraction can be skipped when constructing head templates.
To extract the brains, we applied a fast and high-resolution method called deepbet 3D (version 1.0.2). This DL-based method was trained on 568 T1-weighted MRI scans of healthy adults and utilizes LinkNet [77], a modern architecture built upon the UNet framework [78], to perform the extraction in two stages. In the first stage, the model predicts an initial mask, which is then used to crop the MRI scan, focusing specifically on the brain region. In the second stage, the cropped MRI scan undergoes further processing to predict a more accurate final brain mask [79]. Figure 7 shows the final estimated mask overlaid on the entire head for a sample image from the denoised images, along with the excluded non-brain regions as well as the final extracted brain.
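The two-stage crop-then-refine idea can be mimicked with a toy NumPy example, where a simple intensity threshold stands in for each network pass of deepbet:

```python
import numpy as np

def coarse_mask(vol, thr):
    """Stand-in for a network pass: a simple intensity threshold."""
    return vol > thr

def bounding_box(mask):
    """Tight bounding-box slices around the nonzero region of a mask."""
    idx = np.where(mask)
    return tuple(slice(i.min(), i.max() + 1) for i in idx)

rng = np.random.default_rng(3)
head = rng.normal(5, 1, (32, 32, 32))   # dim scalp/background intensities
head[10:22, 10:22, 10:22] = 100.0       # bright "brain" region

# Stage 1: a cheap initial mask locates the brain and crops the volume,
# so stage 2 works on a smaller, brain-centred region.
m1 = coarse_mask(head, 50.0)
box = bounding_box(m1)
cropped = head[box]

# Stage 2: the cropped region is re-segmented to get the final mask.
m2 = coarse_mask(cropped, 50.0)
brain = np.where(m2, cropped, 0.0)
```

Cropping first is the key efficiency trick: the second, more accurate pass only ever sees the brain-sized sub-volume.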
3.2.5. Intensity Normalization
The intensity values of MRI scans are influenced by both the inherent properties of the scanned tissue and scanner-related parameters [80]. Unlike typical image intensities, which range from 0 to 255, MRI intensities start at zero and have no fixed upper limit; what matters is the contrast between tissues rather than the specific intensity values. As a result, intensity interpretation can be inconsistent across scans. Performing intensity normalization as a preprocessing step is therefore crucial to place the images on a consistent scale, improving the quality and reliability of medical imaging processes and analyses [81,82] as well as the brain template construction.
We normalized the image intensities using the piecewise linear histogram matching (PLHM) method, which involves two main stages: training and transformation. During the training stage, standard histogram landmarks are learned from a set of images. Then, in the transformation stage, the intensity of each image is mapped to the learned standard histogram [83]. We applied the PLHM implementation wrapped in a tool named intensity-normalization (version 2.2.4) [82], where predefined lower and upper bounds of 0 and 100 are set for the standard histogram. Figure 8 illustrates the intensity histograms of the brain-extracted images before and after the intensity normalization process.
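The two PLHM stages can be sketched with numpy. This is a simplified version of the idea, not the intensity-normalization package's implementation; the percentile landmarks and the final rescaling into [0, 100] are illustrative assumptions.

```python
import numpy as np

def learn_landmarks(images, percentiles=(1, 10, 25, 50, 75, 90, 99)):
    """Training stage: average histogram landmarks (percentiles) over images."""
    per_image = [np.percentile(img[img > 0], percentiles) for img in images]
    return np.mean(per_image, axis=0)

def apply_landmarks(image, standard, percentiles=(1, 10, 25, 50, 75, 90, 99),
                    lower=0.0, upper=100.0):
    """Transformation stage: piecewise-linearly map the image's own landmarks
    onto the standard landmarks, then rescale into [lower, upper]."""
    own = np.percentile(image[image > 0], percentiles)
    mapped = np.interp(image, own, standard)   # piecewise linear mapping
    mn, mx = mapped.min(), mapped.max()
    return lower + (mapped - mn) / (mx - mn) * (upper - lower)
```

After training on the population, every image is transformed with the same learned `standard` landmarks, which is what puts all scans on one comparable scale.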
3.3. Template Construction
Our template construction methodology comprises two main parts. First, we obtain the unbiased template structure using SMC [53], which directly estimates the unbiased structure without iterative optimization. We further incorporate a covariance weighting scheme, based on the work of Parvathaneni et al. [47], to account for any asymmetry in the population, ensuring that the template is not biased towards over-represented brain structures. Second, we estimate the template intensity using patch-based estimation, inspired by the work of Coupé et al. [37] and Yang et al. [44]. Patch-based estimation provides robustness to outliers and helps preserve image details, leading to a high-quality template image. The details of each step are explained in the following sections, and the full procedure is outlined in the pseudocode of Algorithm 2.
Algorithm 2 Template Construction Algorithm
1: Input: Preprocessed images I = {I_1, …, I_N} ▹ Each with the same image size and voxel spacing
2: Output: Template T
3: Step 1: Covariance Weighting
4: for each I_i in I do ▹ Processing in parallel
5: F_i ← radiomic features of I_i
6: end for
7: P ← PCA(F) ▹ Top five principal components
8: Cov ← covariance of P; Cov⁺ ← pinv(Cov)
9: w_i ← Σ_j Cov⁺_{i,j}; w̃_i ← w_i / Σ_k w_k ▹ Normalized similarity weights
10: Step 2: Weighted SMC
11: select a random image I_r from I
12: for each I_i in I do ▹ Processing in parallel
13: φ_i ← SyN(I_r → I_i) ▹ Displacement of I_r to I_i
14: end for
15: φ_w ← Σ_i w̃_i φ_i ▹ Weighted displacement
16: C_w ← I_r warped by φ_w ▹ Weighted center
17: for each I_i in I do ▹ Processing in parallel
18: Î_i ← SyN(I_i → C_w) ▹ Aligned images
19: end for
20: Step 3: Patch-Based Mean-Shift Estimation ▹ Vectorized
21: T_0 ← median(Î); t ← 0 ▹ Initialize template and iteration counter
22: t_max ← maximum number of iterations
23: V ← set of K nonzero voxel indices
24: while True do
25: extract patches P_T around each v ∈ V from T_t and P_Î from each Î_i
26: D ← Euclidean distances between P_T and P_Î
27: h ← median(D) ▹ Dynamic Gaussian bandwidth
28: w ← exp(−D² / h²); ŵ ← w / Σ w ▹ Normalized kernel weights
29: T_{t+1}(V) ← Σ_i ŵ_i · Î_i(V) ▹ Weighted intensity update
30: Δ ← difference between T_{t+1} and T_t
31: if Δ < ε or t ≥ t_max then
32: T ← T_{t+1} ▹ The final template values
33: break
34: else
35: t ← t + 1 ▹ Update the iteration counter
36: end if
37: end while
3.3.1. Covariance Weighting
This step adapts the approach described by Parvathaneni et al. [47] for constructing unbiased cortical surface templates. The core idea is to deweight similar data points to maximize the captured variance within the population. We began by extracting 107 radiomic features (F) from each preprocessed image (I) in parallel using the PyRadiomics Python package (version 3.0.1) [84]. These features, encompassing First Order Statistics, Shape, and Texture characteristics, were extracted in segment-based mode, yielding a single value per feature for the brain region. This comprehensive set of features provides a quantitative representation of the image data.
To reduce the dimensionality of the feature space and mitigate potential issues with multicollinearity, we applied Principal Component Analysis (PCA) [85] to the extracted features (F). We retained the top five principal components (P), which captured 95% of the total variance in the data. We then calculated the covariance matrix (Cov) of these principal components. Next, we computed the pseudo-inverse (Moore–Penrose inverse) [86] of the covariance matrix, denoted as Cov⁺.
For each image (I_i), we calculated its weight (w_i) by summing the elements in the i-th row of Cov⁺. This sum reflects the image's dissimilarity to the rest of the population; a larger sum indicates greater dissimilarity. We then normalized these weights by dividing each weight by the sum of all weights, resulting in normalized weights (w̃) that sum to one. This normalization ensures that the weights can be interpreted as proportions, which is useful for subsequent steps. Table 2 presents the calculated similarity weight for each image. Higher weights indicate that an image is more distinct from the rest of the population, while lower weights suggest greater similarity.
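The weighting pipeline can be sketched in a few numpy lines. This is a simplified sketch under assumptions: the PCA projection and covariance conventions here (projecting the features onto the leading principal directions, then forming the inter-image covariance of the scores) may differ in detail from the paper's exact implementation.

```python
import numpy as np

def similarity_weights(features, n_components=5):
    """Sketch of the covariance-weighting scheme.

    features: (n_images, n_features) array of radiomic features.
    Projects onto the top principal directions, forms the inter-image
    covariance of the scores, takes its Moore-Penrose pseudo-inverse,
    and returns the per-image row sums normalized to one.
    """
    Xc = features - features.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)  # principal directions
    scores = features @ Vt[:n_components].T            # (n_images, k) scores
    C = np.cov(scores)                                 # (n_images, n_images)
    C_pinv = np.linalg.pinv(C)                         # Moore-Penrose inverse
    w = C_pinv.sum(axis=1)                             # dissimilarity per image
    return w / w.sum()                                 # normalized weights
```

Images that are near-duplicates share covariance mass and are deweighted, while distinctive images retain more influence, which is the stated goal of the scheme.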
3.3.2. Weighted SMC
This step adapts the SMC method proposed by Wang et al. [53] for unbiased template construction. The SMC method assumes that any image in the population can reach the population center directly without iterative averaging, thereby improving efficiency. However, when applied to potentially asymmetric population images, this direct approach can be sensitive to biases introduced by groups of similar images. To address this, we incorporate the similarity weights derived in the previous step to guide the center calculation and mitigate potential biases.
The process begins by selecting a random image (I_r) from the preprocessed images (I). Next, we register I_r to each I_i in I in parallel, such that I_r → I_i, to obtain the displacements (φ_i). We performed this alignment using the Symmetric Diffeomorphic Normalization (SyN) algorithm from the Advanced Normalization Tools in Python (ANTsPy, version 0.5.4) [87]. SyN is a robust nonlinear registration algorithm known for its ability to handle complex deformations while preserving anatomical topology [88,89,90].
To account for the varying similarity of images in the population, we calculate a weighted displacement (φ_w) that incorporates the normalized weights (w̃) obtained in the previous step (Section 3.3.1). Specifically, φ_w is calculated as the weighted sum of the individual displacements:

φ_w(v) = Σ_{i=1}^{N} w̃_i φ_i(v)    (1)
The weighted displacement is then applied to I_r using the ApplyTransforms function of ANTsPy [87], resulting in I_r(v + φ_w(v)), where v denotes voxel indices. This yields the moved image, which represents the weighted center (C_w) of the set I. Finally, all remaining images I_i in I are aligned to C_w in parallel using SyN registration, producing the aligned set (Î) used for the estimation of the template's voxel intensities in Section 3.3.3. This weighted SMC step is also visualized in Figure 9 for further illustration.
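Assuming the SyN displacement fields are available as dense numpy arrays (in practice ANTsPy stores them as transform files and applies them via ApplyTransforms), the weighted-displacement combination of Equation (1) reduces to a broadcasted weighted sum:

```python
import numpy as np

def weighted_displacement(displacements, weights):
    """Combine per-image displacement fields into one weighted field.

    displacements: (N, X, Y, Z, 3) dense displacement fields, one per
                   image, holding a 3-vector at every voxel.
    weights:       (N,) normalized similarity weights (sum to one).
    Returns the voxel-wise weighted sum used to move the random image
    to the weighted population center.
    """
    weights = np.asarray(weights).reshape(-1, 1, 1, 1, 1)  # broadcast over voxels
    return (weights * displacements).sum(axis=0)
```

Because the weights sum to one, the combined field is a convex combination of the individual displacements, so similar (highly weighted-down) images cannot dominate the center.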
Figure 10 visualizes the image I_r, along with the weighted displacement (φ_w) and the weighted center (C_w), as well as the displacement and the center (C_u) computed without weighting, which is used for further evaluation in Section 3.4.1. In Figure 11, we visualize a toy example on a 2D plane to illustrate the incorporation of similarity weights as prior knowledge in the center computation. When the voxel locations (v) are distributed symmetrically, they receive similar weights; therefore, using the weight information to compute the center has no effect on the result. However, when the voxel locations are distributed asymmetrically, they receive different weights. In this case, incorporating the weight knowledge reduces the bias of the center towards the similar subset.
3.3.3. Patch-Based Mean-Shift Estimation
To obtain the final template, we fuse the intensities of the aligned population images (Î) using a robust and adaptive patch-based estimation scheme. This approach addresses the limitations of voxel-based simple averaging, which is susceptible to blurring and the influence of outliers that may arise from imperfect image alignment. Our method builds upon the work of Coupé et al. [37] and Yang et al. [44], which demonstrated the advantages of patch-based estimation [91] for constructing sharp and robust templates. They replaced voxel-based simple averaging with patch-based median estimation [37] and a patch-based mean-shift algorithm [44]. The median can tolerate incorrect data values, and the mean-shift algorithm [45] seeks the mode of the data distribution, providing a more robust alternative to voxel-based simple averaging.
We initialize the template using the median intensities of the aligned images (Î). To improve the efficiency of the iterative estimation, we leverage matrix vectorization and restrict computations to nonzero voxels only. We define a set of nonzero voxel indices (V) of length K. For each voxel index v in V, we extract a patch centered at v from both the template (T_t) and each aligned image in the set (Î).
Next, we compute the Euclidean distances (D) between the template patch and each corresponding patch in the aligned images. These distances are then used to compute weights w using a Gaussian kernel:
w_i = exp(−D_i² / h²)    (2)

where h, the median of the distances D, serves as a dynamic Gaussian bandwidth parameter. This parameter controls the decay of the exponential function, thereby affecting how each image's voxel intensity influences the template update. Figure 12 illustrates the computed D between the template patch and each corresponding patch in the aligned images, alongside the function used to compute w. The weights are then normalized to sum to one, producing normalized weights (ŵ). These weights are used to compute a weighted average of the voxel intensities across the aligned images. At each iteration t, the template's nonzero voxel intensities are updated as follows:
T_{t+1}(V) = Σ_{i=1}^{N} ŵ_i · Î_i(V)    (3)

where Î_i(V) represents the intensity values of the i-th aligned image at the nonzero voxel indices. This process is repeated until the difference between successive templates (Δ) is less than a tolerance ε, or until the maximum number of iterations is reached. Figure 13 visualizes the final template obtained from this patch-based estimation (T_P), alongside a template generated using voxel-based simple averaging (T_V), which is further evaluated in Section 3.4.2.
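The update of Equations (2) and (3) can be sketched for a single voxel. This non-vectorized form is for clarity only (the paper's implementation vectorizes over all nonzero voxels at once), and the small epsilon guarding a zero bandwidth is an added assumption.

```python
import numpy as np

def mean_shift_update(template, aligned, v, patch_radius=1):
    """One mean-shift update of a single template voxel v.

    template: (X, Y, Z) current template estimate.
    aligned:  (N, X, Y, Z) aligned population images.
    Extracts the patch around v from the template and every aligned
    image, turns patch distances into Gaussian weights with a dynamic
    bandwidth h (the median distance), and returns the weighted average
    of the aligned intensities at v.
    """
    x, y, z = v
    r = patch_radius
    sl = (slice(x - r, x + r + 1), slice(y - r, y + r + 1), slice(z - r, z + r + 1))
    t_patch = template[sl]
    patches = aligned[(slice(None),) + sl]             # (N, p, p, p)
    D = np.sqrt(((patches - t_patch) ** 2).sum(axis=(1, 2, 3)))
    h = np.median(D) + 1e-12                           # dynamic bandwidth
    w = np.exp(-(D ** 2) / h ** 2)                     # Gaussian kernel weights
    w = w / w.sum()                                    # normalize to one
    return (w * aligned[:, x, y, z]).sum()
```

Images whose patches disagree strongly with the current template receive near-zero weight, which is why the update converges to the mode of the intensity distribution rather than the mean.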
3.4. Evaluation Methods
Our evaluation of the constructed templates focuses on three aspects:
- Evaluating the structural unbiasedness of the templates computed in Section 3.3.2.
- Assessing the intensity quality of the templates computed in Section 3.3.3, in terms of sharpness, contrast, and robustness to outliers.
- Investigating the necessity of constructing a brain template specifically for Saudi adult females by evaluating its effectiveness as a target registration space in comparison with other population-specific templates.
3.4.1. Unbiasedness of Template Structure
To assess the impact of similarity weights on the unbiasedness of the computed centers, we compared the results obtained using similarity weights (C_w) with those obtained without (C_u), as detailed in Section 3.3.2. For each image in the population, we computed the squared magnitude of its displacement from the corresponding center image (C_w or C_u). The displacement was obtained using the SyN registration of ANTsPy [87]. To weight the displacement of each image according to its similarity within the population, we incorporated the similarity weight (w̃_i) computed in Section 3.3.1. This metric, which we refer to as Weighted Displacement (WD), adapted from Wang et al. [53], quantifies the degree of bias in the computed center; lower values indicate less bias:
WD = Σ_{i=1}^{N} w̃_i ‖ψ_i‖₂²    (4)

where N is the total number of images in the population, ψ_i is the displacement from image I_i to the center image C_w or C_u, ‖·‖₂² is the squared L2-norm, and w̃_i is the similarity weight for image I_i.

3.4.2. Quality of Template Intensity
In this section, we assess the quality of the templates' intensity obtained from both patch-based estimation (T_P) and voxel-based averaging (T_V) in Section 3.3.3 across three evaluation metrics.
Sharpness
To evaluate the sharpness and edge definition of the templates, we computed the magnitude of the gradient for each voxel. Sharp edges and well-defined details correspond to regions with rapid changes in intensity, which are reflected in high gradient magnitudes. To assess the overall sharpness, we averaged the gradient magnitudes across all voxels in the template. This metric, as used in Wang et al. [53] and referred to here as Average Gradient Magnitude (AGM), quantifies the overall sharpness of the template:
AGM = (1/M) Σ_{v=1}^{M} ‖∇T(v)‖    (5)

where T(v) is the intensity value of the template at voxel index v, ∇ is the gradient operator, and M is the total number of voxels in the template.

Contrast
To evaluate the contrast between white matter (WM) and gray matter (GM) of the templates, we used the Normalized Michelson Contrast [92]. This metric provides a standardized measure of contrast by comparing the maximum intensity of WM to the minimum intensity of GM. To identify WM and GM voxels, we utilized the BrainSuite tool (version 23a) [93] to segment the templates into different tissue types, which allowed us to isolate voxels corresponding to pure WM and GM, excluding those with other tissue types. It is worth noting that we ran BrainSuite [93] on a local machine, not within the Google Colaboratory environment [57]. The Normalized Michelson Contrast (NMC) quantifies the contrast between WM and GM; higher values indicate greater contrast:
NMC = (I_WM^max − I_GM^min) / (I_WM^max + I_GM^min)    (6)

where I_WM^max is the maximum intensity value within the WM voxels, and I_GM^min is the minimum intensity value within the GM voxels.

Robustness to Outliers
To assess the templates' robustness to outliers, we used the Kullback–Leibler Divergence [94]. This metric, denoted as KLD, measures the similarity between the intensity distributions of the templates and the population. We introduced an outlier image by adding noise to one of the images to make its intensity distribution significantly different. For each template, we computed the KLD between its intensity distribution and the intensity distribution of each image in the population. Lower values indicate greater similarity between the two distributions, with a value of 0 indicating identical distributions:
KLD = Σ_x P(x) log(P(x) / Q(x))    (7)

where P(x) represents the probability of intensity value x in the template's distribution, and Q(x) represents the probability of intensity value x in the population image distribution.

3.4.3. Usability of Saudi Brain Template
To investigate the necessity of constructing a population-specific brain template, we compared the deformations required to nonlinearly register new healthy Saudi adult female brain images to our proposed brain template, constructed using Patch-Based Mean-Shift Estimation (Section 3.3.3), as well as to other population-based templates. For ease of reference, we refer to our template as the Brain Template for Healthy Saudi Adult Females (BT-HSAF). Since population-specific characteristics, such as ethnicity, gender, and age, can influence brain morphology, it is crucial to compare templates with similar demographic features for accurate assessment. Therefore, we focused on templates that are similar to our BT-HSAF in terms of age, gender, or both. Specifically, we utilized the Caucasian (US200), Chinese (CN200) [18], and Indian (IBA100) [20] templates. To ensure accurate comparisons by removing linear variations, we first affinely aligned all templates using the ANTsPy [87].
To assess the representativeness of each template, we used the four evaluation scans described in Section 3.1. We extracted the brains using deepbet 3D [79]. Next, we affinely aligned the extracted brains to each template to account for linear variations. Then, we nonlinearly registered each brain to each template using SyN registration [87], resulting in deformation fields representing the transformations needed to warp each brain onto each template.
To quantify the local changes in brain volume during registration, we computed the mean of the logarithm of the Jacobian determinant (mLJD), as used in Yang et al. [18], for each deformation field. The mLJD provides information about the voxel-wise volume changes, where positive values indicate expansion and negative values indicate compression. We used the CreateJacobianDeterminantImage function of ANTsPy [87] to calculate the mLJD. Values of mLJD closer to zero indicate fewer deformations, suggesting a higher similarity between the template and the population. It is calculated as follows:
mLJD = (1/M) Σ_{v=1}^{M} log |det(J(v))|    (8)

where J(v) is the Jacobian matrix at voxel index v, det(J(v)) is the determinant of the Jacobian matrix at voxel index v, and the absolute value ensures that the logarithm is defined, as the determinant can be negative.

4. Results
This section presents the results of evaluating three key aspects:
- The unbiasedness of the template structure computed with versus without incorporating weights.
- The quality of template intensity using patch-based estimation versus voxel-based averaging.
- The necessity of using a brain template specifically tailored to healthy Saudi adult females as the standard space for registering subjects from the same population.
4.1. Unbiasedness of Template Structure
Table 3 presents the sum and average of WD over all voxels for both C_w and C_u. The sum of WD for C_w (33,566,831.060) was lower than that of C_u (33,735,950.577), as was the average (C_w: 3.935, C_u: 3.955). The lower values for C_w indicate that incorporating similarity weights resulted in a center image with reduced bias compared with C_u. This finding suggests that weighting images based on their similarity to the population can lead to a more representative and unbiased template.
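The WD values in Table 3 come from summing and averaging a voxel-wise map of the form in Equation (4). A minimal sketch, assuming the displacement fields to the center are available as dense numpy arrays:

```python
import numpy as np

def weighted_displacement_metric(displacements, weights):
    """Voxel-wise WD map: similarity-weighted sum of squared displacement
    magnitudes from each image to the candidate center (lower = less bias).

    displacements: (N, X, Y, Z, 3) displacement fields to the center.
    weights:       (N,) normalized similarity weights.
    Returns an (X, Y, Z) map; its sum and mean give the table entries.
    """
    sq_mag = (displacements ** 2).sum(axis=-1)         # |psi_i|^2 per voxel
    weights = np.asarray(weights).reshape(-1, 1, 1, 1) # broadcast over voxels
    return (weights * sq_mag).sum(axis=0)
```

With `wd_map = weighted_displacement_metric(d, w)`, the reported totals correspond to `wd_map.sum()` and `wd_map.mean()`.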
4.2. Quality of Template Intensity
In this section, we summarize the results of assessing the quality of the templates' intensity obtained from both patch-based estimation (T_P) and voxel-based averaging (T_V) (visualized in Figure 13), conducted using the AGM, NMC, and KLD metrics.
Sharpness
Figure 14 visualizes the gradient magnitude for T_P and T_V, with their corresponding AGM values summarized in Table 4. The AGM value for T_P (60.958) was higher than that of T_V (55.175), indicating that the patch-based approach resulted in a template with sharper edges than the voxel-based averaging method. This finding suggests that patch-based estimation of template intensity can lead to sharper templates than traditional voxel-based averaging.
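The AGM of Equation (5) can be computed directly with numpy's finite-difference gradient, a minimal sketch:

```python
import numpy as np

def average_gradient_magnitude(template):
    """AGM: mean gradient magnitude over all voxels (higher = sharper)."""
    gx, gy, gz = np.gradient(template.astype(float))
    magnitude = np.sqrt(gx ** 2 + gy ** 2 + gz ** 2)
    return float(magnitude.mean())
```

A perfectly flat volume scores zero, while sharp tissue boundaries raise the average, which is why a blurrier template yields a lower AGM.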
Contrast
Table 4 presents the NMC values calculated for the pure WM and GM regions of the templates generated using the patch-based (T_P) and voxel-based (T_V) methods. Figure 15 visualizes these pure tissue regions in both templates. As shown in the table, the NMC value for T_P (0.418) is higher than that of T_V (0.393), indicating higher contrast in the former. This suggests that the patch-based approach yields a template with enhanced contrast between these tissues compared with the voxel-based averaging method.
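Given the WM and GM masks from segmentation, the NMC of Equation (6) is a one-line computation, sketched here with numpy:

```python
import numpy as np

def normalized_michelson_contrast(template, wm_mask, gm_mask):
    """Normalized Michelson contrast between pure WM and GM voxels:
    (WM_max - GM_min) / (WM_max + GM_min); higher = stronger contrast."""
    wm_max = template[wm_mask].max()
    gm_min = template[gm_mask].min()
    return float((wm_max - gm_min) / (wm_max + gm_min))
```

In practice the masks come from the BrainSuite tissue segmentation; here they are assumed to be boolean arrays of the template's shape.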
Robustness to Outliers
Figure 16 shows the distribution of the KLD values calculated between each template (T_P and T_V) and the population images, with their median values summarized in Table 4. The median KLD value for T_P (0.057) is lower than that of T_V (0.368). Moreover, the KLD value for the introduced outlier image is 4.159 with T_P, while it is 0.001 with T_V, indicating that the intensity of T_P is more similar to the population and less influenced by the outlier. This finding suggests that patch-based estimation of template intensity results in a template that more accurately reflects the most common intensity values in the population and is less sensitive to outliers compared with traditional voxel-based averaging.
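The KLD of Equation (7) can be sketched from intensity histograms. The bin count and the small epsilon used to avoid division by zero are assumptions (the paper does not specify them); the [0, 100] range reflects the earlier intensity normalization:

```python
import numpy as np

def kld(template, image, bins=64, value_range=(0, 100), eps=1e-10):
    """Kullback-Leibler divergence between intensity histograms:
    KLD = sum_x P(x) * log(P(x) / Q(x)); 0 means identical distributions."""
    p, _ = np.histogram(template, bins=bins, range=value_range)
    q, _ = np.histogram(image, bins=bins, range=value_range)
    p = p / p.sum() + eps   # template distribution P(x)
    q = q / q.sum() + eps   # population image distribution Q(x)
    return float(np.sum(p * np.log(p / q)))
```

A template whose histogram matches the bulk of the population yields low KLD against most images and a large KLD against the injected outlier, which is the pattern reported for T_P above.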
4.3. Usability of Saudi Brain Template
Figure 17 shows the distribution of the mLJD values calculated from the registration of healthy Saudi adult female brain images to the four standard spaces: BT-HSAF, US200, CN200, and IBA100, with their median values summarized in Table 5. The median mLJD value for BT-HSAF (−0.02368) is the closest to zero, followed by IBA100 (−0.02413), CN200 (−0.02513), and US200 (−0.02557). This indicates that registering healthy Saudi adult female subjects to the BT-HSAF template results in the least volume change compared with the other templates. These findings highlight the importance of using a population-specific brain template when registering healthy Saudi adult female subjects, as it is more similar to the population and better preserves anatomical volumes.
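The mLJD of Equation (8) can be sketched with finite differences. The paper uses ANTsPy's CreateJacobianDeterminantImage; this numpy version is only an illustrative approximation that builds the transform Jacobian as the identity plus the gradient of the displacement field:

```python
import numpy as np

def mean_log_jacobian_determinant(displacement):
    """mLJD of a dense displacement field of shape (X, Y, Z, 3).

    The spatial transform is phi(v) = v + u(v), so its Jacobian is
    I + grad(u). Values near zero mean little local volume change.
    """
    # grads[..., c, k] = d u_c / d x_k  (gradient of the displacement)
    grads = np.stack(
        [np.stack(np.gradient(displacement[..., c], axis=(0, 1, 2)), axis=-1)
         for c in range(3)],
        axis=-2,
    )
    J = grads + np.eye(3)                     # Jacobian of the transform
    det = np.linalg.det(J.reshape(-1, 3, 3))  # per-voxel determinant
    return float(np.mean(np.log(np.abs(det) + 1e-12)))
```

A zero displacement field gives det = 1 everywhere and mLJD ≈ 0, while uniform expansion or compression pushes the mean log-determinant positive or negative, matching the interpretation used above.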
5. Discussion
This study aimed to construct a representative brain template for healthy Saudi adult females using a homogeneous subset of T1-weighted MRI scans. We addressed challenges related to variability in raw data, scanner artifacts, and irrelevant anatomical structures through a series of preprocessing steps. Our template construction methodology integrates techniques designed to produce an unbiased, sharp, and high-contrast template that is robust to outliers and computationally efficient. Furthermore, we compare key evaluation aspects of our approach with previous studies, as summarized in Table 6 and discussed below.
Our evaluation of unbiasedness, measured using the voxel-wise WD, demonstrated that the weighted center (C_w) exhibited lower total and average WD compared with the unweighted center (C_u). This suggests that the use of similarity weighting in our method effectively mitigates bias during template construction. This finding is consistent with the work of Parvathaneni et al. [47], who also found that weighted templates, derived from scan-rescan reproducibility datasets, yielded more stable and less biased results. Their evaluation was conducted using distance metrics such as mean square error (MSE) and average relative distance (ARD) on cortical surface averages, which also showed improved stability with weighted approaches. These results reinforce the importance of population-specific weighting, a principle central to our study.
In terms of image quality, our patch-based template (T_P) demonstrated superior image quality compared with the voxel-based template (T_V) across several key metrics: it achieved a higher AGM, a higher NMC, and a lower KLD distribution. These metrics indicate that the patch-based method produces sharper images with better tissue contrast while being less sensitive to outliers. Coupé et al. [37] conducted a similar comparison between patch-based and voxel-based templates and found that the former offered superior contrast and was less sensitive to outliers, further supporting the robustness of our approach. Additionally, Yang et al. [44] applied a patch-based method to diffusion MRI and reported significant improvements in fiber orientation distributions, peak signal-to-noise ratio (PSNR), and artifact reduction. These findings suggest that our patch-based template construction method aligns well with the successes observed in these two studies, further validating its potential for producing high-quality, robust templates.
Regarding usability, we found that our BT-HSAF exhibited the closest mLJD distribution to zero, indicating that it required the least deformation during registration compared with IBA100, CN200, and US200. This result suggests that the BT-HSAF is highly compatible with the healthy Saudi adult female population, ensuring accurate alignment during image registration. These findings align with the research by Sivaswamy et al. [20], which demonstrated that the IBA100 template minimized deformation and improved segmentation accuracy when applied to Indian subjects. Additionally, Yang et al. [18] found that using population-matched templates significantly reduced registration deformation and enhanced segmentation accuracy. Their study also highlighted the morphological differences between ethnic groups and genders, further emphasizing the importance of developing templates that are specific to the population being studied. This reinforces the need for developing Saudi-specific brain templates tailored to different population subsets, which is critical for preserving anatomical features and improving the reliability of neuroimaging analyses.
We enhanced the computational efficiency of our approach by leveraging the parallel processing capabilities of Google Colaboratory [57]. All preprocessing, construction, and evaluation steps were parallelized, which reduced the total computational time from X to approximately X/C, where C represents the number of available processing cores. Moreover, the SMC method itself requires far fewer pairwise inter-subject registrations than iterative averaging approaches. Additionally, we implemented vectorization in place of nested loops, particularly in the patch-based estimation, and excluded zero-valued voxels from calculations. These strategies significantly improved processing efficiency. While no formal timing benchmarks were recorded, we observed noticeably faster and more scalable computations as a result.
While this study provides valuable insights, it also has several limitations:
- The current template was constructed using only one subset of the Saudi population (Section 3.1), and templates for other subsets were not developed.
- The sample size for this subset is relatively small, which limits the generalizability of the resulting brain template despite the dataset's homogeneity and restricts the potential for meaningful statistical comparisons.
- Linear characteristics of the subset (e.g., brain length, width, and height) were not addressed; these were normalized through affine spatial normalization (Section 3.2.1), while the focus remained solely on nonlinear anatomical details (Section 3.3.2).
- The similarity weighting step assigned a single weight per brain image (Section 3.3.1), applied uniformly across all voxels.
- In the template intensity estimation step (Section 3.3.3), all patches were included in each iteration without selective filtering.
Building on the findings and limitations of this study, the following research directions are recommended for further exploration:
- Constructing multiple brain templates for a broader range of Saudi population subsets, using sufficiently large sample sizes and accounting for variations in gender, age groups, and pathological conditions.
- Incorporating multiple imaging modalities (e.g., T2-weighted MRI, CT, fMRI, PET, DTI) to enhance both the anatomical and functional relevance of the templates.
- Developing a comprehensive Saudi brain atlas that includes tissue probability maps and region labeling alongside various types of brain and head templates, providing a richer resource for neuroimaging studies.
- Integrating the developed atlas into widely used neuroimaging tools such as FreeSurfer [95], FSL [61,62,63], and SPM [96] to facilitate adoption in research and clinical workflows in Saudi Arabia. This integration could support automated segmentation, early abnormality detection, and treatment or surgical planning, particularly as advanced neuroimaging protocols become more common in clinical practice [97].
- Replacing affine spatial normalization with rigid registration and directly incorporating linear anatomical characteristics into the template construction process to yield more representative templates.
- Using localized similarity weights (rather than a single global weight per image) to improve structural unbiasedness.
- Implementing early discarding of mismatched patches, as proposed by Coupé et al. [91], to reduce computational costs and improve robustness.
- Exploring the effects of different patch sizes on the quality of the constructed template.
6. Conclusions
This study introduced an integrated approach for constructing a representative and unbiased brain template specifically tailored to the healthy Saudi adult female population. By integrating several key techniques, we addressed critical challenges in template creation. Specifically, we combined the SMC method with a covariance-based weighting scheme to mitigate bias arising from dataset asymmetry and over-represented brain structures. Furthermore, we incorporated patch-based intensity estimation, which ensured high image quality, yielding a sharp, high-contrast template robust to outliers. Crucially, we used a homogeneous subset of MRI scans from Saudi subjects—a first for this population—which allowed us to create a template expected to be more representative and effective for registration purposes compared with non-population-specific templates. This newly developed brain template for Saudi adult females represents a valuable resource for future neuroimaging studies focused on this population, promising to improve the accuracy and reliability of anatomical analyses and contribute to a deeper understanding of brain structure and function in Saudi individuals.
Conceptualization, J.A., K.M. and H.T.; methodology, N.A.; software, N.A.; validation, J.A. and L.E.; formal analysis, N.A., K.M., L.E. and H.T.; investigation, N.A. and L.E.; resources, N.A., K.M., L.E., J.A. and H.T.; data curation, J.A. and H.T.; writing—original draft preparation, N.A.; writing—review and editing, K.M., L.E., H.T. and J.A.; visualization, N.A.; supervision, K.M. and L.E.; project administration, H.T. and J.A.; funding acquisition, K.M. All authors have read and agreed to the published version of the manuscript.
Not applicable.
Not applicable.
The data presented in this study is not publicly available due to privacy.
The authors gratefully acknowledge King Abdulaziz University Hospital for providing the MRI scans used in this study. The scans were acquired in NIfTI file format.
The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
Symbol | Description
S | Raw scans
I | Preprocessed image set
N | Number of images
I_r | Random image from the set I
Î | Aligned images
C_w | Weighted center image
C_u | Unweighted center image
T | Template
t | Iteration counter
T_t | Template at iteration t
Δ | Difference between T_{t+1} and T_t
T_P | Template from patch-based estimation
T_V | Template from voxel-based averaging
F | Features
P | Principal components
Cov | Covariance matrix
Cov⁺ | Pseudo-inverse of Cov
W | Image similarity weights
w̃ | Normalized image similarity weights
φ_i | Displacement from I_r to I_i
φ_w | Weighted displacement
ψ_i | Displacement from I_i to the center image
P_T | Set of template patches
P_Î | Set of aligned-image patches
D | Euclidean distances
h | Gaussian bandwidth
w | Gaussian kernel weights
ŵ | Normalized Gaussian kernel weights
V | Set of nonzero voxel indices
K | Number of nonzero voxel indices
v | Voxel index
* | Element-wise multiplication
WD | Weighted Displacement
AGM | Average Gradient Magnitude
NMC | Normalized Michelson Contrast
KLD | Kullback–Leibler Divergence
I_WM^max | Maximum intensity in WM
I_GM^min | Minimum intensity in GM
‖·‖₂ | The L2-norm
|·| | The absolute value
x | Intensity or voxel value
M | Number of template voxels
P(x) | Probability of x in the template distribution
Q(x) | Probability of x in the population image distribution
J(v) | Jacobian matrix at v
det(J(v)) | Determinant of J(v)
∇ | Gradient operator
X | Total computational time
C | Number of processing cores
Footnotes
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Figure 1 Visualization of brain templates created from various imaging modalities: (a) T1-weighted MRI, (b) T2-weighted MRI, (c) PD, (d) PET, (e) FLAIR, and (f) DTI. The grayscale contrast in (a–e) reflects the intrinsic contrast characteristics of the respective imaging modality. The colors in (f) represent diffusion directionality, visualized as a color-coded map.
Figure 2 Workflow of the framework employed to construct a structural brain template for the subset of healthy Saudi adult females. It starts with the raw MRI scans on the leftmost side (data description is provided in Section 3.1).
Figure 3 Components of a NIfTI file: (a) A simplified illustration of the
Figure 4 Visualization of two scans from the dataset in their native space (a,b) and after spatial normalization to the MNI152 template space (c,d).
Figure 5 Visualization of a spatially normalized image corrupted by a bias field (a). (b) shows the estimated bias field. (c) displays the image after correction using the N4 algorithm.
Figure 6 Visualization of a noisy image (one of the bias field corrected images) (a), estimated noise (b), and denoised image (c).
Figure 7 Visualization of the whole head with the estimated brain mask overlay of one of the denoised images (a), the excluded non-brain regions (b), and the extracted brain (c).
Figure 8 Visualization of the unnormalized intensity histograms (a) and the corresponding normalized intensity histograms (b) for the brain-extracted images.
Figure 9 Illustration of the weighted SMC: (a) shows the weighted individual displacements, where the asterisk (*) denotes element-wise multiplication. These weighted displacements are then summed and applied to
Figure 10 Visualization of the center reached by applying displacement, illustrated as a grid overlay, to a random image shown in (a), once with similarity weights (b) and once without (c).
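The captions of Figures 9 and 10 describe scaling each image's displacement field element-wise by its normalized similarity weight and summing the results. A minimal NumPy sketch of that weighted combination, assuming the fields are stored as per-voxel 3-vectors (the function name and array layout are illustrative, not taken from the original pipeline):

```python
import numpy as np

def weighted_mean_displacement(displacements, weights):
    """Combine per-image displacement fields into a single field.

    displacements: array of shape (n_images, X, Y, Z, 3), one 3-vector per voxel.
    weights: length-n_images normalized similarity weights (summing to one).
    Each field is scaled element-wise by its weight, then the fields are summed.
    """
    w = np.asarray(weights, float).reshape(-1, 1, 1, 1, 1)  # broadcast over voxels
    return (np.asarray(displacements, float) * w).sum(axis=0)
```

With uniform weights this reduces to the plain average displacement of the unweighted construction, so the weighting scheme only changes the result when some images are deemed more representative than others.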
Figure 11 Comparison of symmetric and asymmetric center (
Figure 12 Illustration of the computed distances D between the template patch and each corresponding patch in the aligned images (a), where the patches are represented as gray 3D matrices surrounding a voxel, shown as a small red cube. Panel (b) shows the exponential function used to compute the weights w, where h is the median of the distances D and serves as a dynamic parameter controlling the decay rate of the function.
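The Figure 12 caption specifies the ingredients of the patch weighting: Euclidean distances D between the template patch and each corresponding patch, mapped to weights w by an exponential whose decay rate is the median of D. A hedged sketch under those assumptions; the exact kernel form (e.g., exp(-D/h) versus exp(-D^2/h^2)) is an assumption here, as is the final normalization:

```python
import numpy as np

def patch_weights(template_patch, image_patches):
    """Non-local-means-style weights for patches corresponding to one voxel.

    template_patch: 3D array (patch surrounding a voxel of the current template).
    image_patches: iterable of same-shaped patches, one per aligned image.
    Weights decay exponentially with Euclidean patch distance, with the
    bandwidth h set adaptively to the median distance.
    """
    t = np.asarray(template_patch, float).ravel()
    D = np.array([np.linalg.norm(np.asarray(p, float).ravel() - t)
                  for p in image_patches])
    h = np.median(D) + 1e-12   # dynamic bandwidth; epsilon guards against h = 0
    w = np.exp(-D / h)         # assumed kernel form
    return w / w.sum()         # normalize so the weights sum to one
```

A patch identical to the template gets the largest (pre-normalization) weight of exp(0) = 1, so outlier patches contribute little to the estimated intensity.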
Figure 13 Visualization of the templates obtained from patch-based estimation (a) and voxel-based averaging (b).
Figure 14 Visualization of the gradient magnitude of the templates obtained from patch-based estimation (a) and voxel-based averaging (b).
Figure 15 Visualization of pure WM and GM regions in templates generated using patch-based estimation (a) and voxel-based averaging (b).
Figure 16 Distribution of
Figure 17 Distribution of
Image similarity weights as prior knowledge for the template construction.
Image | Weight |
---|---|
1 | 0.143001 |
2 | 0.143001 |
3 | 0.138172 |
4 | 0.145285 |
5 | 0.135802 |
6 | 0.147039 |
7 | 0.147702 |
Comparison of
Template | Sum | Average |
---|---|---|
| 33,566,831.060 | 3.935 |
| 33,735,950.577 | 3.955 |
Comparison of the intensity quality for
Template | | | |
---|---|---|---|
| 60.958 | 0.418 | 0.057 |
| 55.175 | 0.393 | 0.368 |
† The median value of the
Median
Template | |
---|---|
BT-HSAF | −0.02368 |
US200 | −0.02557 |
CN200 | −0.02513 |
IBA100 | −0.02413 |
Comparison of evaluation aspects between the current study and previous work.
Evaluation Aspect | Current Study | Previous Work |
---|---|---|
Unbiasedness | Used voxel-wise Weighted template ( | Parvathaneni et al. [ Weighted cortical averages reduced bias and improved stability |
Image Quality | Patch-based template ( Higher | Coupé et al. [ Yang et al. [ |
Usability | BT-HSAF had Compared with IBA100, CN200, US200 | Sivaswamy et al. [ Yang et al. [ |
1. Bear, M.; Connors, B.; Paradiso, M.A. Neuroscience: Exploring the Brain, Enhanced Edition: Exploring the Brain; Jones & Bartlett Learning: Burlington, MA, USA, 2020.
2. Squire, L.R.; Bloom, F.E.; Spitzer, N.C.; Gage, F.H.; Albright, T.D. Encyclopedia of Neuroscience; Academic Press: Cambridge, MA, USA, 2009.
3. Ajtai, B.; Masdeu, J.C.; Lindzen, E. Structural Imaging using Magnetic Resonance Imaging and Computed Tomography. Bradley’s Neurology in Clinical Practice; Daroff, R.B.; Jankovic, J.; Mazziotta, J.C.; Pomeroy, S.L. Elsevier: Amsterdam, The Netherlands, 2016; pp. 411-458.e7.
4. Meyer, P.T.; Rijntjes, M.; Hellwig, S.; Klöppel, S.; Weiller, C. Functional Neuroimaging: Functional Magnetic Resonance Imaging, Positron Emission Tomography, and Single-Photon Emission Computed Tomography. Bradley’s Neurology in Clinical Practice; Daroff, R.B.; Jankovic, J.; Mazziotta, J.C.; Pomeroy, S.L. Elsevier: Amsterdam, The Netherlands, 2016; pp. 486-503.e5.
5. Toga, A.; Mazziotta, J. Brain Mapping: The Methods; Academic Press: Cambridge, MA, USA, 2002.
6. Evans, A.C.; Janke, A.L.; Collins, D.L.; Baillet, S. Brain templates and atlases. Neuroimage; 2012; 62, pp. 911-922. [DOI: https://dx.doi.org/10.1016/j.neuroimage.2012.01.024]
7. Mandal, P.K.; Mahajan, R.; Dinov, I.D. Structural Brain Atlases: Design, Rationale, and Applications in Normal and Pathological Cohorts. J. Alzheimer’s Dis.; 2012; 31, pp. S169-S188. [DOI: https://dx.doi.org/10.3233/JAD-2012-120412]
8. Ciric, R.; Thompson, W.H.; Lorenz, R.; Goncalves, M.; MacNicol, E.E.; Markiewicz, C.J.; Halchenko, Y.O.; Ghosh, S.S.; Gorgolewski, K.J.; Poldrack, R.A.
9. Team, C.P.P. Chinese Brain PET Template. 2025; Available online: https://www.nitrc.org/projects/cnpet/ (accessed on 4 May 2025).
10. Team, F. Oxford-MM Templates. 2025; Available online: https://pages.fmrib.ox.ac.uk/fsl/oxford-mm-templates/ (accessed on 4 May 2025).
11. Talairach, J. Co-Planar Stereotaxic Atlas of the Human Brain-3-Dimensional Proportional System: An Approach to Cerebral Imaging; Thieme Medical Publishers: Stuttgart, New York, 1988.
12. Evans, A.C.; Collins, D.L.; Mills, S.; Brown, E.D.; Kelly, R.L.; Peters, T.M. 3D statistical neuroanatomical models from 305 MRI volumes. Proceedings of the 1993 IEEE Conference Record Nuclear Science Symposium and Medical Imaging Conference; San Francisco, CA, USA, 31 October–6 November 1993; IEEE: Piscataway, NJ, USA, 1993; pp. 1813-1817.
13. Mazziotta, J.; Toga, A.; Evans, A.; Fox, P.; Lancaster, J.; Zilles, K.; Woods, R.; Paus, T.; Simpson, G.; Pike, B.
14. Mazziotta, J.; Toga, A.; Evans, A.; Fox, P.; Lancaster, J.; Zilles, K.; Woods, R.; Paus, T.; Simpson, G.; Pike, B.
15. Tang, Y.; Hojatkashani, C.; Dinov, I.D.; Sun, B.; Fan, L.; Lin, X.; Qi, H.; Hua, X.; Liu, S.; Toga, A.W. The construction of a Chinese MRI brain atlas: A morphometric comparison study between Chinese and Caucasian cohorts. Neuroimage; 2010; 51, pp. 33-41. [DOI: https://dx.doi.org/10.1016/j.neuroimage.2010.01.111] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/20152910]
16. Bhalerao, G.V.; Parlikar, R.; Agrawal, R.; Shivakumar, V.; Kalmady, S.V.; Rao, N.P.; Agarwal, S.M.; Narayanaswamy, J.C.; Reddy, Y.J.; Venkatasubramanian, G. Construction of population-specific Indian MRI brain template: Morphometric comparison with Chinese and Caucasian templates. Asian J. Psychiatry; 2018; 35, pp. 93-100. [DOI: https://dx.doi.org/10.1016/j.ajp.2018.05.014]
17. Pai, P.P.; Mandal, P.K.; Punjabi, K.; Shukla, D.; Goel, A.; Joon, S.; Roy, S.; Sandal, K.; Mishra, R.; Lahoti, R. BRAHMA: Population specific T1, T2, and FLAIR weighted brain templates and their impact in structural and functional imaging studies. Magn. Reson. Imaging; 2020; 70, pp. 5-21. [DOI: https://dx.doi.org/10.1016/j.mri.2019.12.009]
18. Yang, G.; Zhou, S.; Bozek, J.; Dong, H.M.; Han, M.; Zuo, X.N.; Liu, H.; Gao, J.H. Sample sizes and population differences in brain template construction. NeuroImage; 2020; 206, 116318. [DOI: https://dx.doi.org/10.1016/j.neuroimage.2019.116318]
19. Wang, H.; Tian, Y.; Liu, Y.; Chen, Z.; Zhai, H.; Zhuang, M.; Zhang, N.; Jiang, Y.; Gao, Y.; Feng, H.
20. Sivaswamy, J.; Thottupattu, A.J.; Mehta, R.; Sheelakumari, R.; Kesavadas, C. Construction of Indian human brain atlas. Neurol. India; 2019; 67, pp. 229-234. [DOI: https://dx.doi.org/10.4103/0028-3886.253639]
21. Liang, P.; Shi, L.; Chen, N.; Luo, Y.; Wang, X.; Liu, K.; Mok, V.C.; Chu, W.C.; Wang, D.; Li, K. Construction of brain atlases based on a multi-center MRI dataset of 2020 Chinese adults. Sci. Rep.; 2015; 5, 18216. [DOI: https://dx.doi.org/10.1038/srep18216]
22. Xie, W.; Richards, J.E.; Lei, D.; Zhu, H.; Lee, K.; Gong, Q. The construction of MRI brain/head templates for Chinese children from 7 to 16 years of age. Dev. Cogn. Neurosci.; 2015; 15, pp. 94-105. [DOI: https://dx.doi.org/10.1016/j.dcn.2015.08.008] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/26343862]
23. Lee, H.; Yoo, B.I.; Han, J.W.; Lee, J.J.; Lee, E.Y.; Kim, J.H.; Kim, K.W. Construction and validation of brain MRI templates from a Korean normal elderly population. Psychiatry Investig.; 2016; 13, pp. 135-145. [DOI: https://dx.doi.org/10.4306/pi.2016.13.1.135] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/26766956]
24. Holla, B.; Taylor, P.A.; Glen, D.R.; Lee, J.A.; Vaidya, N.; Mehta, U.M.; Venkatasubramanian, G.; Pal, P.K.; Saini, J.; Rao, N.P.
25. Arthofer, C.; Smith, S.M.; Douaud, G.; Bartsch, A.; Alfaro-Almagro, F.; Andersson, J.; Lange, F.J. Internally-consistent and fully-unbiased multimodal MRI brain template construction from UK Biobank: Oxford-MM. Imaging Neurosci.; 2024; 2, pp. 1-27. [DOI: https://dx.doi.org/10.1162/imag_a_00361]
26. Geng, X.; Chan, P.H.; Lam, H.S.; Chu, W.C.; Wong, P.C. Brain templates for Chinese babies from newborn to three months of age. NeuroImage; 2024; 289, 120536. [DOI: https://dx.doi.org/10.1016/j.neuroimage.2024.120536]
27. Feng, L.; Li, H.; Oishi, K.; Mishra, V.; Song, L.; Peng, Q.; Ouyang, M.; Wang, J.; Slinger, M.; Jeon, T.
28. Jae, S.L.; Dong, S.L.; Kim, J.; Yu, K.K.; Kang, E.; Kang, H.; Keon, W.K.; Jong, M.L.; Kim, J.J.; Park, H.J.
29. Xing, W.; Nan, C.; ZhenTao, Z.; Rong, X.; Luo, J.; Zhuo, Y.; DingGang, S.; KunCheng, L. Probabilistic MRI Brain Anatomical Atlases Based on 1000 Chinese Subjects. PLoS ONE; 2013; 8, e50939. [DOI: https://dx.doi.org/10.1371/journal.pone.0050939]
30. Zhao, T.; Liao, X.; Fonov, V.S.; Wang, Q.; Men, W.; Wang, Y.; Qin, S.; Tan, S.; Gao, J.H.; Evans, A.
31. Rueckert, D.; Frangi, A.; Schnabel, J. Automatic construction of 3-D statistical deformation models of the brain using nonrigid registration. IEEE Trans. Med. Imaging; 2003; 22, pp. 1014-1025. [DOI: https://dx.doi.org/10.1109/TMI.2003.815865]
32. Jongen, C.; Pluim, J.P.; Nederkoorn, P.J.; Viergever, M.A.; Niessen, W.J. Construction and evaluation of an average CT brain image for inter-subject registration. Comput. Biol. Med.; 2004; 34, pp. 647-662. [DOI: https://dx.doi.org/10.1016/j.compbiomed.2003.10.003]
33. Joshi, S.; Davis, B.; Jomier, M.; Gerig, G. Unbiased diffeomorphic atlas construction for computational anatomy. NeuroImage; 2004; 23, pp. S151-S160. [DOI: https://dx.doi.org/10.1016/j.neuroimage.2004.07.068] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/15501084]
34. Christensen, G.E.; Johnson, H.J.; Vannier, M.W. Synthesizing average 3D anatomical shapes. NeuroImage; 2006; 32, pp. 146-158. [DOI: https://dx.doi.org/10.1016/j.neuroimage.2006.03.018] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/16697223]
35. Noblet, V.; Heinrich, C.; Heitz, F.; Armspach, J.P. Symmetric Nonrigid Image Registration: Application to Average Brain Templates Construction. Proceedings of the Medical Image Computing and Computer-Assisted Intervention—MICCAI 2008; New York, NY, USA, 6–10 September 2008; Metaxas, D.; Axel, L.; Fichtinger, G.; Székely, G. Springer: Berlin/Heidelberg, Germany, 2008; pp. 897-904.
36. Avants, B.B.; Yushkevich, P.; Pluta, J.; Minkoff, D.; Korczykowski, M.; Detre, J.; Gee, J.C. The optimal template effect in hippocampus studies of diseased populations. NeuroImage; 2010; 49, pp. 2457-2466. [DOI: https://dx.doi.org/10.1016/j.neuroimage.2009.09.062] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/19818860]
37. Coupé, P.; Fonov, V.; Manjón, J.V.; Collins, L.D. Template Construction using a Patch-based Robust Estimator. Proceedings of the Organization for Human Brain Mapping 2010 Annual Meeting; Barcelona, Spain, 6–10 June 2010.
38. Fonov, V.; Evans, A.C.; Botteron, K.; Almli, C.R.; McKinstry, R.C.; Collins, D.L. Unbiased average age-appropriate atlases for pediatric studies. NeuroImage; 2011; 54, pp. 313-327. [DOI: https://dx.doi.org/10.1016/j.neuroimage.2010.07.033]
39. Guimond, A.; Meunier, J.; Thirion, J.P. Automatic computation of average brain models. Proceedings of the Medical Image Computing and Computer-Assisted Intervention—MICCAI’98; Cambridge, MA, USA, 11–13 October 1998; Wells, W.M.; Colchester, A.; Delp, S. Springer: Berlin/Heidelberg, Germany, 1998; pp. 631-640.
40. Guimond, A.; Roche, A.; Ayache, N.; Meunier, J. Three-dimensional multimodal brain warping using the Demons algorithm and adaptive intensity corrections. IEEE Trans. Med. Imaging; 2001; 20, pp. 58-69. [DOI: https://dx.doi.org/10.1109/42.906425]
41. Miller, M.; Banerjee, A.; Christensen, G.; Joshi, S.; Khaneja, N.; Grenander, U.; Matejic, L. Statistical methods in computational anatomy. Stat. Methods Med. Res.; 1997; 6, pp. 267-299. [DOI: https://dx.doi.org/10.1177/096228029700600305]
42. Collins, D.L.; Neelin, P.; Peters, T.M.; Evans, A.C. Automatic 3D intersubject registration of MR volumetric data in standardized Talairach space. J. Comput. Assist. Tomogr.; 1994; 18, pp. 192-205. [DOI: https://dx.doi.org/10.1097/00004728-199403000-00005]
43. Zhang, Y.; Zhang, J.; Hsu, J.; Oishi, K.; Faria, A.V.; Albert, M.; Miller, M.I.; Mori, S. Evaluation of group-specific, whole-brain atlas generation using Volume-based Template Estimation (VTE): Application to normal and Alzheimer’s populations. NeuroImage; 2014; 84, pp. 406-419. [DOI: https://dx.doi.org/10.1016/j.neuroimage.2013.09.011] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/24051356]
44. Yang, Z.; Chen, G.; Shen, D.; Yap, P.T. Robust fusion of diffusion MRI data for template construction. Sci. Rep.; 2017; 7, 12950. [DOI: https://dx.doi.org/10.1038/s41598-017-13247-w]
45. Comaniciu, D.; Meer, P. Mean shift: A robust approach toward feature space analysis. IEEE Trans. Pattern Anal. Mach. Intell.; 2002; 24, pp. 603-619. [DOI: https://dx.doi.org/10.1109/34.1000236]
46. Schuh, A.; Makropoulos, A.; Robinson, E.C.; Cordero-Grande, L.; Hughes, E.; Hutter, J.; Price, A.N.; Murgasova, M.; Teixeira, R.P.A.G.; Tusor, N.
47. Parvathaneni, P.; Lyu, I.; Huo, Y.; Blaber, J.; Hainline, A.E.; Kang, H.; Woodward, N.D.; Landman, B.A. Constructing statistically unbiased cortical surface templates using feature-space covariance. Proceedings of the Medical Imaging 2018: Image Processing; Houston, TX, USA, 10–15 February 2018; Angelini, E.D.; Landman, B.A. International Society for Optics and Photonics, SPIE: Bellingham, WA, USA, 2018; Volume 10574, 1057406. [DOI: https://dx.doi.org/10.1117/12.2293641]
48. Dalca, A.; Rakic, M.; Guttag, J.; Sabuncu, M. Learning Conditional Deformable Templates with Convolutional Networks. Proceedings of the Advances in Neural Information Processing Systems 32; Vancouver, BC, Canada, 8–14 December 2019; Wallach, H.; Larochelle, H.; Beygelzimer, A.; d’Alché-Buc, F.; Fox, E.; Garnett, R. Curran Associates, Inc.: Red Hook, NY, USA, 2019; Volume 32.
49. Lecun, Y. THE MNIST DATABASE of Handwritten Digits. 1998; Available online: http://yann.lecun.com/exdb/mnist/ (accessed on 26 April 2025).
50. Jongejan, J.; Rowley, H.; Kawashima, T.; Kim, J.; Fox-Gieg, N. The quick, draw!-ai experiment. Mt. View, CA, Accessed Feb; 2016; 17, 4.
51. Ridwan, A.R.; Niaz, M.R.; Wu, Y.; Qi, X.; Zhang, S.; Kontzialis, M.; Javierre-Petit, C.; Tazwar, M.; Initiative, A.D.N.; Bennett, D.A.
52. Guimond, A.; Meunier, J.; Thirion, J.P. Average Brain Models: A Convergence Study. Comput. Vis. Image Underst.; 2000; 77, pp. 192-210. [DOI: https://dx.doi.org/10.1006/cviu.1999.0815]
53. Wang, Y.; Jiang, F.; Liu, Y. Reference-free brain template construction with population symmetric registration. Med. Biol. Eng. Comput.; 2020; 58, pp. 2083-2093. [DOI: https://dx.doi.org/10.1007/s11517-020-02226-5]
54. Gu, D.; Shi, F.; Hua, R.; Wei, Y.; Li, Y.; Zhu, J.; Zhang, W.; Zhang, H.; Yang, Q.; Huang, P.
55. Gu, D.; Cao, X.; Ma, S.; Chen, L.; Liu, G.; Shen, D.; Xue, Z. Pair-Wise and Group-Wise Deformation Consistency in Deep Registration Network. Proceedings of the Medical Image Computing and Computer Assisted Intervention—MICCAI 2020; Lima, Peru, 4–8 October 2020; Martel, A.L.; Abolmaesumi, P.; Stoyanov, D.; Mateus, D.; Zuluaga, M.A.; Zhou, S.K.; Racoceanu, D.; Joskowicz, L. Springer: Cham, Switzerland, 2020; pp. 171-180.
56. Miolane, N.; Holmes, S.; Pennec, X. Topologically Constrained Template Estimation via Morse–Smale Complexes Controls Its Statistical Consistency. SIAM J. Appl. Algebra Geom.; 2018; 2, pp. 348-375. [DOI: https://dx.doi.org/10.1137/17M1129222]
57. Google Colab. Available online: https://colab.research.google.com/ (accessed on 20 February 2025).
58. Lancaster, J.L.; Fox, P.T. Talairach space as a tool for intersubject standardization in the brain. Handbook of Medical Imaging; Academic Press: Cambridge, MA, USA, 2000; pp. 555-567.
59. Fonov, V.S.; Evans, A.C.; McKinstry, R.C.; Almli, C.R.; Collins, D. Unbiased nonlinear average age-appropriate brain templates from birth to adulthood. NeuroImage; 2009; 47, S102. [DOI: https://dx.doi.org/10.1016/S1053-8119(09)70884-5]
60. Hawkes, D.; Barratt, D.; Carter, T.; McClelland, J.; Crum, B. Nonrigid Registration. Image-Guided Interventions; Springer: Boston, MA, USA, 2008; pp. 193-218. [DOI: https://dx.doi.org/10.1007/978-0-387-73858-1_7]
61. Jenkinson, M.; Smith, S. A global optimisation method for robust affine registration of brain images. Med. Image Anal.; 2001; 5, pp. 143-156. [DOI: https://dx.doi.org/10.1016/S1361-8415(01)00036-6] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/11516708]
62. Jenkinson, M.; Bannister, P.; Brady, M.; Smith, S. Improved optimization for the robust and accurate linear registration and motion correction of brain images. Neuroimage; 2002; 17, pp. 825-841. [DOI: https://dx.doi.org/10.1006/nimg.2002.1132]
63. Greve, D.N.; Fischl, B. Accurate and robust brain image alignment using boundary-based registration. Neuroimage; 2009; 48, pp. 63-72. [DOI: https://dx.doi.org/10.1016/j.neuroimage.2009.06.060] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/19573611]
64. McRobbie, D.W.; Moore, E.A.; Graves, M.J.; Prince, M.R. Improving Your Image: How to Avoid Artefacts. MRI from Picture to Proton; Cambridge University Press: Cambridge, UK, 2017; pp. 81-101.
65. Juntu, J.; Sijbers, J.; Van Dyck, D.; Gielen, J. Bias field correction for MRI images. Proceedings of the Computer Recognition Systems: Proceedings of the 4th International Conference on Computer Recognition Systems CORES’05; Rydzyna Castle, Poland, 22–25 May 2005; Kurzyński, M.; Puchała, E.; Woźniak, M.; żołnierek, A. Springer: Berlin/Heidelberg, Germany, 2005; pp. 543-551.
66. Tustison, N.J.; Avants, B.B.; Cook, P.A.; Zheng, Y.; Egan, A.; Yushkevich, P.A.; Gee, J.C. N4ITK: Improved N3 Bias Correction. IEEE Trans. Med. Imaging; 2010; 29, pp. 1310-1320. [DOI: https://dx.doi.org/10.1109/TMI.2010.2046908]
67. Tustison, N.; Gee, J. N4ITK: Nick’s N3 ITK implementation for MRI bias field correction. Insight J.; 2009; 29, pp. 1310-1320. [DOI: https://dx.doi.org/10.54294/jculxw]
68. Constantinides, C. Signal, Noise, Resolution, and Image Contrast. Magnetic Resonance Imaging: The Basics; CRC Press: Boca Raton, FL, USA, 2016; Chapter 9, pp. 103-114.
69. Chaudhari, A. Denoising for Magnetic Resonance Imaging; Stanford University: Stanford, CA, USA, 2016.
70. Moreno López, M.; Frederick, J.M.; Ventura, J. Evaluation of MRI denoising methods using unsupervised learning. Front. Artif. Intell.; 2021; 4, 642731. [DOI: https://dx.doi.org/10.3389/frai.2021.642731]
71. Mäkinen, Y.; Azzari, L.; Foi, A. Collaborative filtering of correlated noise: Exact transform-domain variance for improved shrinkage and patch matching. IEEE Trans. Image Process.; 2020; 29, pp. 8339-8354. [DOI: https://dx.doi.org/10.1109/TIP.2020.3014721]
72. Maggioni, M.; Katkovnik, V.; Egiazarian, K.; Foi, A. Nonlocal Transform-Domain Filter for Volumetric Data Denoising and Reconstruction. IEEE Trans. Image Process.; 2013; 22, pp. 119-133. [DOI: https://dx.doi.org/10.1109/TIP.2012.2210725]
73. Mäkinen, Y.; Marchesini, S.; Foi, A. Ring artifact and Poisson noise attenuation via volumetric multiscale nonlocal collaborative filtering of spatially correlated noise. J. Synchrotron Radiat.; 2022; 29, pp. 829-842. [DOI: https://dx.doi.org/10.1107/S1600577522002739]
74. Leung, K.K.; Barnes, J.; Modat, M.; Ridgway, G.R.; Bartlett, J.W.; Fox, N.C.; Ourselin, S. Brain MAPS: An automated, accurate and robust brain extraction technique using a template library. NeuroImage; 2011; 55, pp. 1091-1108. [DOI: https://dx.doi.org/10.1016/j.neuroimage.2010.12.067] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/21195780]
75. Fennema-Notestine, C.; Ozyurt, I.B.; Clark, C.P.; Morris, S.; Bischoff-Grethe, A.; Bondi, M.W.; Jernigan, T.L.; Fischl, B.; Segonne, F.; Shattuck, D.W.
76. Kalavathi, P.; Prasath, V.S. Methods on skull stripping of MRI head scan images—A review. J. Digit. Imaging; 2016; 29, pp. 365-379. [DOI: https://dx.doi.org/10.1007/s10278-015-9847-8]
77. Chaurasia, A.; Culurciello, E. LinkNet: Exploiting encoder representations for efficient semantic segmentation. Proceedings of the 2017 IEEE Visual Communications and Image Processing (VCIP); St. Petersburg, FL, USA, 10–13 December 2017; pp. 1-4. [DOI: https://dx.doi.org/10.1109/vcip.2017.8305148]
78. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. Proceedings of the Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015; Munich, Germany, 5–9 October 2015; Navab, N.; Hornegger, J.; Wells, W.M.; Frangi, A.F. Springer: Cham, Switzerland, 2015; pp. 234-241.
79. Fisch, L.; Zumdick, S.; Barkhau, C.; Emden, D.; Ernsting, J.; Leenings, R.; Sarink, K.; Winter, N.R.; Risse, B.; Dannlowski, U.
80. Weinreb, J.; Redman, H. Sources Of Contrast And Pulse Sequences. Magnetic Resonance Imaging of the Body: Advanced Exercises in Diagnostic Radiology Series; Saunders: Philadelphia, PA, USA, 1987; pp. 12-16.
81. Carré, A.; Klausner, G.; Edjlali, M.; Lerousseau, M.; Briend-Diop, J.; Sun, R.; Ammari, S.; Reuzé, S.; Alvarez Andres, E.; Estienne, T.
82. Reinhold, J.C.; Dewey, B.E.; Carass, A.; Prince, J.L. Evaluating the impact of intensity normalization on MR image synthesis. Proceedings of the Medical Imaging 2019: Image Processing; San Diego, CA, USA, 16–21 February 2019; Angelini, E.D.; Landman, B.A. SPIE: Bellingham, WA, USA, 2019; [DOI: https://dx.doi.org/10.1117/12.2513089]
83. Nyúl, L.G.; Udupa, J.K.; Zhang, X. New variants of a method of MRI scale standardization. IEEE Trans. Med. Imaging; 2000; 19, pp. 143-150. [DOI: https://dx.doi.org/10.1109/42.836373]
84. Van Griethuysen, J.J.; Fedorov, A.; Parmar, C.; Hosny, A.; Aucoin, N.; Narayan, V.; Beets-Tan, R.G.; Fillion-Robin, J.C.; Pieper, S.; Aerts, H.J. Computational radiomics system to decode the radiographic phenotype. Cancer Res.; 2017; 77, pp. e104-e107. [DOI: https://dx.doi.org/10.1158/0008-5472.CAN-17-0339]
85. Olivieri, A.C. Principal Component Analysis. Introduction to Multivariate Calibration: A Practical Approach; Springer International Publishing: Cham, Switzerland, 2018; pp. 57-71. [DOI: https://dx.doi.org/10.1007/978-3-319-97097-4_4]
86. Ben-Israel, A.; Greville, T.N.E. Generalized Inverses: Theory and Applications, 2nd ed.; CMS Books in Mathematics; Springer: New York, NY, USA, 2003; pp. 1-5. Originally published by Wiley-Interscience, 1974. [DOI: https://dx.doi.org/10.1007/b97366]
87. Advanced Normalization Tools. Available online: https://stnava.github.io/ANTs/ (accessed on 20 February 2025).
88. Avants, B.B.; Epstein, C.L.; Grossman, M.; Gee, J.C. Symmetric diffeomorphic image registration with cross-correlation: Evaluating automated labeling of elderly and neurodegenerative brain. Med. Image Anal.; 2008; 12, pp. 26-41. [DOI: https://dx.doi.org/10.1016/j.media.2007.06.004] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/17659998]
89. Avants, B.B.; Tustison, N.; Johnson, H. Advanced Normalization Tools (ANTS) Release 2.X. 2014; Available online: https://gaetanbelhomme.files.wordpress.com/2016/08/ants2.pdf (accessed on 20 February 2025).
90. Marquart, G.D.; Tabor, K.M.; Horstick, E.J.; Brown, M.; Geoca, A.K.; Polys, N.F.; Nogare, D.D.; Burgess, H.A. High-precision registration between zebrafish brain atlases using symmetric diffeomorphic normalization. GigaScience; 2017; 6, gix056. [DOI: https://dx.doi.org/10.1093/gigascience/gix056]
91. Coupé, P.; Yger, P.; Prima, S.; Hellier, P.; Kervrann, C.; Barillot, C. An optimized blockwise nonlocal means denoising filter for 3-D magnetic resonance images. IEEE Trans. Med. Imaging; 2008; 27, pp. 425-441. [DOI: https://dx.doi.org/10.1109/TMI.2007.906087]
92. Michelson, A.A. Studies in Optics; University of Chicago Press: Chicago, IL, USA, 1927.
93. Shattuck, D.W.; Sandor-Leahy, S.R.; Schaper, K.A.; Rottenberg, D.A.; Leahy, R.M. Magnetic Resonance Image Tissue Classification Using a Partial Volume Model. NeuroImage; 2001; 13, pp. 856-876. [DOI: https://dx.doi.org/10.1006/nimg.2000.0730] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/11304082]
94. Kullback, S.; Leibler, R.A. On information and sufficiency. Ann. Math. Stat.; 1951; 22, pp. 79-86. [DOI: https://dx.doi.org/10.1214/aoms/1177729694]
95. Fischl, B. FreeSurfer. NeuroImage; 2012; 62, pp. 774-781. [DOI: https://dx.doi.org/10.1016/j.neuroimage.2012.01.021] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/22248573]
96. Functional Imaging Laboratory. Statistical Parametric Mapping. Available online: https://www.fil.ion.ucl.ac.uk/spm/ (accessed on 20 June 2025).
97. Alfano, V.; Granato, G.; Mascolo, A.; Tortora, S.; Basso, L.; Farriciello, A.; Coppola, P.; Manfredonia, M.; Toro, F.; Tarallo, A.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Abstract
In brain mapping, structural templates derived from population-specific MRI scans are essential for normalizing individual brains into a common space. This normalization facilitates accurate group comparisons and statistical analyses. Although templates have been developed for various populations, none currently exist for the Saudi population. To our knowledge, this work introduces the first structural brain template constructed and evaluated from a homogeneous subset of T1-weighted MRI scans of 11 healthy Saudi female subjects aged 25 to 30. Our approach combines the symmetric model construction (SMC) method with a covariance-based weighting scheme to mitigate bias caused by over-represented anatomical features. To enhance the quality of the template, we employ a patch-based mean-shift intensity estimation method that improves image sharpness, contrast, and robustness to outliers. Additionally, we implement computational optimizations, including parallelization and vectorized operations, to increase processing efficiency. The resulting template exhibits high image quality, characterized by enhanced sharpness, improved tissue contrast, reduced sensitivity to outliers, and minimized anatomical bias. This Saudi-specific brain template addresses a critical gap in neuroimaging resources and lays a reliable foundation for future studies on brain structure and function in this population.
Details
1 Department of Computer Science, College of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia; [email protected]
2 Department of Electrical Engineering, Faculty of Engineering at Shoubra, Benha University, Benha 13511, Egypt; [email protected]
3 Department of Diagnostic Radiology, Faculty of Applied Medical Sciences, King Abdulaziz University, Jeddah 21589, Saudi Arabia; [email protected]
4 The Neuroscience Research Unit, Faculty of Medicine, King Abdulaziz University, Jeddah 21589, Saudi Arabia; [email protected]