1. Introduction
The simultaneous investigation of magnetic resonance (MR) images and the identification of anomalies in sensitive regions of the human brain rely heavily on MR image sequences from 1.5 T Siemens scanners [1]. It is common practice to combine several imaging modalities, such as CT, PET, and MRI, when attempting to localize an infection in a patient. The fused image carries more detailed information, but it is also considerably more disordered. It undoubtedly provides anatomical structure information for the damaged part, as well as information on hydration, glucose, and brain metabolism. Even when several images are available, it is difficult to examine the depth and infectiousness of the disease. An MRI image may exhibit a high level of noise for many reasons, such as defective equipment or problems during data acquisition. The stochastic nature of electromagnetic waves, which include X-rays, visible light, and gamma rays, causes artefacts in images that are commonly referred to as photon noise. Optical aberrations, the noise level, and the optical setup are only a few of the variables that complicate improving the convolution model for images. The point spread function (PSF) used in image modelling is driven by the impulse response of the system. Denoising and superresolution techniques are used to remove the noise content from the images, resulting in high-quality outputs. Using nonblind iterative algorithms such as Richardson-Lucy (RL) and the alternating direction method of multipliers (ADMM), we compare how well they handle images given the PSF. After averaging as many individual point sources as possible, feature extraction is conducted over a number of iterations in order to find accurate pixel elements that retain a high-resolution image. With the RL and ADMM superresolution methods, the noise associated with the PSF can be removed to obtain high-resolution images with crisp edges. The nonlinear mapping between the received image and the original image must be correctly identified. The depth of the noise filtering operation is obtained using a multilayer CNN implementation. The number of filtering layers present in the restored image is predicted using a node matrix (artificial insertion matrix). The nonlinear mapping between observed data and ground truth is learned using a deep learning strategy that incorporates fuzzy C-means (FCM), convolutional neural networks (CNN), and recurrent neural networks (RNN). In optimization terms, precise analysis of viral propagation and accurate target localization cannot be reached in a single step, so a heuristic-assisted two-stage optimization approach can be developed to assess the abnormalities in the brain detected by various MRI scanning modalities. Multiple structural layers have been used to improve visual quality and eliminate motion artefacts in multicontrast MR images containing various tumour complaints, for the quality detection and evaluation of tumour tissues in the human brain region. The remainder of this paper is organized as follows: Section 2 reviews quality analysis approaches and recent advances in tumour cell research for accurately measuring tumour tissue size in sensitive areas of the human body.
Section 3 discusses the suggested LSTM-based RNN framework's methods for reducing motion blur effectively and considerably improving anomaly detection. ConvLSTM layers are used to extract spatial and temporal features, which are then passed through a multistructure layer in the proposed framework for visual enhancement. This provides detailed information on the tumour's infectious level and radiation depth in sensitive areas. The simulations use a database of images collected directly from licensed hospitals. In Section 4, the simulation results of the proposed framework are compared against other current approaches and the radiated-region estimates are evaluated. Section 5 concludes with a summary of the findings from the study of simulation results.
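Before proceeding, the nonblind deconvolution mentioned above can be prototyped with off-the-shelf tools. The following is a minimal sketch of Richardson-Lucy restoration using scikit-image, not the implementation of this work; the Gaussian PSF, its size and sigma, and the iteration count are illustrative assumptions.

import numpy as np
from scipy.signal import convolve2d
from skimage import restoration

def gaussian_psf(size=9, sigma=2.0):
    # Normalized Gaussian point spread function (assumed shape, for illustration only).
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return psf / psf.sum()

def restore_slice(blurred_slice, num_iter=30):
    # Nonblind Richardson-Lucy deconvolution of one MR slice with a known PSF.
    return restoration.richardson_lucy(blurred_slice, gaussian_psf(), num_iter=num_iter)

# Synthetic usage: blur a random "slice" with the same PSF and restore it.
rng = np.random.default_rng(0)
clean = rng.random((128, 128))
blurred = convolve2d(clean, gaussian_psf(), mode="same", boundary="symm")
restored = restore_slice(blurred)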
2. Related Works
This section contains a thorough review of the literature on medical image processing methodologies and approaches. There have been numerous contributions to the reliable detection of infectious occupancy in the human body using both traditional and soft-computing- (SC-) based techniques. Previously, a standard traditional methodology recognized the anomalous section by using an image fusion method based on the discrete wavelet transform (DWT). It provides adequate MRI image preprocessing and removes nonreadable noisy components using spatial filtering, a technique similar to Gaussian filtering [1]. However, the texture and color of the brain image were altered after using the image fusion approach; as a result, color contrast issues are associated with this strategy. This problem is addressed by a visualization technique based on the shearlet transform (ST), which maintains statistical image quality with little color distortion at various high-pass frequencies [2]. Edge indexing (QAB/F) and mutual information (MI) are evaluated in that work, which may take a long time to compute for a specific task. Then, in order to improve the method for analyzing human anomalies, a segmentation approach is used in which the entire region is divided into a number of tiny regions depending on a threshold value generated from a standard reference image. The adaptive threshold approach is employed for structural investigation of the liver region [3]; it achieved 96 percent accuracy in the segmentation process and, as a result, recognized the anomalies. In addition, graph cut and labelling algorithms are widely employed for their performance and accuracy [4]. However, capturing MRI images in varied positions in order to obtain the intended structural content requires considerable human interaction. Later, a quasi-Monte Carlo method, a form of nonlinear mapping, was used to focus on a single growing location (lung nodule) [5]. It contains a local adaptive method that performs feature extraction by mapping fuzzy sets and then extends into the probability model included in the expectation-maximization (EM) algorithm [6, 7]. It performs the clustering operation based on cluster head selection. However, the nonlinear adjustment of pixel intensity causes artefacts to develop and diminishes the image's visual quality. This is handled by a possibility-based fuzzy technique that reduces noisy occupancy levels at the end of the segmentation process [8–11]; it increases the intensity of neighboring pixels, preserving the quality of the segmented portion. In order to improve the segmentation operation, the fuzzy C-means (FCM) algorithm is presented, which updates its membership function by assessing the fuzzy factor and a kernel weight metric [12–14]. As a result, an efficient segmentation based on a clustering technique was obtained. Recently, advanced machine learning-based segmentation has been established, in which a convolutional neural network structure is trained via back propagation using supervised and unsupervised methodologies [15]. Preprocessing time is minimized, and the weight update is accomplished using a multilayer perceptron neural network [16]. Following this paradigm, a topologically based mapping procedure involving a one-to-two-stage SOM neural network is introduced for quickly tracing out the target region [17].
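For reference, the fuzzy C-means update discussed above alternates between membership and centroid updates. The sketch below is the plain FCM formulation applied to flattened pixel intensities, not the kernel-weighted variant of [12–14]; the cluster count, fuzzifier m, and iteration budget are placeholder values.

import numpy as np

def fcm(pixels, n_clusters=3, m=2.0, n_iter=50, seed=0):
    # Plain fuzzy C-means on a flattened array of pixel intensities (illustrative only).
    rng = np.random.default_rng(seed)
    x = pixels.reshape(-1, 1).astype(float)             # (N, 1) intensity samples
    u = rng.random((len(x), n_clusters))
    u /= u.sum(axis=1, keepdims=True)                   # fuzzy memberships sum to 1 per pixel
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ x) / um.sum(axis=0)[:, None]  # membership-weighted centroids
        d = np.abs(x - centers.T) + 1e-9                # distance of each pixel to each centroid
        u = 1.0 / d ** (2.0 / (m - 1.0))                # standard FCM membership update
        u /= u.sum(axis=1, keepdims=True)
    return u, centers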
The entropy-gradient segmentation approach is used in this study to minimize the color segmentation problem, and a strong training phase is used to maintain the tradeoffs between tumour detection and classification [18]. In the case of a special scenario, such as tumour classification, a semisupervised method is applied, which performs feature mapping through a self-organizing approach, directing suitable modal analysis and producing improved segmentation results [19]. Because of its lower processing complexity, local window grid (
3. Methodology
Motion artefacts develop during MR scanning in the observation phase of brain cancer, and this research aims to reduce them. By improving the visual quality of motion-blurred images through registration, the proposed LSTM-based RNN architecture accurately detects the tumour's localization. A motion-blurred MR image sequence dataset is first constructed using high-quality ATDC data samples from recognized hospitals. When trained with known data samples in a supervised manner, the suggested framework can identify the blurred region and eliminate artefacts effectively, allowing for better patient care. In addition, certain lost frames of irregular MR image sequences can be forecast.
3.1. Two Different Datasets for Simulation Purpose
(1) ATDC dataset: an automated tumour diagnostic challenge (ATDC) dataset [29] was collected from authorized diagnosis centers for this investigation. Approximately 25 clinical brain MR-scanned image sequences from the ATDC 2020 dataset were taken with various tumour complaints such as primitive neuroectodermal tumour, meningioma, and astrocytoma carcinoma. Each image sequence is divided into 5 to 20 frames per phase, with a slicing thickness of 6 to 9 mm and a spatial resolution of 1.56 mm2/pixel. Two groups are formed based on the depth and thickness of the infection, from which a suitable number of training and testing frames are extracted. That is, 13 scans are allocated for training sample extraction, while the remaining 12 scans are used for testing frames obtained in the prediction phase (a minimal split sketch is given after this list)
(2) Tumour dataset: this dataset was created by gathering sample data from independent patients with the assistance of standard and recognized hospitals. The Siemens Avanto 1.5 T MRI scanner was used to capture all of the received MR images. Each image sequence is sliced into 3 to 9 frames per phase, with a slicing thickness of 4 to 6 mm and a spatial resolution of 1.67 mm2/pixel. Nearly 20 slices are generated, 10 of which are allocated for fine tuning and the remaining 10 slices are reserved for testing frames
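The scan-level train/test allocation described above can be organized as in the following minimal sketch, assuming each scan is already loaded as a NumPy array of frames of identical size; DICOM/NIfTI loading is omitted, and the 13-scan training allocation follows the ATDC description.

import numpy as np

def split_scans(scans, n_train=13, seed=42):
    # Split a list of scan arrays of shape (frames, H, W) into training and testing frame pools.
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(scans))
    train_ids, test_ids = order[:n_train], order[n_train:]
    train_frames = [frame for i in train_ids for frame in scans[i]]  # flatten frames per scan
    test_frames = [frame for i in test_ids for frame in scans[i]]
    return np.stack(train_frames), np.stack(test_frames)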
3.2. Motion Blurring
Consider applying the motion-free MR image (ground truth) as a sequence into the suggested LSTM-based RNN, where the acquired MR images are gated by ECG signals. Figure 1 shows a motion-blurred MR image created by combining k-space sampled data points from different frames in a scanning session. In other words, the Fourier transform (FT) is used to sample an original MR image sequence into k-space data points. Encoded data
[figure(s) omitted; refer to PDF]
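As a concrete illustration of the blurring model just described, the sketch below Fourier-transforms each frame of a sequence into k-space and fills different phase-encoding lines from different frames before inverting, producing a motion-blurred composite. The cyclic line-interleaving pattern is an assumption for illustration, not the exact sampling schedule used in this work.

import numpy as np

def simulate_motion_blur(frames):
    # frames: array of shape (T, H, W) holding motion-free frames of one scanning session.
    # Returns a single motion-blurred image of shape (H, W).
    t, h, w = frames.shape
    kspace = np.fft.fftshift(np.fft.fft2(frames), axes=(-2, -1))  # per-frame k-space
    mixed = np.zeros((h, w), dtype=complex)
    for row in range(h):
        mixed[row] = kspace[row % t, row]  # each phase-encoding row taken from a different frame
    return np.abs(np.fft.ifft2(np.fft.ifftshift(mixed)))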
3.3. Neural Network
The proposed LSTM-based RNN system focuses on extracting precise spatial and temporal features from image frames and enhancing the feature information through a multiscale layer for accurate prediction. As a result, the registration model improves the visual quality of the motion-blurred image, allowing for more precise detection of the tumour region. Figure 2 depicts the proposed LSTM-based RNN architecture, which includes an encoder-decoder module, an LSTM module, and a multiscale layer module. It has forward and backward LSTM branches that extract features and transmit more information to the convolution stage for better reconstruction of the projected frames [30]. Convolution kernels of varying sizes are employed in this study to extract spatial characteristics. The LSTM layer of the proposed framework can be stated as follows:
[figure(s) omitted; refer to PDF]
The proposed LSTM-based RNN includes numerous convolution layers for extracting spatial features from input sequences, together with temporal features computed from image frames using convolution kernels of varying sizes.
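A hedged Keras sketch of the kind of bidirectional ConvLSTM encoder-decoder outlined above is given below. The 3x3 kernels and the 32/64 filter progression echo the kernel counts mentioned in Section 3.4, while the layer depths, activations, loss, and input size are assumptions rather than the exact architecture of this work.

import tensorflow as tf
from tensorflow.keras import layers, models

def build_convlstm_rnn(frame_shape=(128, 128, 1), seq_len=5):
    # Illustrative encoder-ConvLSTM-decoder for motion-blurred MR frame sequences.
    inp = layers.Input(shape=(seq_len, *frame_shape))
    x = inp
    # Encoder: per-frame spatial features with 3x3 kernels and increasing filter counts.
    for filters in (32, 64):
        x = layers.TimeDistributed(layers.Conv2D(filters, 3, padding="same", activation="relu"))(x)
        x = layers.TimeDistributed(layers.MaxPooling2D(2))(x)
    # Bidirectional ConvLSTM captures forward and backward temporal dependencies across frames.
    x = layers.Bidirectional(layers.ConvLSTM2D(64, 3, padding="same", return_sequences=False))(x)
    # Decoder: upsample back to the original frame resolution.
    for filters in (64, 32):
        x = layers.UpSampling2D(2)(x)
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    out = layers.Conv2D(1, 3, padding="same", activation="sigmoid")(x)
    return models.Model(inp, out)

model = build_convlstm_rnn()
model.compile(optimizer="adam", loss="mse")

Training would then proceed in a supervised fashion, for example model.fit(blurred_sequences, motion_free_frames), with the blurred/clean pairs generated as described in Section 3.2.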
3.4. Frame Interpolation
The proposed framework was trained using known data samples in a supervised fashion to detect the blurred portion efficiently and successfully eliminate artefacts, resulting in a visually high-quality MR image for better patient treatment. Furthermore, it can detect missing frames from partial MR image sequences. That is, it can correctly forecast the pre and post frames from motion-blurred MR images and hence produce the smallest error difference. As a result, motion blurring is decreased and the visual quality of the MR-scanned image is maintained. The proposed LSTM-based RNN's pseudocode is detailed further below.
Pseudocode 1: Motion-blurred MR image sequence.
1 Initiation of parameters:
2
3
4
5
6 Apply a Gaussian filter for removal of artefact components using Equation (1)
Random selection of frames:
7 Set the number of possible slice counts per image phase
8 Calculate the minimum slice count and update its volume slicing
9 for i = 1 :
10 Compute:
11
12
13
14 get
15 if
16 then, zero padding is initialized for 25% blur effects
17 else
18 zero padding is not required
19 end if
20 Obtain
21 Use the cropping function to isolate the specific portion
22 end for
Pseudocode 2: Spatial and temporal feature extraction.
1 Initialize the number of convolution blocks, denoted as
2 for i=1 to
3 encode
4 Get the spatial features using Equations (2) to (6) (i.e.
5 Obtain local best
6 Continuous check: if
7 Retain previous state value
8 else if
9 Again, update
10 To get
11 end if
12 end for
Pseudocode 3: Missing frame prediction.
1 Set
2 for i=1 to
3 Continuous check: if
4 then, further enhancement of the feature extraction process parameters is required
5 else
6 Retain previous state value
7 end if
8 end for
9 Compute WD and PL
10 The error difference is calculated using Equations (7) and (8).
The zero-padding interpolation process is included in order to introduce additional image blurriness into the MR scan sequence. Similarly, the reverse operation is performed by the decoder stage, where the same convolution kernel size of 3 is retained. The numbers of kernels are 32, 64, 128, and 256, respectively.
4. Results and Discussion
This part investigates the qualitative and quantitative evaluation of motion artefact reduction in multicontrast MR images with various tumour complaints. The proposed LSTM-based RNN system is guided by two key considerations: (i) validating the effectiveness of forecasting frames before and after the evaluation phase and (ii) ensuring quality retrieval from motion-blurred MR images.
The proposed work focuses on extracting precise spatial and temporal features from image frames and enhancing the feature information with a multiscale layer for accurate prediction. As a result, the registration model improves the visual quality of the motion-blurred image, allowing for more precise detection of the tumour region. It includes three key modules: an encoder-decoder module, an LSTM module, and a multiscale layer module. The proposed framework's output was compared to deep learning methods such as (i) deep cascade of convolution neural network (DC-CNN), (ii) single and multicontrast superresolution convolution neural network (SMSR-CNN), (iii) fusion multiscale information in convolution network (FMSI-CNN), and (iv) deep residual channel attention convolution network (DRCA-CNN) using key performance metrics such as mean squared error (MSE), the structural similarity index measure (SSIM), and the peak signal-to-noise ratio (PSNR).
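The SSIM and PSNR figures reported below can be reproduced for any restored/ground-truth frame pair with scikit-image, as in the generic evaluation sketch that follows; dataset-level averaging is omitted.

import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def frame_quality(ground_truth, restored):
    # Returns (SSIM, PSNR) of one restored frame against its motion-free reference.
    gt = ground_truth.astype(np.float64)
    rec = restored.astype(np.float64)
    rng = gt.max() - gt.min()  # intensity range used by both metrics
    return (structural_similarity(gt, rec, data_range=rng),
            peak_signal_noise_ratio(gt, rec, data_range=rng))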
Table 1 displays the measured SSIM and PSNR values for hybrid combinations of the three modules. It is apparent that the LSTM module and multiscale layer module have the highest SSIM and PSNR scores.
Table 1
Effectiveness of the proposed method compared with existing methods, using averaged values from the ATDC dataset.
Schemes | Encoder-decoder | Conv LSTM | Multiscale | St | SSIM | PSNR | Accuracy
DC-CNN | | | | 0.7983 | 0.7956 | 26.967 | 0.9345
SMSR-CNN | | | | 0.8726 | 0.9011 | 27.754 | 0.9453
FMSI-CNN | | | | 0.9052 | 0.9415 | 26.863 | 0.9467
DRCA-CNN | | | | 0.9078 | 0.9465 | 27.543 | 0.9671
LSTM-based RNN | | | | 0.9756 | 0.9523 | 28.671 | 0.9824
For a better understanding of the progressive improvement achieved by the proposed LSTM-based RNN framework towards tumour region detection, the simulation output is comprehensively compared with (i) the motion-blurred MR image and (ii) a mixture of motion blurring and undersampled images. Figure 3 clearly shows that the suggested framework produced the smallest error difference between motion-free and motion-blurred MR images. That is, it can properly forecast the pre and post frames from motion-blurred MR images, resulting in a minimum error difference, as shown in the second and fourth columns of Figure 3. As a result, motion blurring is decreased and the visual quality of the MR-scanned image is maintained.
[figure(s) omitted; refer to PDF]
4.1. Convergence Behavior
When comparing a motion-free image to a blurred image, an error always occurs. However, it must be thoroughly examined using two key factors: the Wasserstein distance (WD) and the perceptual loss (PL), where WD signifies the actual difference between the input image and the processed image frames. Similarly, PL depicts the variance of pixel values gathered at low-, medium-, and high-level spatial-domain features. If the assessed value is low, the processed image contains few residual faults and the anomaly infection region can be detected efficiently. Figure 4 clearly displays how the corrected error decays exponentially with respect to the number of epochs. As a result, the proposed technique demonstrated stable performance when training the LSTM-based RNN.
[figure(s) omitted; refer to PDF]
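A minimal sketch of how the two convergence criteria could be computed is given below: the Wasserstein distance via SciPy on flattened pixel intensities, and a perceptual-loss surrogate taken as the mean squared difference between VGG16 feature maps. The choice of VGG layer, the grayscale-to-RGB conversion, and the intensity scaling (images assumed normalized to [0, 1]) are assumptions for illustration, not the exact loss definitions of Equations (7) and (8).

import numpy as np
import tensorflow as tf
from scipy.stats import wasserstein_distance

# Truncated VGG16 used as a fixed feature extractor for the perceptual-loss term.
_vgg = tf.keras.applications.VGG16(include_top=False, weights="imagenet")
_features = tf.keras.Model(_vgg.input, _vgg.get_layer("block3_conv3").output)

def wd_and_pl(reference, processed):
    # WD between intensity distributions and a VGG-feature mean-squared perceptual loss.
    wd = wasserstein_distance(reference.ravel(), processed.ravel())
    def to_vgg(img):
        x = np.repeat(img[..., None], 3, axis=-1)[None].astype(np.float32)  # grey -> RGB batch
        return tf.image.resize(x, (224, 224)) * 255.0
    f_ref, f_proc = _features(to_vgg(reference)), _features(to_vgg(processed))
    pl = float(tf.reduce_mean(tf.square(f_ref - f_proc)))
    return wd, pl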
4.2. Reduction of Motion Blurring Effects
The results of the proposed LSTM-based RNN are compared to four state-of-the-art methods: (i) deep cascade of convolution neural network (DC-CNN) [31], (ii) single and multicontrast superresolution convolution neural network (SMSR-CNN) [32], (iii) fusion multiscale information in convolution network (FMSI-CNN) [33], and (iv) deep residual channel attention convolution network (DRCA-CNN) [34, 35]. The SMSR-CNN employs a sensing approach to anticipate many frames without any time lag, with the processes synchronized with each other for the next round of prediction. DC-CNN, FMSI-CNN, and DRCA-CNN generate single-tone superresolution images with a loss in feature information acquisition.
Table 2 compares the mean and standard deviation values of the proposed framework’s prime parameters to those of existing techniques. As a result, the successful prediction of pre and post frames from motion-blurred MR images yields the least error difference.
Table 2
Statistical comparison of prime parameters on proposed framework with existing methods through mean and standard deviation values, respectively.
Schemes | Time (sec) | Memory (megabytes) | Sp | St | SSIM | PSNR |
DC-CNN | ||||||
SMSR-CNN | ||||||
FMSI-CNN | ||||||
DRCA-CNN | ||||||
LSTM-based RNN |
Figures 5 and 6 clearly show that the proposed LSTM-based RNN achieves the greatest SSIM and PSNR scores due to the quality prediction of frames through k-space undersampling. Figure 6 compares the error outcomes of reconstructed frames from the prediction modules of the proposed LSTM-based RNN with the DC-CNN, SMSR-CNN, FMSI-CNN, and DRCA-CNN approaches.
[figure(s) omitted; refer to PDF]
It is clear that the boundaries of reconstructed frames from other existing methods have lost some detail of the region textures, whereas the suggested framework effectively preserves the boundary information and thus allows the radiologist to identify the anomaly appropriately [36].
4.3. Missing Frame Estimation
Figure 7 depicts the frame prediction after the interpolation process has been included. The proposed LSTM-based RNN architecture effectively recognizes subtle features from motion-blurred MR scanning sequences and reconstructs frames based on sampling time instants, yielding accurate mimics of motion-free MR-scanned images [37]. It has enabled accurate identification of the affected region's borders and reduced the frequency of false detections due to faulty tumour segmentation.
[figure(s) omitted; refer to PDF]
4.4. ATDC 2020 Tumour Dataset
A detailed examination of the ATDC 2020 dataset can be performed to discover the SSIM and PSNR between the detected tumour location and its ground truth images. It is clear that the proposed LSTM-based RNN framework performed accurate tumour region recognition by improving the visual quality of segmented images for the maximum number of images available in the ATDC 2020 dataset. This is clearly shown in Figure 8 (C1-C4) for the various image planes (axial, sagittal, and coronal). It depicts the error difference between the anticipated frame and the ground-truth MR frame without blur effects. The proposed LSTM-based RNN framework was trained using high and low brain images from the ATDC 2020 dataset. The experimental results show that the proposed framework is capable of effective and resilient frame prediction and reconstruction on motion-blurred MR-scanned images, which can be confirmed against ground truth images.
[figure(s) omitted; refer to PDF]
The LSTM-based RNN framework, producing estimated frame output from the motion-blurred image, has maintained a proper tradeoff between the edema portion and the tumour region, which in turn supports a sound diagnosis of almost all kinds of tumour complaints. This is clearly seen in the fifth column of Figure 8 (C3–C4). SSIM
5. Conclusion
In this paper, a novel RNN framework based on long short-term memory (LSTM) is proposed to remove motion artefacts in dynamic multicontrast MR images using a registration model. It includes three key modules: an encoder-decoder module, an LSTM module, and a multiscale layer module. The proposed work focuses on extracting precise spatial and temporal features from image frames and enhancing the feature information with a multiscale layer for accurate prediction. As a result, it was able to locate the tumour region accurately by improving the visual quality of a motion-blurred image using the registration model. The simulation output is thoroughly compared to (i) a motion-blurred MR image and (ii) a mixture of motion-blurred and undersampled images. To gain a better understanding of the progressive improvement achieved by the proposed LSTM-based RNN architecture, it is compared to existing approaches such as DC-CNN, SMSR-CNN, FMSI-CNN, and DRCA-CNN for tumour region detection. That is, it can correctly forecast the pre and post frames from motion-blurred MR images and hence produce the smallest error difference. As a result, it has achieved a high SSIM
[1] V. Bhavana, H. K. Krishnappa, "Multi-modality medical image fusion using discrete wavelet transform," Procedia Computer Science, vol. 70, pp. 625-631, DOI: 10.1016/j.procs.2015.10.057, 2015.
[2] B. Biswas, A. Chakrabarti, K. N. Dey, "Medical image fusion using regional statistics of shift-invariant shearlet domain," IEEE International Conference on Biomedical Engineering and Sciences, DOI: 10.1109/IECBES.2014.7047607.
[3] Z. Ma, J. M. R. S. Tavares, R. N. Jorge, T. Mascarenhas, "A review of algorithms for medical image segmentation and their applications to the female pelvic cavity," Computer Methods in Biomechanics and Biomedical Engineering, vol. 13 no. 2, pp. 235-246, DOI: 10.1080/10255840903131878, 2010.
[4] M. G. Linguraru, W. J. Richbourg, J. Liu, J. M. Watt, V. Pamulapati, S. Wang, R. M. Summers, "Tumor burden analysis on computed tomography by automated liver and tumor segmentation," IEEE Transactions on Medical Imaging, vol. 31 no. 10, pp. 1965-1976, DOI: 10.1109/TMI.2012.2211887, 2012.
[5] T. Tong, R. Wolz, Z. Wang, Q. Gao, K. Misawa, M. Fujiwara, K. Mori, J. V. Hajnal, D. Rueckert, "Discriminative dictionary learning for abdominal multi-organ segmentation," Medical Image Analysis, vol. 23 no. 1, pp. 92-104, DOI: 10.1016/j.media.2015.04.015, 2015.
[6] X. Lu, J. Wu, X. Ren, B. Zhang, Y. Li, "The study and application of the improved region growing algorithm for liver segmentation," Optik, vol. 125 no. 9, pp. 2142-2147, DOI: 10.1016/j.ijleo.2013.10.049, 2014.
[7] J. Dehmeshki, H. Amin, M. Valdivieso, X. Ye, "Segmentation of pulmonary nodules in thoracic CT scans: a region growing approach," IEEE Transactions on Medical Imaging, vol. 27 no. 4, pp. 467-480, DOI: 10.1109/TMI.2007.907555, 2008.
[8] J. Song, C. Yang, L. Fan, K. Wang, F. Yang, S. Liu, J. Tian, "Lung lesion extraction using a toboggan based growing automatic segmentation approach," IEEE Transactions on Medical Imaging, vol. 35 no. 1, pp. 337-353, DOI: 10.1109/TMI.2015.2474119, 2016.
[9] S. Nasser, R. Alkhaldi, G. Vert, "A modified fuzzy K-means clustering using expectation maximization," IEEE International Conference on Fuzzy Systems, pp. 231-235, DOI: 10.1109/FUZZY.2006.1681719.
[10] Y. Wang, L. Chen, "Multi-view fuzzy clustering with minimax optimization for effective clustering of data from multiple sources," Expert Systems with Applications, vol. 72, pp. 457-466, DOI: 10.1016/j.eswa.2016.10.006, 2017.
[11] J. Aparajeeta, P. K. Nanda, N. Das, "Modified possibilistic fuzzy C-means algorithms for segmentation of magnetic resonance image," Applied Soft Computing, vol. 41, pp. 104-119, DOI: 10.1016/j.asoc.2015.12.003, 2016.
[12] V. Maheswari, G. V. Uma, S. Viswanadha Raju, "Local directional maximum edge patterns for facial expression recognition," Journal of Ambient Intelligence and Humanized Computing, vol. 12 no. 5, pp. 4775-4783, DOI: 10.1007/s12652-020-01886-3, 2021.
[13] M. Berbar, "Hybrid methods for feature extraction for breast masses classification," Egyptian Informatics Journal, vol. 19 no. 1, pp. 63-73, DOI: 10.1016/j.eij.2017.08.001, 2018.
[14] Ş. Öztürka, B. Akdemir, "Application of feature extraction and classification methods for histopathological image using GLCM, LBP, LBGLCM, GLRLM and SFTA," Procedia Computer Science, vol. 132, pp. 40-46, DOI: 10.1016/j.procs.2018.05.057, 2018.
[15] N. Aggarwal, R. K. Agrawal, "First and second order statistics features for classification of magnetic resonance brain images," Journal of Signal and Information Processing, vol. 3 no. 2, pp. 146-153, DOI: 10.4236/jsip.2012.32019, 2012.
[16] V. U. Maheswari, G. V. Prasad, S. Viswanadha Raju, "Facial expression analysis using local directional stigma mean patterns and convolutional neural networks," International Journal of Knowledge-based and Intelligent Engineering Systems, vol. 25 no. 1, pp. 119-128, DOI: 10.3233/KES-210057, 2021.
[17] S. R. Kannan, R. Devi, S. Ramathilagam, K. Takezawa, "Effective FCM noise clustering algorithms in medical images," Computers in Biology and Medicine, vol. 43 no. 2, pp. 73-83, DOI: 10.1016/j.compbiomed.2012.10.002, 2013.
[18] M. Gong, Y. Liang, J. Shi, W. Ma, J. Ma, "Fuzzy C-means clustering with local information and kernel metric for image segmentation," IEEE Transactions on Image Processing, vol. 22 no. 2, pp. 573-584, DOI: 10.1109/TIP.2012.2219547, 2013.
[19] M. Egmont-Petersen, D. De Ridder, H. Handels, "Image processing with neural networks-a review," Pattern Recognition, vol. 35 no. 10, pp. 2279-2301, DOI: 10.1016/S0031-3203(01)00178-9, 2002.
[20] J. Kuruvilla, K. Gunavathi, "Lung cancer classification using neural networks for CT images," Computer Methods and Programs in Biomedicine, vol. 113 no. 1, pp. 202-209, DOI: 10.1016/j.cmpb.2013.10.011, 2014.
[21] H. Masoumi, A. Behrad, M. A. Pourmina, A. Roosta, "Automatic liver segmentation in MRI images using an iterative watershed algorithm and artificial neural network," Biomedical Signal Processing and Control, vol. 7 no. 5, pp. 429-437, DOI: 10.1016/j.bspc.2012.01.002, 2012.
[22] M. N. Ahmed, S. M. Yamany, N. Mohamed, A. A. Farag, T. Moriarty, "A modified fuzzy C-means algorithm for bias field estimation and segmentation of MRI data," IEEE Transactions on Medical Imaging, vol. 21 no. 3, pp. 193-199, DOI: 10.1109/42.996338, 2002.
[23] J. Jiang, P. Trundle, J. Ren, "Medical image analysis with artificial neural networks," Computerized Medical Imaging and Graphics, vol. 34 no. 8, pp. 617-631, DOI: 10.1016/j.compmedimag.2010.07.003, 2010.
[24] X. Yunfeng, P. Fan, L. Yuan, "A simple and efficient artificial bee colony algorithm," Mathematical Problems in Engineering, vol. 2013,DOI: 10.1155/2013/526315, 2013.
[25] G. A. Bello, T. J. Dawes, J. Duan, C. Biffi, A. De Marvao, L. S. Howard, J. S. R. Gibbs, M. R. Wilkins, S. A. Cook, D. Rueckert, D. P. O’regan, "Deep-learning cardiac motion analysis for human survival prediction," Nature Machine Intelligence, vol. 1 no. 2, pp. 95-104, DOI: 10.1038/s42256-019-0019-2, 2019.
[26] J. Johnson, A. Alahi, L. Fei-Fei, "Perceptual losses for real-time style transfer and super-resolution," Computer Vision – ECCV 2016. ECCV 2016, vol. 9906, pp. 694-711, DOI: 10.1007/978-3-319-46475-6_43, 2016.
[27] K. Simonyan, A. Zisserman, "Very deep convolutional networks for large-scale image recognition," . 2014, https://arxiv.org/abs/1409.1556
[28] S. Shitharth, G. B. Mohammad, K. Ramana, V. Bhaskar, Prediction of COVID-19 Wide Spread in India using Time Series Forecasting Techniques, 2021.
[29] O. Bernard, A. Lalande, C. Zotti, F. Cervenansky, X. Yang, P. A. Heng, I. Cetin, K. Lekadir, O. Camara, M. A. G. Ballester, G. Sanroma, "Deep learning techniques for automatic MRI cardiac multi-structures segmentation and diagnosis: is the problem solved?," IEEE Transactions on Medical Imaging, vol. 37 no. 11, pp. 2514-2525, DOI: 10.1109/TMI.2018.2837502, 2018.
[30] S. Xingjian, Z. Chen, H. Wang, D.-Y. Yeung, W.-K. Wong, W.-C. Woo, "Convolutional LSTM network: a machine learning approach for precipitation now casting," Advances in Neural Information Processing Systems, vol. 28, pp. 802-810, 2015.
[31] J. Schlemper, J. Caballero, J. V. Hajnal, A. N. Price, D. Rueckert, "A deep cascade of convolutional neural networks for dynamic MR image reconstruction," IEEE Transactions on Medical Imaging, vol. 37 no. 2, pp. 491-503, DOI: 10.1109/TMI.2017.2760978, 2017.
[32] S. Mahalakshmi, T. Velmurugan, "Detection of brain tumor by particle swarm optimization using image segmentation," Indian Journal of Science and Technology, vol. 8 no. 22,DOI: 10.17485/ijst/2015/v8i22/79092, 2015.
[33] S. H. Ong, N. C. Yeo, K. H. Lee, Y. V. Venkatesh, D. M. Cao, "Segmentation of color images using a two-stage self-organizing network," Image and Vision Computing, vol. 20 no. 4, pp. 279-289, DOI: 10.1016/S0262-8856(02)00021-5, 2002.
[34] S. Bhattacharya, P. K. R. Maddikunta, Q.-V. Pham, T. R. Gadekallu, C. L. Chowdhary, M. Alazab, M. J. Piran, "Deep learning and medical image processing for coronavirus (COVID-19) pandemic: a survey," Sustainable Cities and Society, vol. 65, article 102589,DOI: 10.1016/j.scs.2020.102589, 2021.
[35] A. Revathi, R. Kaladevi, K. Ramana, R. H. Jhaveri, M. R. Kumar, M. S. P. Kumar, "Early detection of cognitive decline using machine learning algorithm and cognitive ability test," Security and Communication Networks, vol. 2022,DOI: 10.1155/2022/4190023, 2022.
[36] O. Obulesu, S. Kallam, G. Dhiman, R. Patan, R. Kadiyala, Y. Raparthi, S. Kautish, "Adaptive diagnosis of lung cancer by deep learning classification using Wilcoxon gain and generator," Journal of Healthcare Engineering, vol. 2021,DOI: 10.1155/2021/5912051, 2021.
[37] S. Banik, N. Sharma, M. Mangla, S. N. Mohanty, S. Shitharth, "LSTM based decision support system for swing trading in stock market," Knowledge-Based Systems, vol. 239, article 107994,DOI: 10.1016/j.knosys.2021.107994, 2022.
Copyright © 2022 Shahanaz Ayub et al. This work is licensed under http://creativecommons.org/licenses/by/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
Abstract
Today, many people under the age of 10 are being examined for brain-related issues, including tumours, without displaying any symptoms. It is not unusual for children to develop brain-related concerns such as tumours and central nervous system disorders, which may affect 15% of the population. Medical experts believe that irregular eating habits (junk food) and the consumption of pesticide-tainted fruits and vegetables are to blame. The human body is naturally resistant to harmful agents, but only up to a point. If that limit is exceeded, a cell manipulation process is automatically initiated that can remove dangerous inactive tissue from the cell membrane, which later grows into a tumour blockage in the human body. Thus, the adoption of an advanced computer-based diagnostic system is highly recommended in order to generate visually enhanced images for anomaly identification and infectious tissue segmentation. In most cases, an MR image is chosen since it is easier to distinguish between affected and nonaffected tissue. Conventional convolution neural network (CCNN) mapping and feature extraction are difficult because of the vast volume of data. In addition, the MRI scanning process takes a long time to obtain the diverse positions needed for anomaly identification. Aside from the discomfort, the patient may exhibit motion abnormalities. A recurrent neural network (RNN) classifies tumour regions into several isolated portions much faster and more accurately, so that this can be prevented. To remove motion artefacts from dynamic multicontrast MR images, a novel long short-term memory- (LSTM-) based RNN framework is introduced in this research. With this method, the visual quality of the MR image is improved over CCNN while simultaneously mapping a larger volume and extracting more subtle characteristics than CCNN can. The results are compared with DC-CNN, SMSR-CNN, FMSI-CNN, and DRCA-CNN. For both low and high signal-to-noise ratios (SNRs), the suggested LSTM-based RNN framework has attained reasonable feature intelligibility. In comparison to previous approaches, it requires less computation and achieves higher accuracy when detecting infected portions.
Details
1 Electronics and Communication Engineering Department, Bundelkhand Institute of Engineering and Technology, Jhansi, Uttar Pradesh, India
2 School of Computer & Engineering, Vellore Institute of Technology, Chennai Campus, India
3 Department of Computer Science & Engineering, Kebri Dehar University, Kebri Dehar, Ethiopia
4 Department of Information Systems, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, Saudi Arabia
5 Department of Computer Applications, Annamacharya Institute of Technology and Sciences, Rajampet, 516 126 Andhra Pradesh, India