Introduction
The human Central Nervous System (CNS) comprises the brain and spinal cord, controlling most body activities such as decision-making, processing, coordination, and sending instructions to other body parts. The brain has a complex anatomical structure, and disorders affecting the CNS include traumatic brain injury, multiple sclerosis, stroke, developmental anomalies, and brain tumors. A brain tumor is an abnormal growth of cells inside the brain or neighboring tissues. It can originate in the brain or spread from other body parts (secondary or metastatic brain tumor). Symptoms vary based on location, size, and growth rate and may include persistent headaches, vision or hearing changes, balance issues, cognitive problems, memory loss, and nausea/vomiting. In 2020, it was the 10th most common tumor type among Indians [1]. The Indian Council of Medical Research (ICMR) estimates around 30,000 new brain tumor cases and over 24,000 brain tumor-related deaths annually in India, with incidence rates rising [2]. While brain tumors can affect people of all ages, certain types are more common in specific age groups, affecting both children and adults. Diagnosis typically involves imaging tests such as MRI, Computed Tomography (CT), and Positron Emission Tomography (PET) scans for comprehensive brain images. MRI scans are preferred for brain tumor imaging due to their high resolution and tissue differentiation capability. A biopsy may also be needed to determine the tumor type and grade. Timely and precise detection of brain tumors is essential to ensure effective treatment. Researchers have developed automatic classification models using Machine Learning (ML) and DL techniques to detect tumors [3]. CNN, a well-known DL architecture, shows strong performance in medical image classification tasks [4]. Transfer Learning (TL) methods have also been utilized extensively to improve brain tumor classification [5, 6].
In this context, the choice of activation function is crucial for learning complex patterns from data. While ReLU is the most widely used activation function, it suffers from the "dying ReLU" problem; PReLU is an extension of ReLU that can mitigate this problem [7]. Another activation function, Swish, developed by P. Ramachandran et al. [8], has shown promising results with CNNs on ImageNet data and can be utilized to mitigate the issues associated with ReLU and PReLU.
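The three activation functions discussed above can be sketched as follows; this is a minimal NumPy illustration (the parameter values `alpha` and `beta` are illustrative defaults, not values taken from the cited works, and in practice PReLU's slope is a learnable parameter):

```python
import numpy as np

def relu(x):
    # ReLU zeroes all negative inputs; a neuron stuck in the negative
    # regime receives zero gradient (the "dying ReLU" problem).
    return np.maximum(0.0, x)

def prelu(x, alpha=0.25):
    # PReLU keeps a small (learnable, in a real network) slope `alpha`
    # on the negative side, so gradients still flow there.
    return np.where(x > 0, x, alpha * x)

def swish(x, beta=1.0):
    # Swish: x * sigmoid(beta * x); smooth and non-monotonic,
    # approaching identity for large positive x and 0 for large negative x.
    return x / (1.0 + np.exp(-beta * x))
```

For example, for a negative pre-activation of -2, ReLU outputs 0 (blocking the gradient), while PReLU with `alpha=0.25` outputs -0.5 and Swish outputs a small negative value, both preserving a nonzero gradient path.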
The significant contributions of our work are summarized below:
A TL framework has been applied to seven...