Abstract
Texture is a significant component used in several content-based image retrieval applications. Any texture classification method aims to map an unknown textured input image to one of the existing texture classes. A wide range of methods for labeling image texture has been proposed. However, evaluating the performance of these methods in the presence of various degradations remains an open area of discussion. Among the various degradation factors, image noise is dominant: it affects the performance of these methods and makes texture classification challenging. It is therefore essential to investigate how these methods behave in the presence of prominent degradation factors such as noise. Applications of Segmentation-based Fractal Texture Features (SFTF) include image classification, texture generation, and medical image analysis. They are particularly beneficial for examining textures with intricate, erratic patterns that are difficult to characterize accurately using conventional statistical techniques. This paper assesses two texture feature extraction methods, based on SFTF and on statistical moment-based texture features, in the presence and absence of Gaussian noise. The SFTF and statistical moment-based handcrafted features are passed to a multilayer feed-forward neural network for classification. These models are evaluated on natural textures from the Kylberg Texture Dataset 1.0. The results show the superiority of segmentation-based fractal analysis over the other approach. The average accuracy rates using SFTF are 99% and 97% in the absence and presence of Gaussian noise, respectively.
1. Introduction
Texture classification, which deals with the discrimination of textures, has been one of the most attractive research areas in the information retrieval domain. In a computer vision system, texture is a critical characteristic through which an image can be recognized, and texture classification therefore plays a vital role in pattern recognition and computer vision. Images can come from medicine, industry, satellite imaging, etc. Since various classifiers exist, the main challenge is designing an effective classifier model with suitable feature extraction from a given textured image [1]. Many classifier models have been proposed with features such as statistics [2], Gabor filters [3], etc. Images are acquired to record or visualize valuable information. However, the captured image may embody a blurred version of the original scene due to inadequacies in the imaging and recording processes. A wide range of degradations must be considered, such as noise, geometrical factors, illumination, color imperfections, and blurring [4]. Among all these, noise is one of the most common degradation factors and can heavily degrade a classifier's performance. In general, image noise is a significant factor that can influence the performance of a texture classification system.
Content-based image retrieval is a dominant research area when millions of images are available for search in a database. Texture-based information retrieval has been used successfully by researchers together with other features such as shape and color [5, 6]. There is a variety of algorithms for texture feature detection; a few prominent methods are discussed here. The extraction of features such as contrast, uniformity, and entropy from the Gray Level Co-occurrence Matrix (GLCM) has been suggested by various authors over the last decades. Rotation-invariant Local Binary Pattern (LBP) features have been offered for texture classification [1]. LBP has proven to be a computationally efficient, high-performance texture feature, but it is susceptible to noise and incapable of extracting macrostructure textures. Many authors have utilized local image features, for example boundaries, edges, and blobs, for texture categorization [7]. The Scale Invariant Feature Transform (SIFT) has been offered for different applications [8]; SIFT descriptors are invariant to scaling, translation, and rotation. Speeded-Up Robust Features (SURF) are also used to identify local image features [9]. SURF features are calculated in terms of key points, established by describing the intensity distribution of pixels. Region-based methods are used to partition the image pixels into groups conforming to logical image qualities such as illumination, color, edges, sharpness, and texture [10]. Local Binary Patterns have emerged as the most prominent and extensively studied local texture descriptors. Pan et al. [9] have proposed a low-dimensional FbLBP that can be quickly constructed without needing parameter tuning for different databases. In that work, four texture databases, namely CUReT, Outex, XU_HR, and UIUC, were used, and the proposed FbLBP-based method achieved more than 10% improvement relative to conventional LBP and a 1-3% improvement over the best classification accuracy among other LBP variants. Dong et al. [11, 12] have suggested a local descriptor for texture classification termed the locally directional and extremal pattern (LDEP). LDEP fetches the extremum location pattern (ELP), extremum compression pattern (ECP), extremum difference pattern (EDP), and directional local difference count pattern (DLDCP) from the sampling points, together with the neighbor's extremum related local pattern (NERLP). The experiment was conducted on six texture databases, namely Brodatz, Prague, CUReT, Kth-tips2-a, Stex, and UIUC. The results demonstrated that the proposed LDEP descriptor can attain classification accuracy comparable to other prominent texture classification techniques under diverse conditions such as rotation, viewpoint variation, noise, scale variation, and illumination.
In the last decade, deep neural networks have gained popularity due to their ability to learn in supervised and unsupervised modes. Various authors have offered notable contributions to texture classification using different deep learning models such as the Convolutional Neural Network (ConvNet) [13], Capsule Network (CapsNet) [14], fused ConvNet (TexFusionNet) [15], Contourlet Convolutional Neural Network (C-CNN) [16], Bilinear Convolutional Neural Network (BCNN) [17], and Texture CNN (T-CNN) [18]. Researchers have provided many solutions for texture classification using deep learning models in the past decade. However, training a deep convolutional neural network from scratch is challenging, since it requires a large amount of labeled training data and considerable experience to ensure appropriate convergence. On the other hand, methods based on feature engineering must be evaluated in the presence of noise and other degradation factors.
In this work, the impact of noise is explored once the descriptor parameters have been tuned for an image dataset. Additive white Gaussian noise is employed as the noise model: a sample drawn from a Gaussian distribution is added to each pixel's intensity. This noise model is well suited to describing the thermal noise of CCD and CMOS image sensors. This paper explores the performance of two different texture classification models in the absence and presence of Gaussian noise. These classification models are based on statistical moment-based features and on segmentation-based fractal texture features (SFTF). These features are used to design the classification models with a feed-forward neural network. The contributions of the work are to:
* Offer a texture classification method based on segmentation-based fractal texture features.
* Compare the proposed methods with statistical moment-based features.
* Evaluate the methods' robustness against significant Gaussian noise in texture images.
The remaining paper is structured into four sections. Material and methods are presented in Section 2. It is followed by Sections 3 and 4, which contain results and discussion, respectively. Conclusive remarks and future scope are given in Section 5.
2. Materials and methods
2.1 Models
Mathematical models are essential for real-world image formation and degradation processes. The following subsections discuss the image degradation model and noise model.
2.1.1 Image degradation model.
An input intensity distribution is transformed into an output intensity distribution during image formation. The input distribution signifies the true (ideal) image, which is not directly accessible but must be recovered, or at least approximated, by suitably compensating for the degradation. In a two-dimensional linear imaging system, the association between the input intensity distribution f(x, y) and the measured output intensity distribution g(x, y) is denoted as a linear superposition integral, given by Eq 1 [19].
g(x, y) = ∬ h(x − x′, y − y′) f(x′, y′) dx′ dy′ + η(x, y)  (1)
Here, the term h(x−x′,y−y′) is the linear point spread function, also called impulse response, and η(x,y) is additive noise. The above equation can be expressed as convolution for Linear Shift Invariant (LSI) systems, as mentioned below in Eq 2.
g(x, y) = h(x, y) ∗ f(x, y) + η(x, y)  (2)
Here, g(x,y) is the blurred image, f(x,y) is the uncorrupted true image, h(x,y) is the point spread function (PSF) that produced the blurring, and η(x,y) is the additive noise. All these terms are in the spatial domain. The convolution operator in the spatial domain is replaced by the multiplication operator in the frequency domain, so Eq 2 can be written as given in Eq 3.
G(u, v) = H(u, v) F(u, v) + N(u, v)  (3)
Here, G(u,v) is the blurred image, F(u,v) is the uncorrupted original image, H(u,v) is the blurring function, and N(u,v) is the additive noise term. All these terms are represented in frequency-domain notation.
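For illustration, a minimal NumPy sketch of this frequency-domain degradation model is given below. It is our own example, not the paper's Matlab implementation; the function name apply_degradation and the assumption of a pre-padded, centered PSF are ours.

```python
import numpy as np

def apply_degradation(f, h, sigma, rng=np.random.default_rng(0)):
    """Simulate g = h * f + eta (Eq 2) via the frequency domain (Eq 3).

    f     : uncorrupted true image (2-D float array)
    h     : point spread function, zero-padded to f's shape and centered
    sigma : standard deviation of the additive Gaussian noise eta
    """
    F = np.fft.fft2(f)                      # F(u, v)
    H = np.fft.fft2(np.fft.ifftshift(h))    # H(u, v); ifftshift moves the PSF center to the origin
    blurred = np.real(np.fft.ifft2(H * F))  # h * f by the convolution theorem
    eta = rng.normal(0.0, sigma, f.shape)   # additive noise term eta(x, y)
    return blurred + eta
```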
2.1.2 Image noise model.
The image degradation model discussed in the earlier section contains an additive noise term, which represents one of several types of noise. This noise may be introduced at the time of image acquisition. The additive term appropriately models the noise in Charge-Coupled Device (CCD) cameras. When a CCD camera is used for imaging, multiple noise sources are present in the surroundings. One of these sources, Gaussian noise, is modeled in this work. The Gaussian distribution models this noise as given in Eq 4 [20].
p(z) = (1 / (σ√(2π))) exp(−(z − z̄)² / (2σ²))  (4)
Here, z denotes gray-level intensity, z̄ is the mean of z, and σ is the standard deviation of the Gaussian distribution.
In this work, the Peak Signal-to-Noise Ratio (PSNR) metric is used to define the noise variance added to blurred images.
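Since zero-mean Gaussian noise of variance σ² produces a mean squared error of approximately σ², a target PSNR fixes σ directly via PSNR = 10 log10(peak² / σ²). A small sketch of this relationship (illustrative only; the function names are ours):

```python
import numpy as np

def sigma_for_psnr(psnr_db, peak=255.0):
    """Standard deviation of zero-mean Gaussian noise that yields the
    target PSNR, from PSNR = 10*log10(peak**2 / sigma**2)."""
    return peak / (10.0 ** (psnr_db / 20.0))

def add_gaussian_noise(image, psnr_db, rng=np.random.default_rng(0)):
    """Corrupt an 8-bit image with Gaussian noise at the given PSNR."""
    noisy = image + rng.normal(0.0, sigma_for_psnr(psnr_db), image.shape)
    return np.clip(noisy, 0, 255)

# The 40 dB setting used later in this paper corresponds to
# sigma = 255 / 10**2 = 2.55 gray levels.
```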
2.2 Segmentation-based fractal texture features
Fractal dimension is a widely used texture measure. It indicates the self-similarity of an object pattern at several scales. If a bounded set S is the union of t distinct copies of itself, each scaled down by a ratio p, then S is self-similar. The fractal dimension (fd) is defined by Eq 5:

fd = log t / log(1/p)  (5)

For example, the Sierpinski triangle is the union of t = 3 copies of itself scaled by p = 1/2, giving fd = log 3 / log 2 ≈ 1.585. The fractal dimension estimates the roughness and irregularity of the object's surface; a higher value indicates a coarser texture.
The Segmentation-based Fractal Texture Analysis (SFTA) algorithm is an efficient method for texture feature extraction. It consists of two main steps. In the first step, the gray-level image is decomposed into 2n binary (two-level) images by the Two-Threshold Binary Decomposition (TTBD) algorithm, where n denotes the number of thresholds. In the second step, the decomposed binary images are used to calculate the fractal dimension, area, and mean gray level. These properties make SFTA useful in texture classification and object detection. The algorithm is described below [21]:
Step 1: Th := MultiLevelOtsu(I, n)  // compute n thresholds for image I
Step 2: ThC := {(thi, thi+1) : thi, thi+1 ∈ Th and i ∈ [1, ..., |Th| − 1]}  // pairs of contiguous thresholds
Step 3: ThP := {(thi, lmax) : thi ∈ Th and i ∈ [1, ..., |Th|]}  // each threshold paired with the maximum gray level lmax
Step 4: j := 1
for i := 1 : n
IC := TTS(I, t), where t ∈ ThC  // two-threshold segmentation (binary decomposition) of input image I
IP := TTS(I, t), where t ∈ ThP
∂C(x, y) := FB(IC)  // border image of IC
∂P(x, y) := FB(IP)  // border image of IP
// store the fractal dimension, mean gray level, and area for both ∂C/IC and ∂P/IP
j := j + 6  // six features per iteration; with n = 4 this yields the 24 features used in Section 3
end for
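For concreteness, a minimal Python sketch of this decomposition and feature extraction follows. It is our reconstruction of the SFTA pipeline of [21], not the authors' Matlab code; scikit-image's threshold_multiotsu and SciPy's binary_erosion stand in for the multi-level Otsu and border-finding (FB) steps, and the box-counting sizes are arbitrary choices.

```python
import numpy as np
from scipy.ndimage import binary_erosion
from skimage.filters import threshold_multiotsu

def box_counting_dimension(border, sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal dimension of a binary border image by box counting."""
    counts = []
    for k in sizes:
        # Sum border pixels inside each k x k box, then count the non-empty boxes.
        s = np.add.reduceat(
            np.add.reduceat(border, np.arange(0, border.shape[0], k), axis=0),
            np.arange(0, border.shape[1], k), axis=1)
        counts.append(max(np.count_nonzero(s), 1))
    # The slope of log(count) versus log(1/size) estimates the dimension (cf. Eq 5).
    return np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)[0]

def sftf_features(image, n=4):
    """Fractal dimension, mean gray level, and area for each binary image
    produced by two-threshold binary decomposition (TTBD)."""
    th = list(threshold_multiotsu(image, classes=n + 1))  # n Otsu thresholds
    th.append(int(image.max()))                           # append lmax
    pairs = [(th[i], th[i + 1]) for i in range(len(th) - 1)]  # ThC: contiguous pairs
    pairs += [(t, th[-1]) for t in th[:-1]]                   # ThP: (thi, lmax) pairs
    features = []
    for lo, hi in pairs:
        binary = (image > lo) & (image <= hi)        # TTS(I, t)
        border = binary & ~binary_erosion(binary)    # FB: border pixels of the region
        features += [box_counting_dimension(border),
                     float(image[binary].mean()) if binary.any() else 0.0,
                     int(binary.sum())]
    # 2n binary images x 3 features each; n = 4 gives the 24 features used in Section 3.
    return np.asarray(features)
```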
2.3 Statistical moment-based texture features
Statistical features capture parameters such as the mean, skewness, standard deviation, and entropy of textured images. Let λi be a discrete random variable signifying the gray levels in an image, and let p(λi) be its Probability Density Function (PDF); the histogram indicates the probability of occurrence of each value λi as measured by p(λi). The feature set of every histogram has the following six features, defined in Eqs (6)-(11) [22].
* Mean: calculates the average gray level.

m = Σi λi p(λi)  (6)

* Standard deviation: calculates the mean contrast.

σ = ( Σi (λi − m)² p(λi) )^(1/2)  (7)

* Smoothness: calculates the relative smoothness of the gray intensities of a particular segment.

R = 1 − 1 / (1 + σ²)  (8)

* Skewness: computes the symmetry of the distribution.

μ3 = Σi (λi − m)³ p(λi)  (9)

* Uniformity: also known as energy.

U = Σi p(λi)²  (10)

* Entropy: a measure of randomness.

e = −Σi p(λi) log2 p(λi)  (11)
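A short NumPy sketch (our illustration; the function name is hypothetical) that computes the six features of Eqs (6)-(11) from the histogram of an 8-bit image:

```python
import numpy as np

def statistical_moment_features(image, levels=256):
    """Six first-order statistical texture features from the gray-level histogram."""
    hist, _ = np.histogram(image, bins=levels, range=(0, levels))
    p = hist / hist.sum()              # p(lambda_i), the normalized histogram
    lam = np.arange(levels)            # gray levels lambda_i
    mean = np.sum(lam * p)                              # Eq 6
    std = np.sqrt(np.sum((lam - mean) ** 2 * p))        # Eq 7
    smoothness = 1.0 - 1.0 / (1.0 + std ** 2)           # Eq 8
    skewness = np.sum((lam - mean) ** 3 * p)            # Eq 9
    uniformity = np.sum(p ** 2)                         # Eq 10 (energy)
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))     # Eq 11
    return np.array([mean, std, smoothness, skewness, uniformity, entropy])
```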
2.4 Multilayer Feed Forward Neural Network and Back Propagation training algorithm
An Artificial Neural Network (ANN), or simply a Neural Network (NN), is a biologically inspired machine-learning algorithm. A typical neural network consists of an interconnected network of simple processing units. It provides a robust data-modeling mechanism used to establish complex input-output relationships. The motivation for the growth of neural network computing arose from the aspiration to design an unconventional method of computing and to understand the processing behind human intelligence. An NN processes data in a manner loosely analogous to the human brain. The network architecture consists of a massive number of organized processing elements termed neurons, which operate in parallel to provide the solution to a particular problem. The neural network model works on the principle of learning by example. These examples must be chosen cautiously; otherwise, the model may work incorrectly, and there is often no way to recognize that the trained model is inadequate until an error occurs [23].
The structure of a neural network consists of neurons as building blocks. A neuron behaves similarly to its biological counterpart: it receives multiple inputs with distinct weights and produces one output, which depends on those inputs. A biological neuron can either 'fire' or not fire (when a neuron fires, it yields a pulse train of a few hundred Hz). In an artificial neuron, 'firing' is typically represented by a constant one and not firing by a zero. A wide range of neural network types is now in use; they differ from one another in their architecture and training algorithms.
There are two principal kinds of learning: supervised and unsupervised. Supervised learning, for example with a Multilayer Perceptron (MLP), implies that the network knows the desired output and that the weight coefficients are adjusted so that the computed and required outcomes are as close as possible. Unsupervised training, for example of a Kohonen neural network, implies that the desired output is not known; the system is given a collection of samples and then left to itself to settle (or not) into a stable state over some number of iterations.
The abilities of the Multilayer Feed-Forward Neural Network (MLFNN) derive from the non-linearities within the neuron units. Every neuron in the network accepts inputs from previous neurons in the network or from the outside world (including an externally supplied bias). The outputs of the neurons are connected to other neurons or to the outside world. Each input is associated with a weight. The neuron computes the weighted sum of its inputs (referred to as the activation), which passes through a non-linear transfer function to produce the actual output of the neuron. The most widely applied activation functions are of the sigmoidal kind. A common Back Propagation Neural Network (BPNN) consists of an input layer, one hidden layer, and an output layer [24].
Among the algorithms used for supervised training, backpropagation has emerged as the most widely used and successful algorithm for training feed-forward networks. In this mode, the actual output of the network is compared with the desired output. The weights, typically initialized randomly, are then adjusted so that the following iteration, or cycle, produces a closer match between the desired and actual outputs. Backpropagation learning operates in two distinct phases: the forward phase and the backward phase. In the forward phase, the input signals propagate through the network layer by layer, eventually producing a response at the network's output. The actual response is compared with the desired response, yielding error signals that are then propagated backward through the network. In this backward phase, the parameters of the network are adjusted to minimize the sum of squared errors. The steps of the BPN algorithm are as follows [25]:
Step 1: Initialize the weights randomly.
Step 2: While the termination condition is false, repeat Steps 3 to 10.
Step 3: For every training pair (x : t), do Steps 4 to 9.
Step 4: Each input unit Xi, i = 1, 2, 3, ..., n, receives the input signal xi and broadcasts it to the next layer.
Step 5: Each hidden-layer neuron Zj, j = 1, 2, 3, ..., p, computes z_inj = voj + Σi xi vij and zj = f(z_inj), then broadcasts zj to the next layer, where voj is the bias on the jth hidden unit.
Step 6: Each output-layer neuron Yk, k = 1, 2, ..., m, computes y_ink = wok + Σj zj wjk and yk = f(y_ink).
Step 7: Calculate δk = (tk − yk) f′(y_ink) for every output neuron and the weight correction term Δwjk = α δk zj, where δk is the portion of the error-correction weight adjustment for wjk due to the error at output unit yk, which is backpropagated to the hidden units, and α is the learning rate.
Step 8: For every hidden-layer neuron, compute δ_inj = Σk δk wjk, δj = δ_inj f′(z_inj), and Δvij = α δj xi, where δj is the portion of the error adjustment on weight vij owing to the backpropagation of error to hidden unit zj.
Step 9: Update the weights: wjk(new) = wjk(old) + Δwjk and vij(new) = vij(old) + Δvij.
Step 10: Stop once a specified error level is reached.
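The steps above translate directly into code. The following compact NumPy sketch runs Steps 4-9 for every training pair in one epoch of a single-hidden-layer network with sigmoid activations; it is a didactic illustration, not the Matlab toolbox implementation used in this work.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_epoch(X, T, V, W, alpha=0.1):
    """One epoch of backpropagation (Steps 4-9).

    X : (samples, n) inputs          T : (samples, m) targets
    V : (n + 1, p) input-to-hidden weights; row 0 holds the biases voj
    W : (p + 1, m) hidden-to-output weights; row 0 holds the biases wok
    """
    for x, t in zip(X, T):
        # Steps 4-5: forward pass through the hidden layer
        z = sigmoid(V[0] + x @ V[1:])        # z_j = f(voj + sum_i x_i v_ij)
        # Step 6: forward pass through the output layer
        y = sigmoid(W[0] + z @ W[1:])        # y_k = f(wok + sum_j z_j w_jk)
        # Step 7: output error term, delta_k = (t_k - y_k) f'(y_ink)
        delta_k = (t - y) * y * (1.0 - y)
        # Step 8: backpropagated hidden error, delta_j = (sum_k delta_k w_jk) f'(z_inj)
        delta_j = (delta_k @ W[1:].T) * z * (1.0 - z)
        # Step 9: weight updates with learning rate alpha
        W[0] += alpha * delta_k
        W[1:] += alpha * np.outer(z, delta_k)
        V[0] += alpha * delta_j
        V[1:] += alpha * np.outer(x, delta_j)
    return V, W
```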
In the case of supervised learning, the artificial neural network undergoes training before application. During training, input and output data are provided to the network; together they form the training data set. Training sets must be large enough to contain all the required information. The test data set is applied to the network after training finishes. Testing is crucial to validate network performance and to understand the behavior of the trained network on unseen data. If a trained network does not produce realistic outputs, it indicates that the network has not generalized.
2.5 Proposed texture classification process
To carry out the texture classification, features are extracted from input images. Before feature extraction, images are resized. These features are passed to the neural network classification model for learning and labeling. The overall process is represented in Fig 1.
[Figure omitted. See PDF.]
2.6 Image dataset
The classifier models are evaluated on natural textures from the Kylberg Texture Dataset v. 1.0 [26]; Fig 2 shows examples of texture images from this dataset. The dataset comprises 28 classes of natural textures, which are macro photographs of real-world surfaces. Each class has 1920 gray-scale patches normalized to a mean value of 127 and a standard deviation of 40. The patches have a resolution of 576×576 pixels and are resized to 256×256 pixels for feature extraction. To conduct the experiments in the presence of noise, Gaussian noise at 40 dB PSNR is synthetically introduced into all images. The work is implemented using the image processing and neural network toolboxes available in Matlab 6.5.
[Figure omitted. See PDF.]
3. Experiment & results analysis
A feed-forward neural network with backpropagation as the training algorithm is used to build the classification models. The extracted features form the training dataset. The training and testing feature sets are standardized to the [0, 1] range, while the output class is assigned logical zero for minimum probability and logical one for maximum likelihood. The sigmoid function is used as the transfer function for the hidden layer. The number of neurons in the input layer equals the number of features extracted from the image dataset: twenty-four for the first model with SFTF features and six for statistical texture features. Training and testing are carried out with different numbers of hidden layers to find the optimum count; one hidden layer comprising 10 neurons gave the best performance in the final architecture. Twenty-eight neurons are used in the output layer, corresponding to the number of classes in the image database. To prevent overfitting, early stopping is used: the data is divided into a training set with a 50% share, a validation set with a 20% share, and a test set with a 30% share. After each epoch, the validation error is recorded. Once overfitting begins, the validation error starts increasing; when it has kept increasing for a specified number of epochs, training is terminated, and the trained model with the minimum validation error is retained.
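A sketch of this early-stopping procedure is given below (it reuses the sigmoid and train_epoch helpers from the sketch in Section 2.4; the patience value, weight initialization, and helper names are our assumptions):

```python
import numpy as np

def predict(X, V, W):
    """Forward pass for the single-hidden-layer network of Section 2.4."""
    z = sigmoid(V[0] + X @ V[1:])
    return sigmoid(W[0] + z @ W[1:])

def train_with_early_stopping(X, T, patience=6, max_epochs=1000, seed=0):
    """50/20/30 train/validation/test split with validation-based early stopping.
    X holds feature vectors scaled to [0, 1]; T holds one-hot targets (28 classes)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_tr, n_va = int(0.5 * len(X)), int(0.2 * len(X))
    tr, va = idx[:n_tr], idx[n_tr:n_tr + n_va]       # the remaining 30% is the test set
    V = rng.normal(0.0, 0.1, (X.shape[1] + 1, 10))   # 10 hidden neurons
    W = rng.normal(0.0, 0.1, (10 + 1, T.shape[1]))   # 28 output neurons
    best_err, best_V, best_W, worse = np.inf, V.copy(), W.copy(), 0
    for epoch in range(max_epochs):
        V, W = train_epoch(X[tr], T[tr], V, W)
        val_err = np.mean((predict(X[va], V, W) - T[va]) ** 2)  # validation MSE
        if val_err < best_err:
            best_err, best_V, best_W, worse = val_err, V.copy(), W.copy(), 0
        else:
            worse += 1
            if worse >= patience:      # validation error kept rising: stop
                break
    return best_V, best_W              # weights with the minimum validation error
```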
Statistical metrics, namely accuracy, sensitivity, specificity, precision, and F-score, are used to evaluate the classification model. The explanation of each metric is as follows [27, 28]:
Precision: the percentage of positive classifications that are actually positive.
Sensitivity: the percentage of actual positives that are correctly predicted; it is also called recall.
Specificity: the percentage of actual negatives that are correctly predicted; it is also called selectivity.
F-score: the harmonic mean of precision and recall.
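These metrics follow directly from per-class confusion counts. A small sketch (our illustration) of the one-vs-rest, macro-averaged computation used for multi-class problems:

```python
import numpy as np

def macro_metrics(y_true, y_pred, n_classes=28):
    """Macro-averaged accuracy, sensitivity, specificity, precision, and F-score."""
    rows = []
    for c in range(n_classes):
        tp = np.sum((y_pred == c) & (y_true == c))   # true positives for class c
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        tn = np.sum((y_pred != c) & (y_true != c))
        precision = tp / (tp + fp) if tp + fp else 0.0
        sensitivity = tp / (tp + fn) if tp + fn else 0.0          # recall
        specificity = tn / (tn + fp) if tn + fp else 0.0          # selectivity
        f_score = (2 * precision * sensitivity / (precision + sensitivity)
                   if precision + sensitivity else 0.0)           # harmonic mean
        accuracy = (tp + tn) / (tp + tn + fp + fn)
        rows.append([accuracy, sensitivity, specificity, precision, f_score])
    return np.mean(rows, axis=0)   # average over the 28 classes
```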
Table 1 presents the results for segmentation-based fractal texture features with and without noise. The average accuracy of the model is 99%; in the presence of Gaussian noise, the accuracy is reduced to 97%. Table 2 presents the results for the statistical moment-based texture analysis method in the absence and presence of Gaussian noise. The mean accuracy of that model is 98%; in the presence of Gaussian noise, the accuracy is reduced to 95%.
[Figure omitted. See PDF.]
[Figure omitted. See PDF.]
4. Discussion
Sensitivity, accuracy, precision, specificity, and F-score are the most important parameters used to analyze the performance of any classifier model. Accuracy represents the correctness of the classifier in predicting the correct class [29]. The results are also compared with a Convolutional Neural Network. ConvNets are a subclass of neural networks mostly employed in speech and image recognition applications. Their convolutional layers lower the high dimensionality of images with little loss of information, which makes ConvNets well suited for texture classification. The ConvNet model used here consists of five convolutional layers, four max-pooling layers, two fully connected layers, and a softmax classifier output layer. The first convolution layer uses the rectified linear unit (ReLU) activation function, which helps the model perform better and learn more quickly. Convolution and max pooling are combined from layer one to layer five: a max-pooling layer of size 2 x 2 with a stride of 2 follows each of the first four convolution layers, and the convolutions use a 3 x 3 kernel size. Fig 3 displays the comparison of the different classifiers on the scale of accuracy. The segmentation-based fractal texture feature has given the best accuracy, as shown in Fig 3.
[Figure omitted. See PDF.]
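A plausible PyTorch rendering of the described architecture is sketched below. The text fixes only the layer types, the 3 x 3 kernels, and the 2 x 2 max pooling with stride 2; the filter counts, the 256 x 256 grayscale input, the fully connected width, and the placement of pooling after the first four convolution layers are our assumptions.

```python
import torch.nn as nn

class TextureConvNet(nn.Module):
    """Five convolution layers, four 2x2 max-pool layers, two fully
    connected layers, and a softmax output, as described in the text."""
    def __init__(self, n_classes=28):
        super().__init__()
        chans = [1, 16, 32, 64, 128, 256]   # assumed filter counts
        layers = []
        for i in range(5):
            layers.append(nn.Conv2d(chans[i], chans[i + 1], kernel_size=3, padding=1))
            layers.append(nn.ReLU())
            if i < 4:                        # pooling after the first four conv layers
                layers.append(nn.MaxPool2d(kernel_size=2, stride=2))
        self.features = nn.Sequential(*layers)
        self.classifier = nn.Sequential(     # a 256x256 input pools down to 16x16
            nn.Flatten(),
            nn.Linear(256 * 16 * 16, 512),
            nn.ReLU(),
            nn.Linear(512, n_classes),
            nn.Softmax(dim=1))

    def forward(self, x):
        return self.classifier(self.features(x))
```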
Precision is a measure of consistency. It represents the ability of the classifier to return only relevant cases. Fig 4 shows the comparison of different classifiers on the scale of precision. The segmentation-based fractal texture feature has the best value in terms of precision. Sensitivity represents the ability of the classifier to recognize all relevant cases correctly. Fig 5 shows the comparison of different classifiers on the scale of sensitivity. Segmentation-based fractal texture feature has the best value of sensitivity.
[Figure omitted. See PDF.]
[Figure omitted. See PDF.]
There is always a trade-off between recall and precision, and which of the two should be maximized depends on the problem. However, there is a metric that considers both precision and recall at the same time: the F-score. Instead of balancing the two metrics separately, the aim becomes maximizing a single parameter, the F-score, which is the harmonic mean of recall and precision. Fig 6 depicts the comparison of the different classifiers on the scale of the F-score. The segmentation-based fractal texture feature has given the best value of the F-score, as shown in Fig 6.
[Figure omitted. See PDF.]
Specificity tells us what percentage of actual negative values have been correctly identified as negative. Fig 7 shows the comparison of the different classifiers on the scale of specificity. The segmentation-based fractal texture feature has the best value of specificity.
[Figure omitted. See PDF.]
The collective analysis concludes that segmentation-based fractal texture features show superior results to statistical moment-based texture features. The results indicate that SFTF is robust not only on clean images but also on noisy images. In the presence of Gaussian noise, the accuracy of the SMBTF-based model drops from 98% to 95%, whereas the accuracy of the SFTF model drops only from 99% to 97%. This confirms that SFTF features are more robust to noise than SMBTF features, as shown in Figs 6 and 7.
Table 3 presents a performance comparison of the proposed method with prominent works on classification of the Kylberg Texture Dataset. It is evident from the results that the proposed method, consisting of segmentation-based fractal texture features and a neural network, provides comparable performance in most respects.
[Figure omitted. See PDF.]
5. Conclusions
Texture-based classification is one of the prominent methods for content-based image retrieval. Most texture classifiers work in two stages. The first stage is feature extraction, which generates feature measures characterizing each texture class. It is essential to recognize and choose distinctive features that are insensitive to irrelevant image transformations such as translation, rotation, and scaling; for similar textures, the measures of the selected features should be comparable. However, designing a generalizable classifier is a challenging task, and most existing classifiers are application-specific and require varying degrees of domain expertise. Moreover, these methods suffer in the presence of noise, so it is of the utmost importance to propose and analyze texture classification methods under noise. In this work, the effect of noise is explored once the descriptor parameters have been tuned for the dataset. Two different sets of texture features have been evaluated on a texture database of twenty-eight classes in the presence of noise, with two separate experiments carried out for each set of features. In the first experiment, no degradation is considered, while in the second experiment, Gaussian noise at 40 dB PSNR is applied as a degradation factor. It is evident that the performance of the first model, which utilizes segmentation-based fractal texture features, is better than that of the second model, which uses statistical moment-based texture analysis features. In the future, this texture categorization work can be extended by analyzing performance in the presence of other image degradation factors such as blur, viewpoint variation, cluttered backgrounds, and occlusion.
References
1. Timo O., Pietikainen M. and Maenpaa T., "Multiresolution gray-scale and rotation invariant texture classification with local binary patterns." IEEE Transactions on Pattern Analysis and Machine Intelligence 24, no. 7 (2002): 971–987.
2. Varma M. and Zisserman A., "A statistical approach to texture classification from single images." International Journal of Computer Vision 62 (2005): 61–81.
3. Sundaram A., Ganesan L. and Priyal S. P., "Texture classification using Gabor wavelets based rotation invariant features." Pattern Recognition Letters 27, no. 16 (2006): 1976–1982.
4. Sonia O., Chambah M., Herbin M. and Zagrouba E., "Are existing procedures enough? Image and video quality assessment: review of subjective and objective metrics." Image Quality and System Performance V 6808 (2008): 240–250.
5. Matti P., Mäenpää T. and Viertola J., "Color texture classification with color histograms and local binary patterns." In Workshop on Texture Analysis in Machine Vision, vol. 1, pp. 109–112. New York, NY, USA: Citeseer, 2002.
6. Shirazi S., Hamad A. I., Umar S., Naz I., Razzak I. et al., "Content-based image retrieval using texture color shape and region." (2016).
7. Abuobayda S. M. and Tapamo J., "A comparative study of the use of local directional pattern for texture-based informal settlement classification." Journal of Applied Research and Technology 15, no. 3 (2017): 250–258.
8. Yang Y. and Newsam S., "Comparing SIFT descriptors and Gabor texture features for classification of remote sensed imagery." In 2008 15th IEEE International Conference on Image Processing, pp. 1852–1855. IEEE, 2008.
9. Pan Z., Li Z., Fan H. and Wu X., "Feature based local binary pattern for rotation invariant texture classification." Expert Systems with Applications 88 (2017): 238–248.
10. Ahmed K. T. and Iqbal M. A., "Region and texture based effective image extraction." Cluster Computing 21 (2018): 493–502.
11. Dong X., Zhou H. and Dong J., "Texture classification using pair-wise difference pooling-based bilinear convolutional neural networks." IEEE Transactions on Image Processing 29 (2020): 8776–8790. pmid:32866099
12. Dong Y., Wang T., Yang C., Zheng L., Song B. et al., "Locally directional and extremal pattern for texture classification." IEEE Access 7 (2019): 87931–87942.
13. Tiwari S., "An analysis in tissue classification for colorectal cancer histology using convolution neural network and colour models." International Journal of Information System Modeling and Design (IJISMD) 9, no. 4 (2018): 1–19.
14. Tiwari S., "Dermatoscopy using multilayer perceptron, convolution neural network, and capsule network to differentiate malignant melanoma from benign nevus." International Journal of Healthcare Information Systems and Informatics (IJHISI) 16, no. 3 (2021): 58–73.
15. Roy S. K., Dubey S. R., Chanda B., Chaudhuri B. B., Ghosh D. K. et al., "TexFusionNet: an ensemble of deep CNN feature for texture classification." In Proceedings of 3rd International Conference on Computer Vision and Image Processing: CVIP 2018, Volume 2, pp. 271–283. Springer Singapore, 2020.
16. Liu M., Jiao L., Liu X., Li L., Liu F. et al., "C-CNN: Contourlet convolutional neural networks." IEEE Transactions on Neural Networks and Learning Systems 32, no. 6 (2020): 2636–2649. pmid:32692683
17. Nsimba C. B. and Levada A. L., "Exploring information theory and gaussian markov random fields for color texture classification." In Image Analysis and Recognition: 17th International Conference, ICIAR 2020, Póvoa de Varzim, Portugal, June 24–26, 2020, Proceedings, Part II 17, pp. 130–143. Springer International Publishing, 2020.
18. Andrearczyk V. and Whelan P. F., "Using filter banks in convolutional neural networks for texture classification." Pattern Recognition Letters 84 (2016): 63–69.
19. Tiwari S., "Blur classification using segmentation based fractal texture analysis." Indonesian Journal of Electrical Engineering and Informatics (IJEEI) 6, no. 4 (2018): 373–384.
20. Gonzalez R. C. and Woods R. E., Digital Image Processing. Upper Saddle River, NJ: Pearson Prentice Hall, 2008.
21. Costa A. F., Humpire-Mamani G. and Traina A. J. M., "An efficient algorithm for fractal analysis of textures." In 2012 25th SIBGRAPI Conference on Graphics, Patterns and Images, pp. 39–46. IEEE, 2012.
22. Abbas H., Ghali F. and Alhassan M. S., "Image classification schemes for statistical moments of wavelet and gradient matrix." In 2022 International Congress on Human-Computer Interaction, Optimization and Robotic Applications (HORA), pp. 1–6. IEEE, 2022.
23. Haldorai A. and Ramu A., "Canonical correlation analysis based hyper basis feed-forward neural network classification for urban sustainability." Neural Processing Letters 53, no. 4 (2021): 2385–2401.
24. Elamir M. S., Gotzig H., Zöllner R. and Mäder P., "A feed-forward neural network for direction-of-arrival estimation." The Journal of the Acoustical Society of America 147, no. 3 (2020): 2035–2048.
25. Demuth H. and Beale M., Neural Network Toolbox for Use with MATLAB: User's Guide; Computation, Visualization, Programming. MathWorks Incorporated, 1998.
26. Kylberg G., Kylberg Texture Dataset v. 1.0. Centre for Image Analysis, Swedish University of Agricultural Sciences and Uppsala University, 2011.
27. Goutte C. and Gaussier E., "A probabilistic interpretation of precision, recall and F-score, with implication for evaluation." In Advances in Information Retrieval: 27th European Conference on IR Research, ECIR 2005, Santiago de Compostela, Spain, March 21–23, 2005, Proceedings, pp. 345–359. Springer Berlin Heidelberg, 2005.
28. Juba B. and Le H. S., "Precision-recall versus accuracy and the role of large data sets." In Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, no. 01, pp. 4039–4048. 2019.
29. Tiwari S., Jain A., Sharma A. K. and Almustafa K. M., "Phonocardiogram signal based multi-class cardiac diagnostic decision support system." IEEE Access 9 (2021): 110710–110722.
30. Aggarwal A. and Kumar M., "Image surface texture analysis and classification using deep learning." Multimedia Tools and Applications 80 (2021): 1289–1309.
31. Kaya Y., Ertuğrul Ö. F. and Tekin R., "Two novel local binary pattern descriptors for texture analysis." Applied Soft Computing 34 (2015): 728–735.
32. Kylberg G. and Sintorn I. M., "Evaluation of noise robustness for local binary pattern descriptors in texture classification." EURASIP Journal on Image and Video Processing 2013 (2013): 1–20.
Citation: Tiwari S, Sharma AK, Abdul Aziz I, Gupta D, Jain A, Mahdin H, et al. (2025) Investigations on segmentation-based fractal texture for texture classification in the presence of Gaussian noise. PLoS ONE 20(1): e0315135. https://doi.org/10.1371/journal.pone.0315135
About the Authors:
Shamik Tiwari
Roles: Conceptualization, Data curation, Formal analysis, Investigation, Methodology
Affiliation: School of Computer Science & Engineering, IILM University, Gurugram, India
Akhilesh Kumar Sharma
Roles: Funding acquisition, Methodology, Project administration, Resources, Writing – review & editing
E-mail: [email protected] (AKS); [email protected] (IAA)
Affiliation: Department of Data Science & Engineering, School of Information Security & Data Science, Manipal University Jaipur, Jaipur, Rajasthan, India
ORCID: https://orcid.org/0000-0002-7308-7800
Izzatdin Abdul Aziz
Roles: Funding acquisition, Resources, Supervision, Validation, Visualization
E-mail: [email protected] (AKS); [email protected] (IAA)
Affiliation: Center for Research in Data Science (CeRDaS), Computer and Information Science Department (CISD), Universiti Teknologi PETRONAS (UTP), Seri Iskandar, Perak Darul Ridzuan, Malaysia
Deepak Gupta
Roles: Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Writing – review & editing
Affiliation: Department of Computer Science and Engineering, Institute of Technology & Management, Gwalior, India
Antima Jain
Roles: Conceptualization, Funding acquisition, Investigation, Project administration, Resources, Supervision, Validation
Affiliation: School of Computer Science & Engineering, VIT University, Bhopal, India
Hairulnizam Mahdin
Roles: Conceptualization, Funding acquisition, Investigation, Resources, Supervision, Validation, Writing – review & editing
Affiliation: Faculty of Computer Science and Information Technology, Universiti Tun Hussein Onn Malaysia, Batu Pahat, Johor, Malaysia
ORCID: https://orcid.org/0000-0002-2275-0094
Senthil Athithan
Roles: Conceptualization, Investigation, Software, Validation, Visualization
Affiliation: Koneru Lakshmaiah Education Foundation, Vaddeswaram, Guntur, Andhra Pradesh, India
Rahmat Hidayat
Roles: Formal analysis, Resources
Affiliation: Department of Information Technology, Politeknik Negeri Padang, Padang, Sumatera Barat, Indonesia
© 2025 Tiwari et al. This is an open access article distributed under the terms of the Creative Commons Attribution License: http://creativecommons.org/licenses/by/4.0/ (the "License"), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.