About the Authors:
Anny K. G. Rodrigues
Roles Data curation, Formal analysis, Investigation, Methodology, Resources, Software, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing
Affiliation: Departamento de Estatística, CASTLab, CCEN, Universidade Federal de Pernambuco, Cidade Universitária, Recife, PE, Brazil
Raydonal Ospina
Roles Conceptualization, Formal analysis, Investigation, Methodology, Project administration, Software, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing
* E-mail: [email protected]
Affiliation: Departamento de Estatística, CASTLab, CCEN, Universidade Federal de Pernambuco, Cidade Universitária, Recife, PE, Brazil
ORCID: https://orcid.org/0000-0002-9884-9090
Marcelo R. P. Ferreira
Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Resources, Software, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing
Affiliation: Departamento de Estatística, DataLab, Centro de Ciências Exatas e da Natureza, Universidade Federal da Paraíba, João Pessoa, PB, Brazil
Abstract
Many machine learning procedures, including cluster analysis, are often affected by missing values. This work proposes and evaluates a kernel fuzzy c-means clustering algorithm based on the kernelization of the metric with local adaptive distances (VKFCM-K-LP) under three strategies for dealing with missing data. The first strategy, called Whole Data Strategy (WDS), performs clustering only on the complete part of the dataset, i.e. it discards all instances with missing data. The second approach uses the Partial Distance Strategy (PDS), in which partial distances are computed over all available feature values and then re-scaled by the reciprocal of the proportion of observed values. The third technique, called Optimal Completion Strategy (OCS), computes missing values iteratively as auxiliary variables in the optimization of a suitable objective function. The clustering results were evaluated according to different metrics. The best performance of the clustering algorithm was achieved under the PDS and OCS strategies. Under the OCS approach, new datasets were derived and the missing values were estimated dynamically during the optimization process. The results of clustering under the OCS strategy also showed superior performance when compared to the clusters obtained by applying the VKFCM-K-LP algorithm after imputing the missing values with the mean or the median of the observed values.
Citation: Rodrigues AKG, Ospina R, Ferreira MRP (2021) Adaptive kernel fuzzy clustering for missing data. PLoS ONE 16(11): e0259266. https://doi.org/10.1371/journal.pone.0259266
Editor: Afnizanfaizal Abdullah, Universiti Teknologi Malaysia, Malaysia
Received: June 4, 2021; Accepted: October 17, 2021; Published: November 12, 2021
Copyright: © 2021 Rodrigues et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: All codes and dataset used to produce the results of the paper are available without restriction and documented in the open repository at Github. Visit the website: https://github.com/AnnyKerol/clustering_for_missing_data.
Funding: The author(s) received no specific funding for this work.
Competing interests: The authors have declared that no competing interests exist.
1 Introduction
The incessant increase in volume and variety of data requires advances in methodologies in order to understand, process and summarize data automatically. Cluster analysis is one of the main unsupervised techniques that are used to extract knowledge from data, due to its ability to aid in the process of understanding and visualizing data structures [1, 2].
The main goal in clustering is to organize the data (observations, data items, images, pixels etc.) based on similarity (or dissimilarity) criteria such that observations belonging to the same group show high degrees of similarity, while observations in different groups show high degrees of dissimilarity [3, 4].
Clustering methods are widely used in many areas of knowledge, such as taxonomy, data mining, image segmentation, pattern recognition, information retrieval, computer vision, and so forth [3, 5]. Depending on the application considered, the groups obtained in clustering may present different characteristics. Thus, different clustering techniques have been proposed in the literature, with the most popular ones being based on hierarchies and partitions. In hierarchical clustering algorithms, structures are found such that they can be recursively divided into levels. The output is a nested sequence of partitions of the input data known as a dendrogram [6].
In partitioning clustering methods, a single partition of the dataset is obtained, generally based on the optimization of a suitable objective function [5]. These methods are more flexible than the hierarchical ones because they allow observations to change groups at each step of the algorithm, if that change leads to a better solution in terms of the variability of the resulting partition. Partitioning clustering methods can be divided into two main branches: hard (or crisp) and fuzzy (or soft). In hard clustering methods, the groups are naturally disjoint, that is, the dataset is partitioned into a predefined number of groups and overlapping is not allowed, which means that each instance may belong exactly to one cluster.
In real world applications, group boundaries are often difficult to define, as it is complex to find reasonable criteria that include some data objects in a cluster, but exclude others. Trying to solve this problem, methods that allow more flexible criteria, such as fuzzy clustering algorithms, were proposed in the literature. In fuzzy clustering, an instance may belong simultaneously to all clusters with a certain membership degree [7, 8]. Fuzzy clustering methods offer good capability to handle noisy/missing data, which is a common problem in different areas, including microarray data analysis [3, 4, 9–11].
The most important component of any clustering algorithm is the dissimilarity (or similarity) measure. Distances are important examples of dissimilarity measures, and the Euclidean distance is the most commonly used in the clustering literature. The Fuzzy C-Means (FCM) method [12] is one of the most popular clustering algorithms and is based on the Euclidean distance. Algorithms based on this distance achieve good results when applied to datasets in which groups are approximately hyperspherical and approximately linearly separable [13]. In the opposite situation (i.e., clusters with non-hyperspherical shapes and/or linearly non-separable patterns), these algorithms may perform poorly and find unrepresentative clusters.
The seminal work by Girolami [14] introduced the kernel K-means algorithm, which was later generalized into several kernel-based clustering methods [15] that produce hypersurfaces with nonlinear separation between groups, such as the Kernel Fuzzy C-Means [5, 16, 17], kernel-based Self-Organizing Maps (SOM) [18, 19], Kernel Neural Gas [20] and Kernel Subtractive Clustering [21, 22]. Several studies have shown the superiority of kernel-based clustering methods in a variety of real-world problems [23–27].
The use of kernel functions allows an arbitrary nonlinear mapping ϕ from the original p-dimensional space of the dataset to a higher-dimensional (possibly infinite-dimensional) space, called the feature space. The purpose of this transformation is that, by moving to higher dimensions, it may be possible to obtain more clearly defined and linearly separable groups [28]. The advantage and, at the same time, the main idea of methods based on kernel functions is that inner products in the feature space can be expressed as a Mercer kernel [14, 29]. Two main approaches have guided the development of kernel-based algorithms: kernelization of the metric, in which the cluster prototypes are obtained in the original space and the distances between instances and cluster prototypes are computed by means of kernels; and clustering in feature space, in which cluster prototypes are obtained in the feature space [17].
Research studies have shown that clustering methods based on kernel functions perform better than traditional methods, as they are able to produce nonlinear differentiable separating hypersurfaces between groups [5, 17]. However, in most domains, and especially when dealing with high-dimensional datasets, some variables may be irrelevant for the construction of the groups, and, among the relevant ones, some may be less important than others with respect to a specific group. Ferreira et al. [13] proposed a family of methods based on kernel functions with automatic weighting of variables. These methods were derived based on kernelized adaptive distances that change at each iteration of the algorithm and can be different for each group or common to all groups. In this context, we consider the kernel fuzzy c-means clustering under the kernelization of the metric approach with local adaptive distances, assuming the constraint that the product of the weights of the variables in each cluster must be equal to one. In this work, we refer to this algorithm as VKFCM-K-LP.
Ferreira et al. [13] focused on developing methods that are able to better describe the structure of groups in data; however, they did not investigate the performance of the algorithms in the context of missing data. In real world applications, many inferential procedures have to deal with the problem of missing data. There are several reasons for this problem, including imperfect manual data entry procedures, incorrect measurements and equipment errors, among others [30].
In many areas, such as Industry and Medicine, it is common to find datasets that have up to 50% or more of missing values [31, 32]. Extensive research has been done to study the problem of missing data, and the reason for this is the fact that many statistics were originally developed for datasets with no missing values, and even a small amount of them in the dataset can cause serious problems in analysis and decision making. This is enough to motivate the need to develop efficient mechanisms to deal with incomplete data [33].
The development of statistical methods to deal with incomplete data has been the subject of research for decades [34–36]. Green et al. [37] assessed two alternatives for dealing with missing values: Imputation, in which the missing values are estimated through the values observed in the dataset, of which the most popular techniques are Average Imputation or Median Imputation; and Exclusion, where observations that contain missing values are excluded from the dataset. Although simple, these alternatives can produce biased estimates through the reduction of the size of the dataset and by replacing these missing values with estimates [35]. A more effective approach can be to adapt traditional data analysis to deal with incomplete data.
Several approaches have been introduced in an attempt to extend clustering techniques to the presence of missing values. One of the first attempts [38] introduced an approach based on probabilistic assumptions to handle missing data in pattern recognition. The Expectation-Maximization (EM) algorithm was used to deal with incomplete data in clustering [39]. Several methods have been proposed to adapt the FCM method to deal with missing data [40]. Wagstaff [41] proposed the K-means method with Soft Constraints (KSC), and Poddar et al. [42] examined clustering of data with missing entries using non-convex fusion penalties.
Hathaway [43] proposed strategies to deal with missing values in cluster analysis using the FCM method. Li et al. [44, 45] proposed the FCM clustering method based on nearest-neighbor observations and extended the FCM method by adding a variable weighting process to handle incomplete data, in which the weight of each attribute is seen as an additional variable to be optimized simultaneously in clustering. Recently, Li et al. [46] introduced a kernel method to cluster datasets with missing values in the scope of imputation of observations.
In this work, we adapted the VKFCM-K-LP clustering method [13, 43] to deal with missing data. The first strategy, called Whole Data Strategy (WDS), performs clustering only on the complete part of the dataset, which means that instances containing any missing value are excluded from the analysis. The WDS can be applied as long as missing values do not exceed 25% of all values in the dataset. The second approach uses the Partial Distance Strategy (PDS), in which partial distances are computed over all available feature values and then re-scaled by the reciprocal of the proportion of observed values. The third technique, called Optimal Completion Strategy (OCS), computes missing values iteratively as auxiliary variables in the optimization of a suitable objective function.
In the evaluation of the VKFCM-K-LP method under the WDS, PDS and OCS approaches, we considered artificially generated datasets with 5%, 10%, 15% and 20% of missing values. The results of the analyses were quantified according to the following quality measures: the Corrected Rand index (CR), the F-measure (FM), the Overall Error Rate of Classification (OERC) and, for the OCS, a measure of consistency of the variables [47–50]. In addition, the clustering results under OCS were compared with the results of clustering after imputation of the missing values by the mean and median values.
The rest of the paper is structured as follows. In Section 2 the basic theory about kernels is briefly presented. Section 3 describes the conventional kernel fuzzy C-means (KFCM) algorithm under the kernelization of the metric approach. Section 4 presents the kernel-based fuzzy clustering with variable weighting via local adaptive distances under the kernelization of the metric approach (VKFCM-K-LP). Section 5 introduces the main approach to analyze missing data. New VKFCM-K-LP algorithms under the WDS, PDS and OCS schemes are proposed in Section 6. Section 7 proposes the experimental design. Section 8 contains the results of several numerical evaluations. Finally, Section 9 offers some concluding remarks.
2 Theoretical background
This section describes the basic theory of kernels. The main idea behind kernel-based methods is the use of an arbitrary nonlinear mapping ϕ from the original space of the input data to a space of higher (possibly infinite) dimension, called the feature space.
Let X = {x1, x2, …, xn} be a non-empty set with xi ∈ ℝ^p. A function K: X × X → ℝ is a Mercer kernel if and only if K is symmetric, i.e. K(xi, xl) = K(xl, xi), and the following inequality is valid [29]:

(1) \sum_{i=1}^{n} \sum_{l=1}^{n} c_i c_l \, K(x_i, x_l) \geq 0,

where ci ∈ ℝ, i = 1, …, n. Each Mercer kernel can be expressed as

(2) K(x_i, x_l) = \langle \phi(x_i), \phi(x_l) \rangle,

in which ϕ performs a nonlinear mapping from the original space of X to the high-dimensional feature space.
One of the most relevant aspects in the application of kernel-based methods is the possibility of computing Euclidean distances in the feature space without having to explicitly specify the nonlinear mapping ϕ [51, 52].
This can be done using the so-called distance kernel trick [52, 53]:

(3) \|\phi(x_i) - \phi(x_l)\|^2 = K(x_i, x_i) - 2\,K(x_i, x_l) + K(x_l, x_l),

so that distances in the feature space are computed as a function of the input vectors. Typically used kernel functions [54] are:
* Linear: K(x_i, x_l) = x_i^{\top} x_l,
* Polynomial of degree d: K(x_i, x_l) = (\gamma \, x_i^{\top} x_l + \theta)^d, γ > 0, θ > 0, d ∈ ℕ,
* Gaussian: K(x_i, x_l) = \exp\left(-\frac{\|x_i - x_l\|^2}{2\sigma^2}\right), σ > 0,
* Laplacian: K(x_i, x_l) = \exp\left(-\gamma \, \|x_i - x_l\|_1\right), γ > 0,
where γ, θ, σ and d are kernel parameters. In the literature, kernel-based clustering methods can be divided into two main categories: kernelization of the metric [16, 55] and clustering in feature space [56]. In this work, we consider only the kernelization of the metric approach. Under this approach, clustering methods seek prototypes in the original space of the input data, and the distance between a data point xi and the prototype vk of the k-th group is obtained by means of kernel functions:

(4) \|\phi(x_i) - \phi(v_k)\|^2 = K(x_i, x_i) - 2\,K(x_i, v_k) + K(v_k, v_k).
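To make the kernelized distance of Eq (4) concrete, the short Python sketch below computes the squared feature-space distance between a point and a prototype directly from kernel evaluations, assuming a Gaussian kernel; the function and variable names are illustrative only, not the authors' implementation.

```python
import numpy as np

def gaussian_kernel(x, y, two_sigma_sq=1.0):
    """Gaussian kernel K(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    return np.exp(-np.sum((x - y) ** 2) / two_sigma_sq)

def kernelized_sq_distance(x, v, kernel=gaussian_kernel):
    """Squared feature-space distance via the kernel trick (Eq 4):
    ||phi(x) - phi(v)||^2 = K(x, x) - 2 K(x, v) + K(v, v)."""
    return kernel(x, x) - 2.0 * kernel(x, v) + kernel(v, v)

x = np.array([5.1, 3.5, 1.4, 0.2])   # an observation
v = np.array([5.0, 3.4, 1.5, 0.2])   # a cluster prototype
print(kernelized_sq_distance(x, v))  # equals 2 * (1 - K(x, v)) for the Gaussian kernel
```

For the Gaussian kernel, K(x, x) = 1, which is why the distance collapses to 2(1 − K(x, v)), the quantity used throughout the objective functions below.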
3 Kernel fuzzy C-means (KFCM)
Let Ω = {1, …, n} be a set of n observations indexed by i and described by p variables. Let P = {P1, P2, …, PK} be a partition of Ω into K groups. The purpose of the kernel fuzzy c-means clustering method under kernelization of the metric is to minimize the following objective function

(5) J = \sum_{k=1}^{K} \sum_{i=1}^{n} (u_{ki})^m \, \|\phi(x_i) - \phi(v_k)\|^2, \quad \text{subject to } u_{ki} \in [0, 1] \text{ and } \sum_{k=1}^{K} u_{ki} = 1, \; i = 1, \dots, n,

where vk ∈ ℝ^p is the prototype of the k-th cluster, k = 1, …, K, uki is the fuzzy membership degree of observation i to the k-th cluster, k = 1, …, K, i = 1, …, n, and m ∈ (1, ∞) is a parameter that controls the fuzziness of the membership for each observation i. Here, U = [u_{ki}] is the fuzzy partition matrix. Deriving prototypes for the clusters depends on the choice of the kernel function. When considering the Gaussian kernel, the most popular in the literature, we have that K(xi, xi) = 1 for all i = 1, …, n. Thus, the objective function described in Eq (5) can be expressed, as in Graves et al. [57], by Eq (6):

(6) J = 2 \sum_{k=1}^{K} \sum_{i=1}^{n} (u_{ki})^m \, \big(1 - K(x_i, v_k)\big),

and therefore the cluster prototypes are defined, for k = 1, …, K, as

(7) v_k = \frac{\sum_{i=1}^{n} (u_{ki})^m \, K(x_i, v_k) \, x_i}{\sum_{i=1}^{n} (u_{ki})^m \, K(x_i, v_k)}.

When updating the fuzzy partition matrix U, the prototypes vk are kept fixed and we need to find the fuzzy membership degrees uki (k = 1, …, K, i = 1, …, n). Using Lagrange multipliers for the optimization of the objective function J, subject to the restrictions in Eq (5), we have the following solution [57]:

(8) u_{ki} = \left[ \sum_{h=1}^{K} \left( \frac{1 - K(x_i, v_k)}{1 - K(x_i, v_h)} \right)^{\frac{1}{m-1}} \right]^{-1}.
4 Kernel-based fuzzy clustering with automatic variable weighting via local adaptive distance
Kernel-based clustering methods commonly found in the literature, such as the kernel Fuzzy C-Means [58], do not take into account the weights or the relevance of each variable in the clustering process. However, for the majority of the datasets, and especially if we are dealing with high-dimensional data, some variables may be irrelevant, and, among the relevant variables, some may present greater or lesser importance than others. Moreover, different groups can have different sets of relevant variables. Motivated by this problem, Ferreira et al. [13] proposed a family of kernel-based fuzzy clustering methods with automatic weighting of variables, which are clustering algorithms in which dissimilarity measures are obtained as sums of Euclidean distances between patterns and cluster prototypes computed separately for each variable. The main idea supporting these methods is that the sum of kernel functions applied on each variable is also a kernel function. This reasoning enables the introduction of weights representing the relevance of each variable.
The clustering method VKFCM-K-LP takes into account the weights or the relevance of each variable for the construction of the clusters [13]. This clustering method is based on a kernelized local adaptive distance with the constraint that the product of the weights of the variables on each cluster must be equal to 1. The algorithm considers a separate weight vector for each cluster in order to parameterize its local distances. Then, the closer the observations are to the prototype of a given cluster with respect to a given variable, the greater its importance to this cluster. The restrictions on the weight vector in the VKFCM-K-LP method are based on hard clustering via adaptive distances and on fuzzy quadratic distances [59, 60].
Result 1 (Schölkopf and Smola [53]). If K_1: X_1 × X_1 → ℝ and K_2: X_2 × X_2 → ℝ are kernel functions, then the sum K((x_1, x_2), (y_1, y_2)) = K_1(x_1, y_1) + K_2(x_2, y_2) is a kernel function defined in (X_1 × X_2) × (X_1 × X_2), where x_1, y_1 ∈ X_1 and x_2, y_2 ∈ X_2.
Under this result, if an instance is represented by a vector of p variables, we can partition it into up to p parts and consider up to p different kernel functions, one for each part. Formally, we have that K(x_i, x_l) = \sum_{j=1}^{p} K_j(x_{ij}, x_{lj}), where the K_j are kernel functions and X_j is the space of the j-th variable, j = 1, …, p. Therefore, a distance based on kernelizing the metric between an instance xi and the k-th prototype vk with respect to the j-th variable [51, 52] is defined by

(9) \|\phi_j(x_{ij}) - \phi_j(v_{kj})\|^2 = K_j(x_{ij}, x_{ij}) - 2\,K_j(x_{ij}, v_{kj}) + K_j(v_{kj}, v_{kj}),

in which ϕj, j = 1, …, p, is a nonlinear mapping of the j-th component of xi ∈ X into the feature space concerning the j-th variable. In Eq (9) it is possible to introduce weights representing the relevance of each variable. Let φ²(xi, vk) be a distance measure based on kernelization of the metric between an observation xi and the prototype vk of the k-th cluster. Thus, the local adaptive distance φ²(xi, vk), with the restriction that the product of the weights of the variables in each cluster is equal to 1 [61], is given by

(10) \varphi^2(x_i, v_k) = \sum_{j=1}^{p} \lambda_{kj} \, \|\phi_j(x_{ij}) - \phi_j(v_{kj})\|^2, \quad \text{with } \lambda_{kj} > 0 \text{ and } \prod_{j=1}^{p} \lambda_{kj} = 1,

where λk = (λk1, …, λkp) is the vector of weights for the k-th cluster. Given Eqs (9) and (10), we can define an objective function J that measures the fit between the clusters and their prototypes:

(11) J = \sum_{k=1}^{K} \sum_{i=1}^{n} (u_{ki})^m \, \varphi^2(x_i, v_k),

subject to the constraints given in Eq (5), where uki is the fuzzy membership degree of observation i in the k-th cluster, k = 1, …, K, i = 1, …, n, and vk ∈ ℝ^p is the prototype of the k-th cluster.
When considering the Gaussian kernel, the objective function described in Eq (11) is rewritten as

(12) J = 2 \sum_{k=1}^{K} \sum_{i=1}^{n} (u_{ki})^m \sum_{j=1}^{p} \lambda_{kj} \, \big(1 - K_j(x_{ij}, v_{kj})\big).

While deriving cluster prototypes, the fuzzy membership degrees and the weights of the variables are kept fixed. Therefore, the prototype of the k-th cluster vk = (vk1, …, vkp) (k = 1, …, K) that minimizes criterion J in Eq (12) has its components vkj (j = 1, …, p) defined by

(13) v_{kj} = \frac{\sum_{i=1}^{n} (u_{ki})^m \, K_j(x_{ij}, v_{kj}) \, x_{ij}}{\sum_{i=1}^{n} (u_{ki})^m \, K_j(x_{ij}, v_{kj})},

computed iteratively for t = 1, …, T, where T is the maximum number of iterations. The next step is to determine the weights of the variables. To do so, the fuzzy membership degrees uki and the cluster prototypes vk are kept fixed. The weight vector λk = (λk1, …, λkp) that minimizes criterion J, under the restrictions λkj > 0 ∀ k, j and \prod_{j=1}^{p} \lambda_{kj} = 1 ∀ k, has its components λkj (j = 1, …, p, k = 1, …, K) given by

(14) \lambda_{kj} = \frac{\left\{ \prod_{h=1}^{p} \left[ \sum_{i=1}^{n} (u_{ki})^m \, \big(1 - K_h(x_{ih}, v_{kh})\big) \right] \right\}^{1/p}}{\sum_{i=1}^{n} (u_{ki})^m \, \big(1 - K_j(x_{ij}, v_{kj})\big)}.

While updating the fuzzy membership degrees, the prototypes of the clusters vk and the weights of the variables are kept fixed. Therefore, the fuzzy membership degrees that minimize criterion J, subject to the constraints given in Eq (5), are updated according to the following expression

(15) u_{ki} = \left[ \sum_{h=1}^{K} \left( \frac{\varphi^2(x_i, v_k)}{\varphi^2(x_i, v_h)} \right)^{\frac{1}{m-1}} \right]^{-1},

where φ²(xi, vk) is defined in Eq (10). Algorithm 1 shows the steps of the VKFCM-K-LP method. The convergence properties of the method were demonstrated in [13].
Algorithm 1: VKFCM-K-LP clustering method
1: Initialization
Fix K (number of clusters), 2 ≤ K < n; fix m, 1 < m < ∞; fix T (number of iterations);
and fix ϵ, 0 < ϵ < 1. Randomly initialize the fuzzy membership degrees uki with the restrictions given in Eq (5);
Uniformly initialize all weights as 1/p.
Do t = 1.
2: Update prototype vector vk according to Eq (13).
3: Update weight vector λk according to Eq (14).
4: Update fuzzy membership degree uki according to Eq (15).
5: IF |Jt+1 − Jt| ≤ ϵ or t > T
STOP
ELSE do t = t + 1 and go to step 2.
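The Python/NumPy sketch below illustrates one pass of the update cycle of Algorithm 1, i.e. Eqs (13)-(15) under a per-variable Gaussian kernel. It is a minimal illustrative reading, not the authors' C implementation; the function and array names, the fixed kernel widths and the vectorized layout are assumptions.

```python
import numpy as np

def gauss(a, b, two_sigma_sq):
    # Per-variable Gaussian kernel K_j(a, b) = exp(-(a - b)^2 / (2 sigma_j^2)).
    return np.exp(-((a - b) ** 2) / two_sigma_sq)

def vkfcm_k_lp_step(X, U, V, two_sigma_sq, m=2.0):
    """One update cycle (prototypes -> weights -> memberships) of Algorithm 1.
    X: (n, p) data, U: (K, n) memberships, V: (K, p) prototypes,
    two_sigma_sq: scalar or (p,) array holding the kernel width terms 2*sigma_j^2."""
    Um = U ** m                                                  # (K, n)

    # Eq (13): prototype components as kernel-weighted means, one per variable.
    Kmat = gauss(X[None, :, :], V[:, None, :], two_sigma_sq)     # (K, n, p)
    W = Um[:, :, None] * Kmat
    V = (W * X[None, :, :]).sum(axis=1) / W.sum(axis=1)

    # Eq (14): variable weights; their product over j equals 1 in each cluster.
    Kmat = gauss(X[None, :, :], V[:, None, :], two_sigma_sq)
    D = (Um[:, :, None] * 2.0 * (1.0 - Kmat)).sum(axis=1)        # (K, p)
    lam = np.exp(np.log(D).mean(axis=1, keepdims=True)) / D      # geometric mean over j, divided by D

    # Eq (10) and Eq (15): local adaptive kernelized distances and memberships.
    phi2 = (lam[:, None, :] * 2.0 * (1.0 - Kmat)).sum(axis=2)    # (K, n)
    U = 1.0 / ((phi2[:, None, :] / phi2[None, :, :]) ** (1.0 / (m - 1.0))).sum(axis=1)
    return U, V, lam
```

Repeating this step until the change in J falls below ϵ (or T iterations are reached) reproduces the loop structure of Algorithm 1.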
5 Incomplete data analysis
Data quality is one of the most important factors that can affect the results of statistical analysis. Problems during data collection or pre-processing can generate uncertain values, incorrect or even absent values. Data analysis with missing data is a problem often discussed in many areas of science, because these analyses were originally designed for datasets without missing values. Although the causes of missing data are diverse in the literature, there are few missing data patterns resulting from the missing values in the datasets. The missing data pattern describes which values are observed and which values are absent from the dataset [35].
Generally, the most common missing data patterns are the multivariate, monotone, general and file-matching patterns [35]. In the multivariate pattern (Fig 1a), missing values occur in a group of attributes that are completely observed or missing. The monotone pattern (Fig 1b) usually occurs as a result of longitudinal studies and has a ladder-like arrangement of values when organized in a data matrix. The file-matching pattern (Fig 1d) occurs when the data are obtained from several different sources and, consequently, the combined dataset will have fully observed attributes and features that are not jointly observed.
[Figure omitted. See PDF.]
Fig 1. Types of missing data patterns.
(a) Multivariate. (b) Monotone. (C) General. (d) File-matching.
https://doi.org/10.1371/journal.pone.0259266.g001
In the general pattern (Fig 1c) the missing values are characterized by an arbitrary form in the dataset and can be observed in practice for example, in the omission of responses in a questionnaire or loss of data in pre-processing.
Although missing data patterns describe what values are missing from the dataset, missing data generation mechanisms provide information about the occurrence of these values. Missing data generation mechanisms refer to the relationship between the missing value and the attribute values of the variables in the dataset. Therefore, whereas a missing data pattern indicates what values in the dataset can be used for statistical analysis, mechanisms provide an indication of how the available values should be treated during data analysis to obtain the best results.
The first works dealing with missing data generation mechanisms were proposed by Rubin [34] and are still used today. These mechanisms are known as Missing Completely at Random (MCAR), Missing at Random (MAR) and Not Missing at Random (NMAR), and they describe the relationship between the analyzed variables and the percentage of missing values in the data matrix [62, 63]. In this work, we focus on strategies for dealing with missing data of the MCAR type [35]. Let X = {x1, …, xn} be a data matrix with p-dimensional rows xi = (xi1, xi2, …, xip), 1 ≤ i ≤ n, where xij, 1 ≤ j ≤ p, is the value of the j-th variable for the i-th observation. We can write X = Xobs ∪ XM, where Xobs contains the values xij that are observed in X and XM contains the values xij that are missing (xij = NA). In this context, we define a missing-indicator matrix M = [mij] that records whether the value xij is missing (mij = 1) or observed (mij = 0). The missing data generation mechanism is defined as the conditional probability of M given X, P(M | X, θ), where θ denotes the unknown parameters of a given probability distribution. Values are MCAR if their missingness does not depend on the dataset. Formally, this mechanism is defined as

(16) P(M \mid X, \theta) = P(M \mid \theta) \quad \text{for all } X, \theta.

From a practical perspective, missing data mechanisms operate as assumptions that dictate which techniques should be used to deal with these values [62].
5.1 Handling missing values
Traditionally, researchers use a wide variety of techniques to handle missing values. However, the best method would be to avoid having these values in the dataset, through better experiment mapping or repeated data collection. Nonetheless, investigating why these values are absent and taking corrective measures can become impracticable or impossible. Therefore, it is usually more feasible to adopt techniques that deal with missing values in the data matrix. There are three common approaches in the literature to manipulate missing values [35]:
* Elimination: This technique is best used when the percentage of missing values in the dataset is relatively small. The approach is to ignore missing data items or the attributes that contain those values. Therefore, data analysis is performed on the set of available data, called Complete-Case Analysis (CCA). The main advantage of exclusion is that it produces a complete dataset, which in turn allows the use of standard data analysis techniques [62]. The disadvantage of this technique is that the sample size can be drastically reduced, especially for datasets that include a large proportion of missing data.
* Imputation: This approach, which is called Imputation of Missing Values (IMV), consists of replacing the missing values with estimated values that are generally derived from the available data. IMV techniques range from simple methods, such as replacing missing values with the Mean or the Median value, to more sophisticated ones that use Regression, Maximum Likelihood and other statistical methods [63]. The disadvantage of this approach is that the quality of the results of the data analysis can be affected by the imputation, since imputed values are treated as observed values. As an advantage, standard analysis techniques can be used since the missing values have been filled.
* Adaptation of data analysis methods to incomplete data: An effective approach is to adapt data analysis methods so that they can handle datasets that have missing values. These methods include estimating missing values during data analysis and distinguishing between observed and imputed values. The main advantage of the adaptation approach is that all observed data can be used for data analysis, avoiding the disadvantages of imputing the missing values.
6 Adapting the VKFCM-K-LP algorithm to handle missing data
The VKFCM-K-LP clustering method [13] cannot be applied directly to datasets with missing values. As with most clustering methods, VKFCM-K-LP requires all values in the data matrix to be present, in order to calculate prototypes and distance measurements. Several methods have been proposed in the literature to deal with incomplete data, such as Hathaway et al. [43], who proposed three strategies to group incomplete data using the Fuzzy C-Means algorithm (FCM). In this Section, we use these three approaches to adapt the VKFCM-K-LP clustering algorithm to deal with incomplete data.
6.1 Whole Data Strategy (WDS)
This strategy consists of omitting the incomplete data items and applying the VKFCM-K-LP algorithm to the resulting complete data matrix [43]. This method is an example of CCA, since the missing values are not included in the calculation of the cluster prototypes, and it can be applied when the percentage of missing data is relatively small. It is generally suggested that WDS be considered only when missing values amount to less than 25% of all values in the dataset [43]. However, incomplete observations are not entirely excluded from the analysis: at the end of the clustering process on the complete portion of the data, incomplete instances are assigned to clusters using a nearest-prototype scheme based on Partial Distances (PD) computed from each incomplete instance to each cluster prototype. The PD function sums the squared (kernelized) distances over the available (i.e. non-missing) variables and then rescales the sum by the reciprocal of the proportion of observed values. Algorithm 2 describes the steps of WDS.
Algorithm 2: VKFCM-K-LP clustering method with the WDS strategy.
1: Initialization
Fix K (number of clusters), 2 ≤ K < n; fix m, 1 < m < ∞;
fix T (number of iterations); and fix ϵ, 0 < ϵ < 1.
Randomly initialize the fuzzy membership degrees uki;
Uniformly initialize all weights as 1/p.
Do t = 1.
2: Update prototype vector vk according to Eq (13).
3: Update weight vector λk according to Eq (14).
4: Update fuzzy membership degree uki using Eq (15).
5: IF |Jt+1 − Jt| ≤ ϵ OR t > T
Partition XM according to Eq (17)
STOP
ELSE do t = t + 1 and go to step 2.
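A minimal Python sketch of the final WDS step: after clustering the complete rows, each incomplete instance is assigned to the cluster whose prototype is nearest under the partial distance of Eq (17) in the next subsection. The names, the NaN-based missing-value encoding and the per-variable Gaussian kernel widths are assumptions for illustration.

```python
import numpy as np

def partial_phi2(x, v, lam, two_sigma_sq):
    """Partial local adaptive kernelized distance between an incomplete
    instance x (np.nan marks missing entries) and a prototype v.
    two_sigma_sq: (p,) array of per-variable Gaussian widths 2*sigma_j^2."""
    obs = ~np.isnan(x)                                           # indicator I_ij
    K = np.exp(-((x[obs] - v[obs]) ** 2) / two_sigma_sq[obs])
    return (x.size / obs.sum()) * np.sum(lam[obs] * 2.0 * (1.0 - K))

def assign_incomplete(X_missing, V, lam, two_sigma_sq):
    """Nearest-prototype labels for incomplete rows (WDS post-processing).
    V: (K, p) prototypes, lam: (K, p) variable weights."""
    return np.array([np.argmin([partial_phi2(x, V[k], lam[k], two_sigma_sq)
                                for k in range(V.shape[0])])
                     for x in X_missing])
```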
6.2 Partial Distance Strategy (PDS)
Dixon [64] recommends the partial distance strategy when XM is sufficiently large and WDS is not recommended. PDS consists of estimating the distance between an observation and a prototype using only the observed values, via the partial distance function. In VKFCM-K-LP, which uses a local adaptive kernelized distance, its partial version is given by

(17) \varphi_P^2(x_i, v_k) = \frac{p}{I_i} \sum_{j=1}^{p} \lambda_{kj} \, \|\phi_j(x_{ij}) - \phi_j(v_{kj})\|^2 \, I_{ij},

where I_i = \sum_{j=1}^{p} I_{ij}, for 1 ≤ i ≤ n and 1 ≤ j ≤ p. The indicator function Iij is defined by

(18) I_{ij} = \begin{cases} 1, & \text{if } x_{ij} \in X_{obs}, \\ 0, & \text{if } x_{ij} \in X_M, \end{cases}

where Xobs and XM are defined in Section 5. Therefore, the objective function for this strategy is given by

(19) J_P = \sum_{k=1}^{K} \sum_{i=1}^{n} (u_{ki})^m \, \varphi_P^2(x_i, v_k),

where \varphi_P^2(x_i, v_k), defined in Eq (17), is called the local adaptive partial kernelized distance, subject to the weight constraint given in Eq (10).
In the first iteration of the VKFCM-K-LP algorithm, prototypes and weights are updated using only the values in Xobs. Prototypes are given by

(20) v_{kj} = \frac{\sum_{i=1}^{n} (u_{ki})^m \, K_j(x_{ij}, v_{kj}) \, x_{ij} \, I_{ij}}{\sum_{i=1}^{n} (u_{ki})^m \, K_j(x_{ij}, v_{kj}) \, I_{ij}},

where K_j is the Gaussian kernel. The weights of the variables are obtained by minimizing the objective function given in Eq (19), which gives Eq (21):

(21) \lambda_{kj} = \frac{\left\{ \prod_{h=1}^{p} \left[ \sum_{i=1}^{n} (u_{ki})^m \, \big(1 - K_h(x_{ih}, v_{kh})\big) \, I_{ih} \right] \right\}^{1/p}}{\sum_{i=1}^{n} (u_{ki})^m \, \big(1 - K_j(x_{ij}, v_{kj})\big) \, I_{ij}},

for 1 ≤ k ≤ K and 1 ≤ j ≤ p. The scale factor p/Ii in Eq (17) has no effect on the calculation of the prototypes [43] in Eq (20), and consequently it does not affect the weight calculation in Eq (21). This scale factor also has no effect on uki, computed by Eq (22), because it appears in both the numerator and the denominator; it can therefore be omitted from the partial distance given in Eq (17):

(22) u_{ki} = \left[ \sum_{h=1}^{K} \left( \frac{\varphi_P^2(x_i, v_k)}{\varphi_P^2(x_i, v_h)} \right)^{\frac{1}{m-1}} \right]^{-1}.

The steps of the PDS version of VKFCM-K-LP are listed in Algorithm 3.
Algorithm 3: VKFCM-K-LP clustering method with the PDS strategy.
1: Initialization
Fix K (number of clusters), 2 ≤ K < n;
Fix m, 1 < m < ∞; fix T (number of iterations); and fix ϵ, 0 < ϵ < 1.
Randomly initialize the fuzzy membership degrees uki;
Uniformly initialize all weights with 1/p.
Do t = 1.
2: Update prototype vector vk, according to Eq (20).
3: Update weight vector λk according to Eq (21).
4: Update fuzzy membership degree uki using Eq (22).
5: IF |Jt+1 − Jt| ≤ ϵ OR t > T
STOP
ELSE do t = t + 1 and go to step 2.
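For illustration, the sketch below is a masked variant of the earlier Python update cycle, following the structure of Eqs (20)-(22): the observed-value indicators simply zero out the contribution of missing entries. It is a sketch under the same naming and Gaussian-kernel assumptions as before, not the authors' implementation.

```python
import numpy as np

def vkfcm_k_lp_pds_step(X, U, V, two_sigma_sq, m=2.0):
    """One PDS update cycle; np.nan in X marks missing entries (Eq 18)."""
    I = (~np.isnan(X)).astype(float)                             # (n, p) indicators I_ij
    Xf = np.nan_to_num(X)                                        # missing -> 0, masked below
    Um = U ** m                                                  # (K, n)

    # Eq (20): prototypes from observed values only.
    Kmat = np.exp(-((Xf[None, :, :] - V[:, None, :]) ** 2) / two_sigma_sq) * I[None, :, :]
    W = Um[:, :, None] * Kmat
    V = (W * Xf[None, :, :]).sum(axis=1) / W.sum(axis=1)

    # Eq (21): weights from observed terms (their product over j equals 1).
    Kmat = np.exp(-((Xf[None, :, :] - V[:, None, :]) ** 2) / two_sigma_sq) * I[None, :, :]
    D = (Um[:, :, None] * 2.0 * (I[None, :, :] - Kmat)).sum(axis=1)      # (K, p)
    lam = np.exp(np.log(D).mean(axis=1, keepdims=True)) / D

    # Eq (22): memberships from partial distances (the p/I_i factor cancels).
    phi2 = (lam[:, None, :] * 2.0 * (I[None, :, :] - Kmat)).sum(axis=2)  # (K, n)
    U = 1.0 / ((phi2[:, None, :] / phi2[None, :, :]) ** (1.0 / (m - 1.0))).sum(axis=1)
    return U, V, lam
```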
6.3 Optimal Completion Strategy (OCS)
The main idea of this strategy is to iteratively compute the missing values in XM as auxiliary variables in the optimization of the objective function JM [43], defined in Eq (23):

(23) J_M = \sum_{k=1}^{K} \sum_{i=1}^{n} (u_{ki})^m \sum_{j=1}^{p} \lambda_{kj} \, 2\big(1 - K_j(x_{ij}, v_{kj})\big),

in which

(24) K_j(x_{ij}, v_{kj}) = \exp\!\left( -\frac{(x_{ij} - v_{kj})^2}{2\sigma_j^2} \right).

Prototype vkj and weight λkj are defined according to Eqs (13) and (14). The missing values are then updated by minimizing, with respect to each xij ∈ XM, the portion of JM that depends on it:

(25) \sum_{k=1}^{K} (u_{ki})^m \, \lambda_{kj} \, 2\big(1 - K_j(x_{ij}, v_{kj})\big).

Thus, the missing value xij ∈ XM is given by Eq (26), as described in [43]:

(26) x_{ij} = \frac{\sum_{k=1}^{K} (u_{ki})^m \, \lambda_{kj} \, K_j(x_{ij}, v_{kj}) \, v_{kj}}{\sum_{k=1}^{K} (u_{ki})^m \, \lambda_{kj} \, K_j(x_{ij}, v_{kj})},

where the membership degree uki is defined as in Eq (15), 1 ≤ i ≤ n and 1 ≤ j ≤ p. In this strategy, missing values are imputed by weighted averages of the cluster prototypes at each iteration. Moreover, the missing values in XM are initialized with random values. The expression in Eq (26) is obtained through the partial derivatives of the objective function given in Eq (23), with prototypes, weights and memberships kept fixed. Algorithm 4 describes the steps of the VKFCM-K-LP method under the OCS approach. The advantage of this approach is that the missing values are imputed during the clustering process.
Algorithm 4: VKFCM-K-LP clustering method with the OCS strategy.
1: Initialization
Fix K (number of clusters), 2 ≤ K < n; fix m, 1 < m < ∞;
fix T (number of iterations); fix ϵ, 0 < ϵ < 1.
Randomly initialize XM;
Randomly initialize the fuzzy membership degrees uki with the restrictions given in (5);
Uniformly initialize all weights as 1/p;
Do t = 1.
2: Update prototype vector vk according to Eq (13).
3: Update weight vector λk according to Eq (14).
4: Update fuzzy membership degree uki according to Eq (15).
5: Update xij ∈ XM according to Eq (26)
6: IF |Jt+1 − Jt| ≤ ϵ OR t > T
STOP
ELSE do t = t + 1 and go to step 2.
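A short Python sketch of the extra OCS step (step 5 of Algorithm 4), read as a fixed-point update in the spirit of Eq (26): each missing entry is replaced by a kernel- and weight-weighted average of the cluster prototypes. The names, the NaN mask and the per-variable Gaussian widths are assumptions for illustration, not the authors' code.

```python
import numpy as np

def ocs_update_missing(X, miss_mask, U, V, lam, two_sigma_sq, m=2.0):
    """Update missing entries of X in place; miss_mask marks X_M.
    U: (K, n) memberships, V: (K, p) prototypes, lam: (K, p) weights."""
    for i, j in zip(*np.where(miss_mask)):
        Kj = np.exp(-((X[i, j] - V[:, j]) ** 2) / two_sigma_sq[j])   # K_j(x_ij, v_kj), k = 1..K
        w = (U[:, i] ** m) * lam[:, j] * Kj                          # per-cluster weights
        X[i, j] = np.dot(w, V[:, j]) / w.sum()                       # weighted prototype average
    return X
```

In practice the missing entries are initialized randomly (step 1 of Algorithm 4), so the current value X[i, j] used inside the kernel is always defined when this update runs.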
7 Experimental design
The performance of the VKFCM-K-LP method proposed by [13] had not been evaluated in the context of incomplete data. Thus, this work adapted VKFCM-K-LP using the three strategies defined by [43] to handle missing data. To evaluate the methods, we implemented a missing value generator in order to create reproducible datasets with absent values on which the methods presented in this work can be evaluated. The implementation of the missing data generation mechanism and the graphical representations were performed with the aid of packages offered by R [65]. The main R packages used were ggplot2, VIM and naniar. The clustering methods were implemented in C. Experiments ran on an Intel Core i3-3217U CPU at 1.80 GHz, with 4 GB of RAM, under the Linux operating system. The code and data for reproducing the results reported here are available in the following repository: https://github.com/AnnyKerol/clustering_for_missing_data.
Three external indices were used to compare clustering results: Corrected Rand index (CR) [47], F-measure [48] and Overall Error Rate of Classification (OERC) [49]. The CR index takes its values from the interval [−1, 1], in which 1 indicates perfect agreement between partitions, whereas values near 0 (or negatives) correspond to cluster agreement found by chance [47]. F-measure takes its values from the [0, 1] interval, in which 1 indicates perfect agreement between partitions. OERC aims to measure the ability of a clustering algorithm to find original classes present in a dataset and takes its values from the [0, 1] interval, in which lower OERC values indicate better clustering results.
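As a concrete illustration (not the authors' code), the Corrected Rand index can be computed with scikit-learn's adjusted_rand_score, and an OERC-style error can be obtained from a confusion matrix after matching each cluster to the class it most frequently contains; the Python snippet below sketches this under those assumptions (the greedy majority matching is a simplification).

```python
import numpy as np
from sklearn.metrics import adjusted_rand_score, confusion_matrix

true_labels = np.array([0, 0, 1, 1, 2, 2])      # a priori classes (toy example)
pred_labels = np.array([0, 0, 1, 2, 2, 2])      # clusters found by the algorithm

cr = adjusted_rand_score(true_labels, pred_labels)   # Corrected Rand index

# OERC-style error: one minus the fraction of points covered when each
# cluster is assigned to its majority class.
cm = confusion_matrix(true_labels, pred_labels)      # rows: classes, columns: clusters
oerc = 1.0 - cm.max(axis=0).sum() / cm.sum()
print(cr, oerc)
```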
At the end of the clustering process of the VKFCM-K-LP method under the OCS approach, we obtain a complete dataset, which resulted in the best values of CR, OERC and F-measure. To verify whether the values imputed by OCS resemble each variable's distribution, we calculated a consistency measure [50], defined by Eq (27), where k denotes the k-th cluster, 1 ≤ j ≤ p indexes the variables to be analyzed, μ_{j0} and σ²_{j0} are the mean and variance of the variable in the dataset with missing values, respectively, and μ_{j1} and σ²_{j1} refer to the mean and variance of the variable in the dataset with imputed values. The better the clustering under the OCS approach, the closer the values given by Eq (27) are to zero, which indicates that the imputed values were consistent with the original scales of the variables in the dataset with missing values.
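For intuition only, the sketch below computes the per-variable means and variances before and after imputation and combines them into a relative-discrepancy score. This combination is an assumed form chosen to illustrate the ingredients named above; it is not necessarily the exact consistency measure of Eq (27).

```python
import numpy as np

def consistency_per_variable(X_missing, X_imputed):
    """Compare each variable's mean/variance before (ignoring NaNs) and after
    imputation; values near zero mean the imputed data kept the original scale.
    The returned score is an assumed relative-discrepancy form, for illustration."""
    mu0 = np.nanmean(X_missing, axis=0)    # mean of the observed values only
    var0 = np.nanvar(X_missing, axis=0)    # variance of the observed values only
    mu1 = X_imputed.mean(axis=0)           # mean after imputation
    var1 = X_imputed.var(axis=0)           # variance after imputation
    return np.abs(mu1 - mu0) / np.sqrt(var0) + np.abs(var1 - var0) / var0
```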
7.1 Missing data generation
The missing value generator used in this study removes values from the complete dataset with a given probability, according to the MCAR mechanism. In the generation of missing values of the MCAR type [35], we assume independence in the joint distribution of (xi, M); therefore, the probability that a value xij is observed does not depend on the values in X or M. Consider a Bernoulli distribution with parameter θ, 0 ≤ θ ≤ 1, for the indicator variable Mi, where Mi = 1 indicates that xi is missing, with probability P(Mi = 1 | xi, θ). If the missingness is independent of X, then P(Mi = 1 | xi, θ) = θ. Since this constant probability does not depend on the values in X, the procedure generates a mechanism of the MCAR type.
In computational terms, a complete dataset X is selected, and subsequently modified to obtain an incomplete dataset, by randomly selecting a specified percentage of its components {xij} that are assigned as missing values. The {xij} values are taken as missing when element mij from the sample generated for the indicator variable M is equal to one, i.e., mij = 1. Therefore the value of {xij} is excluded from the complete dataset and designated as a missing value.
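A minimal Python sketch of this generator: every entry of the complete data matrix is flagged as missing with probability θ through independent Bernoulli trials, which is exactly the MCAR setting described above. The function and variable names are illustrative, not the R implementation used in the paper.

```python
import numpy as np

def make_mcar(X, theta, seed=0):
    """Return a copy of X where each entry is set to NaN with probability theta
    (independent Bernoulli trials), together with the indicator matrix M."""
    rng = np.random.default_rng(seed)
    M = rng.random(X.shape) < theta        # m_ij = 1 marks a value removed as missing
    X_miss = X.astype(float).copy()
    X_miss[M] = np.nan
    return X_miss, M.astype(int)

# Example: remove about 10% of the values from a toy data matrix.
X = np.arange(20, dtype=float).reshape(5, 4)
X_miss, M = make_mcar(X, theta=0.10)
```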
8 Results
This section presents an experimental evaluation of the kernel-based fuzzy clustering method with automatic weighting of the variables using local adaptive distances (VKFCM-K-LP) under the WDS, PDS and OCS approaches. In our experiments, datasets with 5%, 10%, 15% and 20% of missing values were artificially generated using the methodology described in Section 7.1, which means that random variable M was sampled from Bernoulli distributions with parameter θ taken from {0.05, 0.10, 0.15, 0.20}. The clustering algorithms were executed 100 times for each dataset, following a Monte Carlo simulation scheme with random initialization. On each Monte Carlo iteration, the adjustment between clusters and prototypes is observed until convergence, with a tolerance threshold of ϵ = 10^{-10}, or until a maximum number of iterations is reached, i.e. until t > T with T = 300. At the end of the 100 Monte Carlo replications, we select the best solution according to objective function J.
In order to compare the models, we calculated CR, FM and OERC on their best solutions. The averages and standard deviations of these measures were also computed across the 100 repetitions of each algorithm. The number of groups K was set equal to the known number of classes of each dataset. Parameter m was set to 2.0, following a previous study [13]. The hyper-parameters 2σ_j², j = 1, …, p, of the Gaussian kernel functions were estimated as the average between the 0.1 and 0.9 quantiles of ‖xij − xkj‖², for i ≠ k; i, k = 1, …, n [13, 61].
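The quantile-based rule for the kernel widths can be sketched in Python as follows; the per-variable term computed here, 2*sigma_j^2, follows the reading that each width is the average of the 10% and 90% quantiles of the pairwise squared differences of that variable (names are illustrative).

```python
import numpy as np

def two_sigma_sq_per_variable(X):
    """Estimate 2*sigma_j^2 for each variable as the mean of the 0.1 and 0.9
    quantiles of the pairwise squared differences |x_ij - x_kj|^2, i != k."""
    n, p = X.shape
    out = np.empty(p)
    iu = np.triu_indices(n, k=1)                        # all pairs i < k
    for j in range(p):
        d2 = (X[iu[0], j] - X[iu[1], j]) ** 2
        out[j] = 0.5 * (np.quantile(d2, 0.1) + np.quantile(d2, 0.9))
    return out
```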
Additionally, we calculated the consistencies of the variables in the complete datasets when evaluating the VKFCM-K-LP method under the OCS approach and we compared the clustering with the OCS method and the clustering using the imputation of missing values using Mean and Median values. To show the effectiveness of the VKFCM-K-LP clustering methods under the WDS, PDS and OCS approaches, we used two datasets: the Iris Plant dataset [66] and the Thyroid Gland dataset [67], both obtained from the Machine Learning Repository at the University of California, Irvine, United States (UCI Machine Learning Repository) [68]. The choice of these datasets is due to the fact that the groups have different structures, in particular the Thyroid Gland dataset presents greater group overlap than the Iris Plant dataset. The performances of the methods in these datasets are described in the following Sections.
8.1 Iris Plant dataset
The Iris Plant dataset [66] is well known and widely used in the area of pattern recognition. This set has three a priori classes (K = 3), each with 50 observations, for a total of 150 instances. The classes correspond to three species of Iris flowering plants: Iris setosa (Class 1), Iris virginica (Class 2) and Iris versicolor (Class 3). For each species, four variables were observed (p = 4), corresponding to flower measurements: Sepal Length (SL), Sepal Width (SW), Petal Length (PL) and Petal Width (PW).
Fig 2a and 2b show the dispersion of the values of the variables for this dataset and the boxplots for each species. It is possible to observe an apparently linear relationship between variables PL and SL and between variables PW and SW for the versicolor and virginica classes. We also note that, for the versicolor and virginica species, these variables are directly proportional, that is, an increase in the value of SL implies an increase in the value of PL, and the same is observed for SW and PW. In addition, the three species differ with respect to the variables, especially the setosa species, which is linearly separable from the other two.
[Figure omitted. See PDF.]
Fig 2. Scatter plots and boxplots for the Iris Plant dataset.
(a) Length. (b) Width.
https://doi.org/10.1371/journal.pone.0259266.g002
The boxplots in Fig 2a show higher variability in the data of the virginica species for the SL and PL variables. Fig 2b, on the other hand, shows less variability when considering the SW variable.
Fig 3a–3d present the missing value patterns that were artificially generated for the Iris Plant dataset, distributed across its four variables. In each plot, the x axis represents the variables and the y axis represents the observations, with the black regions indicating missing values. The figures also show the number of missing values per variable for each missing percentage, with variable PL having the highest number of missing values for all analyzed percentages. In the datasets with 5%, 10% and 20% of missing values, the SW variable has the fewest missing values. Observations belonging to Class 1 are in the range 1–50, observations belonging to Class 2 are in the range 51–100, and the range 101–150 corresponds to observations belonging to Class 3.
[Figure omitted. See PDF.]
Fig 3. Visualizations of the patterns and frequencies of the missing values by variable for the Iris Plant dataset.
(a) 5% missing. (b) 10% missing. (c) 15% missing. (d) 20% missing.
https://doi.org/10.1371/journal.pone.0259266.g003
Table 1 shows CR, FM and OERC corresponding to the best solutions obtained in the 100 Monte Carlo replications of the VKFCM-K-LP clustering algorithm with the WDS, PDS and OCS strategies. For all the missing value percentages studied, the CR and FM indices are close to 1, which indicates a good agreement between the a priori classes and the groups provided by the clustering methods. For 5% of missing values, the best performance was observed for the PDS method. However, when analyzing the data with 10%, 15% and 20% of missing values, the PDS method presented the worst performance. In general, increasing the percentage of missing values in the datasets affects the performance of the algorithms, as expected. This behavior is also verified for the PDS approach when increasing the percentage from 5% to 10% and for the WDS and OCS approaches when the percentage goes from 15% to 20%.
[Figure omitted. See PDF.]
Table 1. Performance of the VKFCM-K-LP clustering algorithm with the WDS, PDS and OCS strategies for the dataset Iris Plant.
https://doi.org/10.1371/journal.pone.0259266.t001
Aiming to investigate the predictive power of the VKFCM-K-LP algorithm under the three approaches for handling missing data, Table 2 shows the confusion matrices obtained for each method, and for each percentage of missing values considered.
[Figure omitted. See PDF.]
Table 2. Confusion matrices obtained by the VKFCM-K-LP algorithm with the WDS, PDS and OCS strategies using 5%, 10%, 15% and 20% of missing values.
https://doi.org/10.1371/journal.pone.0259266.t002
The columns contain the original classes and the rows contain the clusters provided by the clustering methods, which were identified as Cluster 1 (setosa), Cluster 2 (virginica) and Cluster 3 (versicolor).
The confusion matrices in Table 2 show that for all clustering methods and for all percentages of missing values considered, observations belonging to the setosa species in the dataset Iris Plant were properly grouped into Cluster 1. This is expected, as this species is separable from the other two species, as shown in Fig 2a and 2b. It can be also noted that Clusters 2 and 3 showed higher numbers of incorrectly clustered observations, which is expected because these groups are not linearly separable as observed for Cluster 1.
Tables 3–5 provide the weights of the variables in each cluster. In general, it is observed that in the three approaches and for all the percentages of missing values, variables PL and PW were the most relevant for the construction of the clusters. Variable PL obtained the greatest relevance in all groups, even with the largest number of missing values, as shown in Fig 3a–3d. However, there is a decrease in the weights of the PL variable with the increase in the percentage of missing values in Cluster 2 for the PDS and OCS methods. This behavior is also observed for the weights of the PW variable in Cluster 1 in the PDS method. For the WDS strategy, as the percentage of missing values increases, variable PW becomes more relevant.
[Figure omitted. See PDF.]
Table 3. Weights of the variables in each group adjusted by the VKFCM-K-LP algorithm with the WDS strategy under different percentages of missing values.
https://doi.org/10.1371/journal.pone.0259266.t003
[Figure omitted. See PDF.]
Table 4. Weights of the variables in each group adjusted by the VKFCM-K-LP algorithm with the PDS strategy under different percentages of missing values.
https://doi.org/10.1371/journal.pone.0259266.t004
[Figure omitted. See PDF.]
Table 5. Weights of the variables in each group adjusted by the VKFCM-K-LP algorithm with the OCS strategy under different percentages of missing values.
https://doi.org/10.1371/journal.pone.0259266.t005
Fig 4 shows the performance of the OCS, PDS and WDS algorithms over the 100 Monte Carlo repetitions. The WDS strategy had the largest deviations in error rate when compared to the others. For the PDS approach, the average error rate both increased and decreased across the analyzed percentages. For the OCS strategy, the error rate increases from 10% of missing values onwards; this method presents the most consistent behavior, i.e. as the percentage of missing values increases, the error rate also increases. The OCS strategy also showed the smallest deviations around the average error rate when compared with the WDS and PDS strategies.
[Figure omitted. See PDF.]
Fig 4. Average error rates after 100 repetitions for the Iris Plant dataset.
https://doi.org/10.1371/journal.pone.0259266.g004
Analyzing the measures of variable consistency from Table 6, considering the complete dataset obtained after clustering with the VKFCM-K-LP algorithm, together with the OCS strategy, we have that these measures are very close to zero. This shows a good quality in the grouping, that is, the new values imputed through the OCS strategy were not discrepant in relation to the original scale of the variables of the Iris Plant dataset.
[Figure omitted. See PDF.]
Table 6. Consistency of variables for the dataset Iris Plant.
https://doi.org/10.1371/journal.pone.0259266.t006
8.2 Thyroid Gland dataset
In this Section, we evaluate the three missing data approaches using the Thyroid Gland dataset [67]. This dataset has three a priori classes (K = 3): normal (Class 1) with 150 observations, hyper (Class 2), with 35 observations and hypo (Class 3) with 30 observations. This dataset has n = 215 observations and five variables (p = 5): T3-resin uptake test (T3), Total Serum thyroxin (TTS), Total serum triiodothyronine (TST), basal thyroid-stimulating hormone (TSH) and Maximal absolute difference of TSH value after injection of 200 micrograms of thyrotropin-releasing hormone (DTSH).
Fig 5a and 5b present the dispersion and boxplot graphs for T3 plotted against TST and TST versus TTS. Class 2 is more dispersed than the others, which is evidenced in the boxplots for the analyzed variables. Fig 5b shows a linear relationship between variables TST and TTS for classes 1 and 2. In addition, these classes have less variability when considering the TST variable.
[Figure omitted. See PDF.]
Fig 5. Scatter plots and boxplots for the Thyroid Gland dataset.
(a) TST. (b) TTS.
https://doi.org/10.1371/journal.pone.0259266.g005
Fig 6a–6d show the missing values distributed across the five variables in the Thyroid Gland dataset. Variable T3 presents a greater number of missing values for the 15% and 20% percentages. For all analyzed datasets, the DTSH variable has the smallest number of missing values. Additionally, the missing values are well distributed among the variables. Observations in the range 1–150 represent class 1 (normal), the range 151–185 corresponds to class 2 (hyper), and the range 186–215 contains the class 3 (hypo) observations.
[Figure omitted. See PDF.]
Fig 6. Graphs of missing value patterns and frequencies per variable for the Thyroid Gland dataset.
(a) 5% missing. (b) 10% missing. (c) 15% missing. (d) 20% missing.
https://doi.org/10.1371/journal.pone.0259266.g006
Table 7 shows the best results among the 100 repetitions of the VKFCM-K-LP algorithm under the three types of strategies for missing data. For 5% of missing values, the best performances were obtained by the WDS method, presenting a CR equal to 0.818 and an FM equal to 0.943, which means there was a good agreement between the a priori classes and the clusters provided by the clustering algorithm. In this context, the OERC measure was equal to 5.5%. For the PDS and OCS strategies, the increase in the number of missing values in the Thyroid Gland dataset influences the quality of the clustering, as there was a decrease in the values of the studied measures. The PDS strategy showed the best performances according to the quality measures analyzed for all percentages of missing values.
[Figure omitted. See PDF.]
Table 7. Performance of the VKFCM-K-LP clustering algorithm with the WDS, PDS and OCS strategies for the Thyroid Gland dataset.
https://doi.org/10.1371/journal.pone.0259266.t007
To build the confusion matrices in Table 8, the clusters provided by the algorithm were identified as Cluster 1 (normal), Cluster 2 (hyper) and Cluster 3 (hypo). The confusion matrices show that the clustering algorithm has great difficulty in identifying Clusters 1 and 3 under all the methods analyzed. These clusters correspond to the normal and hypo classes, which, as seen in Fig 5a and 5b, overlap more than Class 2, hindering the performance of the clustering method.
[Figure omitted. See PDF.]
Table 8. Confusion matrices obtained by VKFCM-K-LP with the WDS, PDS and OCS strategies using 5%, 10%, 15% and 20% of missing values.
https://doi.org/10.1371/journal.pone.0259266.t008
Fig 7 presents the average error rates for the 100 repetitions of the VKFCM-K-LP algorithm, with the WDS, PDS and OCS strategies in the Thyroid Gland dataset. The average error rates for PDS and OCS showed an increasing behavior along the percentages of missing values evaluated. For 20% of missing values, the average Total Error Rate of classification for these methods was approximately 0.20.
[Figure omitted. See PDF.]
Fig 7. Average results of 100 repetitions for the error rate with Thyroid Gland dataset.
https://doi.org/10.1371/journal.pone.0259266.g007
The PDS method presents lower error rates from 5% to 15% of missing values, when compared to the OCS strategy. The largest variations are observed in the WDS method for 5% and 15% of missing values. This method obtained an increasing error rate between 5% and 10%, while its error decreased starting from 10%.
The weights of the variables in each cluster, under the WDS, PDS and OCS approaches, listed in Tables 9–11, show that the TST and TSH variables were the most relevant to compose Cluster 1. For Cluster 2, the most important variables were TSH and DTSH and, for Cluster 3, the most relevant variables were TTS and TST. In addition, in the WDS strategy the TTS and TSH variables became more relevant for the construction of Clusters 3 and 2, respectively, as the number of missing values increased. This behavior is also observed for the TST variable in Cluster 1 under the PDS and OCS strategies. In contrast, with the increase in the number of missing values in the DTSH variable, there was a decrease in its importance for the construction of Cluster 2 with the OCS strategy.
[Figure omitted. See PDF.]
Table 9. Weights of the variables in each group found by the VKFCM-K-LP algorithm with the WDS strategy under different percentages of missing values.
https://doi.org/10.1371/journal.pone.0259266.t009
[Figure omitted. See PDF.]
Table 10. Weights of the variables in each group found by the VKFCM-K-LP algorithm with the PDS strategy under different percentages of missing values.
https://doi.org/10.1371/journal.pone.0259266.t010
[Figure omitted. See PDF.]
Table 11. Weights of the variables in each group found by the VKFCM-K-LP algorithm with the OCS strategy under different percentages of missing values.
https://doi.org/10.1371/journal.pone.0259266.t011
In order to assess the consistency of the variables in each cluster, as shown in Table 12, we used the datasets before and after clustering, with the missing values imputed using the OCS strategy. In this context, the consistencies obtained for the variables in the groups were close to zero, which indicates a good performance of the OCS method when imputing the missing values. Additionally, the greatest consistencies were found in Clusters 2 and 3 for all percentages of missing values evaluated.
[Figure omitted. See PDF.]
Table 12. Consistencies of variables for the Thyroid Gland dataset.
https://doi.org/10.1371/journal.pone.0259266.t012
8.3 Comparison between imputation methods
This Section compares VKFCM-K-LP using the OCS method with Imputation via Mean and Median. Fig 8a and 8b show the accuracies obtained using the OCS method and by Imputation via Mean and Median for the Iris Plant and Thyroid Gland datasets, when the amount of missing values varies from 5 to 20%.
[Figure omitted. See PDF.]
Fig 8. Performance graphs of the methods for different percentages of missing values.
(a) Iris Plant. (b) Thyroid Gland.
https://doi.org/10.1371/journal.pone.0259266.g008
For the Imputation via Mean and Median values, missing values are filled using the mean or median estimates of the observed values in the datasets for each variable, before applying the clustering algorithm.
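A minimal Python/NumPy sketch of this baseline: each missing entry is replaced by the mean or median of the observed values of its variable before the clustering algorithm is applied. The helper name and the NaN encoding are illustrative assumptions.

```python
import numpy as np

def impute_columnwise(X_miss, statistic=np.nanmean):
    """Fill NaNs in each column with a column statistic (np.nanmean or np.nanmedian)."""
    X = X_miss.copy()
    fill = statistic(X, axis=0)                  # per-variable mean or median of observed values
    idx = np.where(np.isnan(X))
    X[idx] = np.take(fill, idx[1])               # replace each NaN by its column's statistic
    return X

X_mean = impute_columnwise(X_miss, np.nanmean)       # X_miss as produced by the MCAR sketch above
X_median = impute_columnwise(X_miss, np.nanmedian)
```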
For the Iris Plant dataset with 5% of missing values, accuracy was close to 0.90, which shows a good performance of the methods when imputing missing values. However, for the Thyroid Gland dataset, considering the same percentage of missing values, there are differences in accuracy as shown in Fig 8a. This difference between the two datasets is expected, because the classes in the Thyroid Gland dataset are more overlapped than the classes in the Iris Plant dataset. In order to visualize and understand the data overlap, we applied Principal Component Analysis (PCA). Fig 9a and 9b show the resulting projections for the first two components. In PCA, the components are orthogonal and sorted according to how much variance they explain, so it is possible to identify patterns and extract features [69]. Even after applying PCA, the classes in the Thyroid Gland dataset are more overlapped than the classes in the Iris Plant dataset. This makes it harder to group observations in the Thyroid Gland dataset. This difficulty is accentuated with the increase in the number of missing values in the dataset as shown in Fig 8b. It is also worth mentioning that classes overlap less in the Iris Plant dataset, which favors the performance of the VKFCM-K-LP clustering method with the OCS strategy, even when the percentage of missing values increases, as shown in Fig 8b.
[Figure omitted. See PDF.]
Fig 9. Principal component analysis applied to both datasets.
(a) Iris Plant. (b) Thyroid Gland.
https://doi.org/10.1371/journal.pone.0259266.g009
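A minimal sketch of such a two-component projection is shown below. It assumes a complete numeric matrix `X` and a NumPy array of a priori class labels `y` (both hypothetical names) and uses scikit-learn; standardizing the variables beforehand is an illustrative choice and need not match the exact pre-processing behind Fig 9.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def plot_pca_projection(X, y, title):
    """Project the observations onto the first two principal components and
    color them by their a priori class, in the spirit of Fig 9."""
    y = np.asarray(y)
    # Standardization before PCA is an assumption made for this sketch.
    scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
    for label in np.unique(y):
        mask = y == label
        plt.scatter(scores[mask, 0], scores[mask, 1], label=f"Class {label}", alpha=0.7)
    plt.xlabel("First principal component")
    plt.ylabel("Second principal component")
    plt.title(title)
    plt.legend()
    plt.show()
```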
For 10%, 15% and 20% of imputed missing values, the clustering accuracies with the imputation by Mean and Median values in the Thyroid Gland dataset are concentrated around very similar values, which does not happen in the Iris Plant dataset (Fig 8a).
Tables 13–16 show the consistencies of the variables when the missing values are imputed via the mean and the median for the two analyzed datasets. The consistencies obtained with the Mean and Median strategies were higher than those obtained with the OCS strategy (Tables 6–12), meaning that mean and median imputations depart more from the original scale of the variables than the values obtained with the OCS approach.
[Figure omitted. See PDF.]
Table 13. Consistency of variables in the VKFCM-K-LP clustering with the imputation of missing values using mean values for the Iris Plant dataset.
https://doi.org/10.1371/journal.pone.0259266.t013
[Figure omitted. See PDF.]
Table 14. Consistency of variables in the VKFCM-K-LP clustering with the imputation of missing values using median values for the Iris Plant dataset.
https://doi.org/10.1371/journal.pone.0259266.t014
[Figure omitted. See PDF.]
Table 15. Consistency of variables in the VKFCM-K-LP clustering with the imputation of missing values using mean values for the Thyroid Gland dataset.
https://doi.org/10.1371/journal.pone.0259266.t015
[Figure omitted. See PDF.]
Table 16. Consistency of variables in the VKFCM-K-LP clustering with the imputation of missing values using median values for the Thyroid Gland dataset.
https://doi.org/10.1371/journal.pone.0259266.t016
To show the dispersion of the values imputed by the strategies mentioned above, variables T3 and TST were selected from the Thyroid Gland dataset with 5% and 15% of missing values; these were the variables with the largest numbers of missing values generated (see Fig 6b and 6c). It is therefore useful to visualize graphically how the imputed values relate to the observed ones, as shown in Fig 10a–10f.
[Figure omitted. See PDF.]
Fig 10. Scatter plots and boxplots for the Thyroid Gland dataset considering the different imputation methods.
(a) Imputation via OCS with 5% of missing values. (b) Mean imputation with 5% of missing values. (c) Median imputation with 5% of missing values. (d) Imputation via OCS with 15% of missing values. (e) Mean imputation with 15% of missing values. (f) Median imputation with 15% of missing values.
https://doi.org/10.1371/journal.pone.0259266.g010
Boxplots for the imputed values and for the complete values (in blue) are also shown. Red and yellow represent imputed values for the T3 and TST variables, respectively; points for which both variables were imputed are shown in black. Fig 10a–10c show that, with 5% of imputed missing values, most of the resulting points lie close to the distribution of the complete data. The boxplots of the imputed values for the TST variable are very similar to those of the complete values, that is, the dispersion of the data before and after imputation shows no marked discrepancies.
Regarding the T3 variable, the imputed values showed less variability than the complete observations. For the datasets with 15% of imputed values (Fig 10d–10f), the values obtained by the OCS strategy were better distributed among the groups than the values imputed with the mean and median estimates.
With mean and median imputation, the imputed values concentrate on a single value per variable, forming a horizontal line in the scatter plots; consequently, among the imputed points the correlation between variables T3 and TST is zero. Tabachnick et al. [70] argue that imputing missing values with measures of central tendency, such as the mean, distorts the correlation between the variables and underestimates their variance.
Indeed, when analyzing the correlation between variables T3 and TST, we obtain ρ = −0.528 and ρ = −0.529 after imputation with the mean and the median, respectively, while the correlation for the original dataset (without missing values) is ρ = −0.536. The variability of the data is also affected: the standard deviations (sd) of the T3 and TST variables in the complete dataset are sd = 13.145 and sd = 1.419, respectively, whereas for the dataset with 15% of imputed values they are sd = 11.87 and sd = 1.35, which indicates variance underestimation.
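This shrinkage effect can be reproduced with a small simulation. The sketch below uses purely synthetic data (not the Thyroid Gland measurements): two negatively correlated variables standing in for T3 and TST have roughly 15% of their entries deleted completely at random and are then mean-imputed, and the correlation and standard deviations are compared before and after.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 215  # sample size of the same order as the Thyroid Gland dataset (assumption)

# Two negatively correlated synthetic variables standing in for T3 and TST.
t3 = rng.normal(110.0, 13.0, size=n)
tst = 10.0 - 0.06 * t3 + rng.normal(0.0, 0.8, size=n)
X = np.column_stack([t3, tst])

# Delete roughly 15% of the entries completely at random.
X_missing = X.copy()
X_missing[rng.random(X.shape) < 0.15] = np.nan

# Mean imputation per column, using only the observed values.
col_means = np.nanmean(X_missing, axis=0)
X_imputed = np.where(np.isnan(X_missing), col_means, X_missing)

print("correlation (complete):    ", np.corrcoef(X.T)[0, 1])
print("correlation (mean-imputed):", np.corrcoef(X_imputed.T)[0, 1])
print("sd (complete):    ", X.std(axis=0, ddof=1))
print("sd (mean-imputed):", X_imputed.std(axis=0, ddof=1))
```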
Therefore, although mean and median imputation are easy to implement, the resulting clusterings are not satisfactory: the correlation structure of the variables is modified and, consequently, the imputed values may no longer be related to their group of origin, as shown in Fig 10b, 10e and 10f. Finally, the VKFCM-K-LP method with the OCS strategy showed the best performance in identifying the a priori classes, according to the accuracies in Fig 8a and 8b, and the set of values imputed by this strategy is the closest to the observations of the original dataset shown in Fig 5a.
9 Conclusions
The problem of missing data arises in many areas of science, since statistical techniques used for data analysis, such as clustering, were originally proposed for complete datasets. One way to address this issue is to adapt the clustering methods so that they can handle incomplete datasets. In this work, the VKFCM-K-LP clustering method was studied with three strategies for dealing with missing data: WDS, PDS and OCS. To evaluate the clustering methods in the presence of missing data, two benchmark datasets were used: Iris Plant and Thyroid Gland.
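For reference, a minimal sketch of the partial (squared Euclidean) distance underlying the PDS idea of Hathaway and Bezdek [43] is given below. This is an illustrative Python fragment, not the kernelized, adaptively weighted distance actually used by VKFCM-K-LP.

```python
import numpy as np

def partial_squared_distance(x, v):
    """Squared Euclidean distance between an incomplete observation x (NaN
    marks a missing value) and a fully observed prototype v, computed over
    the observed coordinates only and re-scaled by the reciprocal of the
    proportion of observed values, following Hathaway and Bezdek [43]."""
    x = np.asarray(x, dtype=float)
    v = np.asarray(v, dtype=float)
    observed = ~np.isnan(x)
    if not observed.any():
        raise ValueError("the observation has no observed coordinates")
    rescale = x.size / observed.sum()
    return rescale * np.sum((x[observed] - v[observed]) ** 2)

# Example: a 4-dimensional observation with one missing coordinate.
# partial_squared_distance([5.1, np.nan, 1.4, 0.2], [5.0, 3.4, 1.5, 0.2])
```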
From these datasets, new datasets with 5%, 10%, 15% and 20% of missing values were artificially generated, and the clustering results were evaluated according to the CR, FM and OERC indices. For the Iris Plant dataset, the results were satisfactory, with CR and FM close to 1 and OERC close to zero for all analyzed methods and percentages of missing values, showing a good performance of the VKFCM-K-LP method under the WDS, PDS and OCS approaches in identifying the a priori classes. For 5% of missing values, the best performance of the VKFCM-K-LP algorithm was observed with the PDS strategy; however, the performance graph over the 100 repetitions of the algorithm shows that, for 10%, 15% and 20% of missing values, this strategy had the poorest performance. Additionally, the confusion matrices showed that observations belonging to Class 1 (setosa) in the Iris Plant dataset were properly grouped.
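The three evaluation indices can be computed as sketched below: CR corresponds to the adjusted (corrected) Rand index and FM to the Fowlkes-Mallows index, while the OERC computation shown here, which matches clusters to a priori classes with the Hungarian algorithm before measuring the error rate, is an assumption about how the error rate is obtained rather than the paper's exact definition.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import adjusted_rand_score, fowlkes_mallows_score, confusion_matrix

def clustering_scores(y_true, y_pred):
    """Return (CR, FM, OERC) for a hard partition.

    CR is the adjusted (corrected) Rand index and FM the Fowlkes-Mallows
    index. OERC is computed here as one minus the accuracy obtained after
    matching clusters to a priori classes with the Hungarian algorithm."""
    cr = adjusted_rand_score(y_true, y_pred)
    fm = fowlkes_mallows_score(y_true, y_pred)
    cm = confusion_matrix(y_true, y_pred)
    rows, cols = linear_sum_assignment(-cm)  # maximize correctly matched counts
    oerc = 1.0 - cm[rows, cols].sum() / cm.sum()
    return cr, fm, oerc

# Example with three clusters, identical up to a relabeling:
# clustering_scores([0, 0, 1, 1, 2, 2], [1, 1, 0, 0, 2, 2])  # -> (1.0, 1.0, 0.0)
```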
Regarding the weights of the variables in each group, variable PL was the most relevant, even at the higher percentages of missing values. The consistency measures of the variables for the datasets obtained from the VKFCM-K-LP clustering with the OCS strategy were close to zero, indicating a good clustering quality; that is, the values imputed by the OCS method were not discrepant with respect to the original scale of the variables.
In the generation of missing values for the Thyroid Gland dataset, variable T3 received the largest number of missing entries at the 15% and 20% levels. The best quality measures for this dataset were observed with the PDS method. In addition, the performance graph over the 100 repetitions of the algorithm showed an increasing average error rate for all methods. The confusion matrices for the Thyroid Gland dataset showed an overlap between Classes 1 and 2 in all analyzed methods, corresponding to a larger number of incorrectly grouped observations when compared with Class 3.
Variables TSH and DTSH obtained the highest weights in the construction of Cluster 2 in all analyzed cases. In contrast, variable T3 had little influence on the formation of the groups. The consistencies of the variables obtained for the OCS method in the Thyroid Gland dataset were close to zero, which means a good performance of the method in imputing the missing values.
When comparing the clustering results obtained with the OCS method and with the Mean and Median imputation methods, the best accuracy was observed for the OCS method at all considered percentages of missing values in both analyzed datasets. The VKFCM-K-LP clusterings based on mean and median imputation were not satisfactory, because the imputed values altered the overall correlation among the variables in the dataset and distorted the variability of the data, degrading the quality of the clusters.
In general, the VKFCM-K-LP clustering algorithm, combined with the WDS, PDS and OCS missing-data strategies, produced satisfactory results for datasets with 5%, 10%, 15% and 20% of missing values. The best performances of the clustering method were obtained with the PDS and OCS strategies. Under the OCS approach, new datasets were derived, with the missing values estimated within the optimization process, and the resulting clusterings outperformed those obtained by imputing the missing values with the mean or the median of the observed values.
Acknowledgments
The authors thank the editor and the anonymous referees for their comments and suggestions. The authors thank Dr. Telmo de Menezes e Silva Filho for proofreading and for technical comments on the revised version of this manuscript. The authors also thank CAPES and CNPq, Brazil.
1. Estivill-Castro V. Why so many clustering algorithms: a position paper. SIGKDD explorations. 2002;4(1):65–75.
2. Shen H, Yang J, Wang S, Liu X. Attribute weighted mercer kernel based fuzzy clustering algorithm for general non-spherical datasets. Soft Computing. 2006;10(11):1061–1073.
3. Jain AK, Murty MN, Flynn PJ. Data clustering: a review. ACM computing surveys (CSUR). 1999;31(3):264–323.
4. Xu R, Wunsch D II. Survey of clustering algorithms. IEEE Transactions on Neural Networks. 2005;16(3):645–678. pmid:15940994
5. Filippone M, Camastra F, Masulli F, Rovetta S. A survey of kernel and spectral methods for clustering. Pattern recognition. 2008;41(1):176–190.
6. Ward J, Joe H. Hierarchical grouping to optimize an objective function. Journal of the American statistical association. 1963;58(301):236–244.
7. Zadeh LA. Fuzzy sets. Information and control. 1965;8(3):338–353.
8. Bezdek JC, Ehrlich R, Full W. FCM: The fuzzy c-means clustering algorithm. Computers & Geosciences. 1984;10(2-3):191–203.
9. Höppner F, Klawonn F, Kruse R, Runkler T. Fuzzy cluster analysis: methods for classification, data analysis and image recognition. John Wiley & Sons; 1999.
10. Evers FT, Höppner F, Klawonn F, Kruse R, Runkler T. Fuzzy cluster analysis: methods for classification, data analysis and image recognition. John Wiley & Sons; 1999.
11. Kumar PG, Victoire TAA, Renukadevi P, Devaraj D. Design of fuzzy expert system for microarray data classification using a novel genetic swarm algorithm. Expert Systems with Applications. 2012;39(2):1811–1821.
12. Bezdek JC. Pattern recognition with fuzzy objective function algorithms. Plenum, New York; 1981.
13. Ferreira MR, De Carvalho FDA. Kernel fuzzy c-means with automatic variable weighting. Fuzzy Sets and Systems. 2014;237:1–46.
14. Girolami M. Mercer kernel-based clustering in feature space. IEEE Transactions on Neural Networks. 2002;13(3):780–784. pmid:18244475
15. Camastra F, Vinciarelli A. Machine Learning for Audio, Image and Video Analysis. Springer Verlag; 2008. Available from: http://infoscience.epfl.ch/record/145949.
16. Zhang DQ, Chen SC. Kernel-based fuzzy and possibilistic c-means clustering. In: Proceedings of the International Conference Artificial Neural Network. vol. 122; 2003. p. 122–125.
17. Zhang DQ, Chen SC. A novel kernelized fuzzy c-means algorithm with application in medical image segmentation. Artificial intelligence in medicine. 2004;32(1):37–50. pmid:15350623
18. Inokuchi R, Miyamoto S. LVQ clustering and SOM using a kernel function. In: Proceedings of IEEE International Conference on Fuzzy Systems. vol. 3; 2004. p. 1497–1500.
19. Macdonald D, Fyfe C. The kernel self-organizing map. In: Fourth International Conference on Knowledge-Based Intelligent Engineering Systems and Allied Technologies. vol. 1; 2000. p. 317–320.
20. Qinand AK, Suganthan PN. Kernel neural gas algorithms with application to cluster analysis. In: ICPR—17th International Conference on Pattern Recognition (ICPR’04). vol. 4; 2004. p. 617–620.
21. Kim DW, Lee KY, Lee D, Lee KH. Evaluation of the performance of clustering algorithms in kernel-induced feature space. Pattern Recognition. 2005;38(4):607–611.
22. Kim DW, Lee KY, Lee D, Lee KH. A kernel-based subtractive clustering method. Pattern Recognition Letters. 2005;26:879–891.
23. Borer S, Gerstner W. A new kernel clustering algorithm. In: Proceedings of the 9th International Conference on Neural Information Processing. ICONIP’02. vol. 5; 2002. p. 2527–2531.
24. Camastra F, Verri A. A novel kernel method for clustering. IEEE transactions on pattern analysis and machine intelligence. 2005;27(5):801–805. pmid:15875800
25. Ding Y, Fu X. Kernel-based fuzzy c-means clustering algorithm based on genetic algorithm. Neurocomputing. 2016;188(Supplement C):233–238.
26. Xiang D, Tang T, Hu C, Li Y, Su Y. A Kernel Clustering Algorithm With Fuzzy Factor: Application to SAR Image Segmentation. IEEE Geosci Remote Sensing Lett. 2014;11(7):1290–1294.
27. Yang D, Fei R, Yao J, Gong M. Two-stage SAR image segmentation framework with an efficient union filter and multi-objective kernel clustering. Appl Soft Comput. 2016;44:30–44.
28. Haykin S. Neural networks: a comprehensive foundation. Prentice Hall PTR; 1994.
29. Mercer J. XVI. Functions of positive and negative type, and their connection with the theory of integral equations. Philosophical Transactions of the Royal Society of London, Series A. 1909;209(441–458):415–446.
30. Farhangfar A, Kurgan LA, Pedrycz W. A novel framework for imputation of missing values in databases. IEEE Transactions on Systems, Man, and Cybernetics-Part A: Systems and Humans. 2007;37(5):692–709.
31. Lakshminarayan K, Harp SA, Samad T. Imputation of missing data in industrial databases. Applied intelligence. 1999;11(3):259–275.
32. Kurgan L, Cios KJ, Sontag M, Accurso FJ. Mining the cystic fibrosis data. Next generation of data-mining applications. 2005; p. 415–444.
33. Harel O, Zhou XH. Multiple imputation: review of theory, implementation and software. Statistics in medicine. 2007;26(16):3057–3077. pmid:17256804
34. Rubin DB. Inference and missing data. Biometrika. 1976;63(3):581–592.
35. Little RJ, Rubin DB. Statistical analysis with missing data. vol. 333. John Wiley & Sons; 2014.
36. Schafer JL. Multiple imputation: a primer. Statistical methods in medical research. 1999;8(1):3–15. pmid:10347857
37. Green PD, Barker J, Cooke M, Josifovski L. Handling Missing and Unreliable Information in Speech Recognition. In: AISTATS; 2001.
38. Sebestyen GS. Decision-making processes in pattern recognition. Macmillan; 1962.
39. Dempster AP, Laird NM, Rubin DB. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society: Series B (Methodological). 1977;39(1):1–22.
40. Miyamoto S, Takata O, Unayahara K. Handling missing values in fuzzy c-means. In: Proceedings of the Korean Institute of Intelligent Systems Conference. Korean Institute of Intelligent Systems; 1998. p. 139–142.
41. Wagstaff K. Clustering with missing values: No imputation required. In: Classification, clustering, and data mining applications. Springer; 2004. p. 649–658.
42. Poddar S, Jacob M. Clustering of data with missing entries using non-convex fusion penalties. arXiv preprint arXiv:1709.01870. 2017.
43. Hathaway RJ, Bezdek JC. Fuzzy c-means clustering of incomplete data. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics). 2001;31(5):735–744. pmid:18244838
44. Li D, Gu H, Zhang L. A fuzzy c-means clustering algorithm based on nearest-neighbor intervals for incomplete data. Expert Systems with Applications. 2010;37(10):6942–6947.
45. Li D, Zhong C, Li J. An attribute weighted fuzzy c-means algorithm for incomplete data sets. In: 2012 International Conference on System Science and Engineering (ICSSE). IEEE; 2012. p. 449–453.
46. Li T, Zhang L, Lu W, Hou H, Liu X, Pedrycz W, et al. Interval kernel fuzzy c-means clustering of incomplete data. Neurocomputing. 2017;237:316–331.
47. Hubert L, Arabie P. Comparing partitions. Journal of classification. 1985;2(1):193–218.
48. Baeza-Yates R, Ribeiro-Neto B. Modern information retrieval. New York: ACM Press; Harlow, England: Addison-Wesley; 2011.
49. Breiman L, Friedman JH, Olshen RA, Stone CJ. Classification and regression trees. Wadsworth; 1984.
50. Lee L, Berger T, Aviczer E. Reliable online human signature verification systems. IEEE Transactions on Pattern Analysis and Machine Intelligence. 1996;18(6):643–647.
51. Müller KR, Mika S, Rätsch G, Tsuda K, Schölkopf B. An introduction to kernel-based learning algorithms. IEEE transactions on neural networks. 2001;12(2). pmid:18244377
52. Schölkopf B, Smola A, Müller KR. Nonlinear component analysis as a kernel eigenvalue problem. Neural computation. 1998;10(5):1299–1319.
53. Scholkopf B, Smola AJ. Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT press; 2001.
54. Vapnik V. The nature of statistical learning theory. Springer science & business media; 2013.
55. Wu Zd, Xie Wx, Yu Jp. Fuzzy c-means clustering algorithm based on kernel method. In: Proceedings Fifth International Conference on Computational Intelligence and Multimedia Applications. ICCIMA 2003. IEEE; 2003. p. 49–54.
56. Graepel T, Obermayer K. Fuzzy topographic kernel clustering. In: Proceedings of the 5th GI Workshop Fuzzy Neuro Systems. vol. 98; 1998. p. 90–97.
57. Graves D, Pedrycz W. Kernel-based fuzzy clustering and fuzzy clustering: A comparative experimental study. Fuzzy sets and systems. 2010;161(4):522–543.
58. Chen DZS. Fuzzy clustering using kernel method. IEEE, Nanjing, China. 2002;.
59. Diday E. Classification automatique avec distances adaptatives. RAIRO Informatique Computer Science. 1977;11(4):329–349.
60. Gustafson DE, Kessel WC. Fuzzy clustering with a fuzzy covariance matrix. In: 1978 IEEE conference on decision and control including the 17th symposium on adaptive processes. IEEE; 1979. p. 761–766.
61. Ferreira MR, de Carvalho FdA, Simões EC. Kernel-based hard clustering methods with kernelization of the metric and automatic weighting of the variables. Pattern Recognition. 2016;51:310–321.
62. Baraldi AN, Enders CK. An introduction to modern missing data analyses. Journal of school psychology. 2010;48(1):5–37. pmid:20006986
63. Rubin DB. Multiple imputation for nonresponse in surveys. vol. 81. John Wiley & Sons; 2004.
64. Dixon JK. Pattern recognition with partly missing data. IEEE Transactions on Systems, Man, and Cybernetics. 1979;9(10):617–621.
65. R Core Team. R: A Language and Environment for Statistical Computing; 2020. Available from: http://www.R-project.org/.
66. Anderson E. The irises of the Gaspe Peninsula. Bulletin of the American Iris society. 1935;59:2–5.
67. Quinlan JR. Induction of decision trees. Machine learning. 1986;1(1):81–106.
68. Bache K, Lichman M. UCI machine learning repository; 2013.
69. Ding C, He X. K-means clustering via principal component analysis. In: Proceedings of the twenty-first international conference on Machine learning. ACM; 2004. p. 29.
70. Tabachnick BG, Fidell LS, Ullman JB. Using multivariate statistics. vol. 5. Pearson Boston, MA; 2007.