Tian Yang 1 and Zhaowen Li 2 and Xiaoqing Yang 3
Recommended by Chong Lin
1, College of Science, Central South University of Forestry and Technology, Changsha 410004, China
2, College of Mathematics and Computer Science, Guangxi University for Nationalities, Nanning 530006, China
3, School of Economics and Management, Changsha University of Science and Technology, Changsha 410004, China
Received 31 March 2012; Revised 12 July 2012; Accepted 16 July 2012
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
1. Introduction
With the development of technology, the amount of information grows at an astonishing rate. It is a great challenge to extract valuable knowledge from such massive information. Rough set theory was proposed by Pawlak [1, 2] to deal with uncertainty and vagueness, and it has been applied to information processing in various areas [3-8].
One of the most important topics in rough set theory is the design of reduction algorithms. Reduction in Pawlak's rough sets removes dispensable relations from a family of equivalence relations, each of which induces a family of equivalence classes, that is, a partition.
Covering generalized rough sets [9-19] and binary relation generalized rough sets [20-26] are two main extensions of Pawlak's rough sets. The reduction theory of covering rough sets [10, 11, 15, 23, 27, 28] plays an important role in practice. A partition is no longer a partition if any of its elements is deleted, whereas a covering may remain a covering, with the set approximations invariant, after some elements are dropped. Therefore, there are two types of reduction for covering rough sets. One removes redundant coverings from a family of coverings and is referred to as attribute reduction. The other removes redundant elements from a single covering and is called granular reduction: it seeks the minimal subsets of a covering that generate the same set approximations as the original covering. Since granular reduction can compress granular structures and databases and interacts with attribute reduction, it should by no means be ignored. In this paper, we investigate the granular reduction of covering rough sets.
To compute all attribute reducts of Pawlak's rough sets, the discernibility matrix was first presented in [29]. Tsang et al. [15] developed a discernibility-matrix algorithm to compute attribute reducts for one type of covering rough sets. Zhu and Wang [17] and Zhu [18] first built one type of granular reduction for two covering rough set models. In addition, Yang et al. systematically examined granular reduction in [30] and the relationship between reducts and topology in [31]. Unfortunately, no effective algorithm for granular reduction has hitherto been proposed.
In this paper, we bridge this gap by constructing a discernibility-matrix algorithm that computes all granular reducts of covering rough sets. The algorithm reduces granular structures and removes redundant information from information systems. We then simplify the discernibility matrix to its minimal form and, based on this simplification, propose a heuristic algorithm as well.
The remainder of this paper proceeds as follows. Section 2 reviews the relevant background knowledge about the granular reduction. Section 3 constructs the algorithm based on discernibility matrix. Section 4 simplifies the discernibility matrix and proposes a heuristic algorithm. Section 5 concludes the study.
2. Background
Our aim in this section is to give a glimpse of rough set theory.
Let U be a finite and nonempty set, and let R be an equivalence relation on U. R generates a partition U/R = {[x]R | x ∈ U} of U, where [x]R is the equivalence class of x under R. These classes are called the elementary sets of R in rough set theory. For any set X ⊆ U, we describe X by the elementary sets of R through the two sets

R_*(X) = {x ∈ U | [x]R ⊆ X},  R^*(X) = {x ∈ U | [x]R ∩ X ≠ ∅},

called the lower and upper approximations of X, respectively. If R_*(X) = R^*(X), X is an R-exact set. Otherwise, it is an R-rough set.
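As a quick illustration, the two approximations can be sketched in a few lines of Python (the partition and the set X below are illustrative assumptions, not data from the text):

```python
def lower_approx(partition, X):
    # R_*(X): union of the equivalence classes wholly contained in X
    return {x for cls in partition if cls <= X for x in cls}

def upper_approx(partition, X):
    # R^*(X): union of the equivalence classes that intersect X
    return {x for cls in partition if cls & X for x in cls}

# Illustrative data: U = {1,...,5} with U/R as below.
partition = [{1, 2}, {3}, {4, 5}]
X = {1, 2, 3, 4}
print(lower_approx(partition, X))   # {1, 2, 3}
print(upper_approx(partition, X))   # {1, 2, 3, 4, 5}
```

Here X is R-rough, since its lower and upper approximations differ.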
Let ℛ be a family of equivalence relations and A ∈ ℛ, and denote IND(ℛ) = ∩{R : R ∈ ℛ}. A is dispensable in ℛ if and only if IND(ℛ) = IND(ℛ − {A}); otherwise, A is indispensable in ℛ. The family ℛ is independent if every A ∈ ℛ is indispensable in ℛ; otherwise, ℛ is dependent. 𝒬 ⊆ ℛ is a reduct of ℛ if 𝒬 is independent and IND(𝒬) = IND(ℛ). The set of all indispensable relations in ℛ is called the core of ℛ, denoted CORE(ℛ). Evidently, CORE(ℛ) = ∩RED(ℛ), where RED(ℛ) is the family of all reducts of ℛ. The discernibility matrix method was proposed in [29] to compute all reducts of information systems and all relative reducts of decision systems.
𝒞 is called a covering of U if U is a nonempty domain of discourse and 𝒞 is a family of nonempty subsets of U with ∪𝒞 = U.
It is clear that a partition of U is certainly a covering of U , so the concept of a covering is an extension of the concept of a partition.
Definition 2.1 (minimal description [9]).
Let 𝒞 be a covering of U. Then

Md(x) = {C ∈ 𝒞 | x ∈ C ∧ (∀S ∈ 𝒞)(x ∈ S ∧ S ⊆ C ⇒ C = S)}

is called the minimal description of x. When there is no confusion, we omit 𝒞 from the subscript.
Definition 2.2 (neighborhood [9, 19]).
Let 𝒞 be a covering of U. Then N_𝒞(x) = ∩{C ∈ 𝒞 | x ∈ C} is called the neighborhood of x. Generally, we omit the subscript 𝒞 when there is no confusion.
Minimal descriptions and neighborhoods are information granules describing x, and they are used as approximation elements in rough sets (as shown in Definition 2.3). Note that N(x) = ∩{C ∈ 𝒞 | x ∈ C} = ∩Md(x). The neighborhood of x can thus be seen as its minimum description, and it is the most precise description (more details are referred to [9]).
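A minimal Python sketch of these two granules, run here on the covering that appears later in Example 3.10 (objects are numbered 1-6; the encoding of covering elements as frozensets is an implementation choice):

```python
from functools import reduce

def neighborhood(cover, x):
    # N(x) = intersection of all covering elements containing x
    return reduce(lambda a, b: a & b, [C for C in cover if x in C])

def minimal_description(cover, x):
    # Md(x) = covering elements containing x that are minimal under inclusion
    containing = [C for C in cover if x in C]
    return {C for C in containing if not any(S < C for S in containing)}

cover = [frozenset(s) for s in
         ({1, 2, 3}, {1, 4}, {3, 4, 5}, {1, 4, 5}, {1, 4, 6}, {3, 4}, {1, 4, 5, 6})]
print(neighborhood(cover, 5))           # frozenset({4, 5})
print(minimal_description(cover, 2))    # {frozenset({1, 2, 3})}
```

The identity N(x) = ∩Md(x) stated above can be checked directly on this data.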
Definition 2.3 (covering lower and upper approximation operations [19]).
Let 𝒞 be a covering of U. The operations CL_𝒞 : P(U) → P(U) and CL′_𝒞 : P(U) → P(U) are defined as follows: for all X ∈ P(U),

CL_𝒞(X) = ∪{C ∈ 𝒞 | C ⊆ X},  CL′_𝒞(X) = {x ∈ U | N(x) ⊆ X}.

We call CL_𝒞 the first, the second, the third, and the fourth covering lower approximation operation and CL′_𝒞 the fifth, the sixth, and the seventh covering lower approximation operation, with respect to the covering 𝒞.
The operations FH, SH, TH, RH, IH, XH, and VH : P(U) → P(U) are defined for all X ∈ P(U) as in [19] [formulas omitted; refer to PDF]. FH_𝒞, SH_𝒞, TH_𝒞, RH_𝒞, IH_𝒞, XH_𝒞, and VH_𝒞 are called the first, the second, the third, the fourth, the fifth, the sixth, and the seventh covering upper approximation operations with respect to 𝒞, respectively. We leave out the subscript 𝒞 when there is no confusion.
As shown in [32], each approximation operation in Definition 2.3 applies in certain circumstances, and the suitable operation is chosen according to the specific situation. It is therefore important to design granular reduction algorithms for all of these models.
More precise approximation spaces are proposed in [30], together with a reasonable granular reduction of coverings. Let 𝕄_𝒞 = ∪{Md(x) | x ∈ U} and 𝒩_𝒞 = {N(x) | x ∈ U}. Then ⟨U, 𝕄_𝒞⟩ is the approximation space of the first and the third types of covering rough sets, ⟨U, 𝒞⟩ is the approximation space of the second and the fourth types, and ⟨U, 𝒩_𝒞⟩ is the approximation space of the fifth, the sixth, and the seventh types (referred to [30] for the details). In this paper, we design the granular reduction algorithm for the fifth, the sixth, and the seventh types of covering rough sets.
Let 𝒞 be a covering of U. Then ⟨U, 𝒞⟩ denotes a covering approximation space, 𝕄_𝒞 denotes an 𝕄-approximation space, and 𝒩_𝒞 denotes an 𝒩-approximation space. We omit the subscript 𝒞 when there is no confusion (referred to [30] for the details).
3. Discernibility Matrices Based on Covering Granular Reduction
In Pawlak's original rough sets, the family of equivalence classes induced by an equivalence relation is a partition, and a partition is no longer a partition once any of its elements is deleted. Granular reduction, which reduces granular structures and removes redundant information from databases, is therefore not applicable to Pawlak's original rough sets. However, a covering, one of the most important extensions of Pawlak's rough sets, still works after the omission of some of its elements, as long as the set approximations are invariant. The purpose of covering granular reduction is to find the minimal subsets keeping the same set approximations. It is meaningful and necessary to develop an algorithm for covering granular reduction.
The quadruple (U, 𝒞, CL, CH) is called a covering rough set system (CRSS), where 𝒞 is a covering of U, CL and CH are the lower and upper approximation operations with respect to the covering 𝒞, and ⟨U, 𝒜_𝒞⟩ is the approximation space. According to the categories of covering approximation operations in [30], there are two situations as follows.
(1) If 𝒜_𝒞 = 𝒞 or 𝒜_𝒞 = 𝕄_𝒞, then 𝒜_𝒞 ⊆ 𝒞; thus 𝒜_𝒞 is the unique granular reduct of 𝒞. There is no need to develop an algorithm to compute granular reducts for the first, the second, the third, and the fourth types of covering rough sets.
(2) If 𝒜_𝒞 = 𝒩_𝒞, then 𝒜_𝒞 is generally not a subset of 𝒞. Consequently, an algorithm is needed to compute all granular reducts of 𝒞 for the fifth, the sixth, and the seventh types of covering rough set models.
Next we examine the granular reduction algorithm for the fifth, the sixth, and the seventh types of covering rough sets. Let 𝒞 be a covering of U. Since 𝒩_𝒞 = {N(x) | x ∈ U}, 𝒩_𝒞 is the collection of all approximation elements of the fifth, the sixth, and the seventh types of lower/upper approximation operations; it is called the 𝒩-approximation space of 𝒞. Given a pair of approximation operations, the set approximations of any X ⊆ U are determined by the 𝒩-approximation space. Thus, for these models, the purpose of granular reduction is to find the minimal subsets 𝒞′ of 𝒞 such that 𝒩_𝒞 = 𝒩_𝒞′. The granular reducts based on the 𝒩-approximation spaces are called 𝒩-reducts. Nred(𝒞) denotes the set of all 𝒩-reducts of 𝒞, and NI(𝒞) denotes the set of all 𝒩-irreducible elements of 𝒞 (referred to [30] for the details).
In Pawlak's rough set theory, for every pair x, y ∈ U, if y belongs to the equivalence class containing x, we say that x and y are indiscernible; otherwise, they are discernible. Let ℛ = {R1, R2, ..., Rn} be a family of equivalence relations on U and Ri ∈ ℛ. Ri is indispensable in ℛ if and only if there is a pair x, y ∈ U whose relation is altered after deleting Ri from ℛ. The attribute reduction of Pawlak's rough sets finds the minimal subsets of ℛ that keep the relations invariant for all x, y ∈ U. Based on this observation, the discernibility matrix method for computing all reducts of Pawlak's rough sets was proposed in [29]. In covering rough sets, however, the discernibility relation between x, y ∈ U differs from that in Pawlak's rough sets.
Let 𝒞 be a covering of U and (x, y) ∈ U × U. We call (x, y) indiscernible if y ∈ N(x), that is, N(y) ⊆ N(x); otherwise, (x, y) is discernible. When 𝒞 is a partition, this discernibility relation coincides with Pawlak's; it is thus an extension of Pawlak's discernibility relation. In Pawlak's rough sets, (x, y) is indiscernible if and only if (y, x) is indiscernible. For a general covering, however, if N(y) ⊆ N(x) and N(y) ≠ N(x), that is, y ∈ N(x) and x ∉ N(y), then (y, x) is discernible while (x, y) is indiscernible. Hereafter, we call these relations the relations of (x, y) with respect to 𝒞. The following proposition characterizes them.
Proposition 3.1.
Let 𝒞 = {Ci | i = 1, 2, 3, ..., n} be a covering of U, and let 𝒞_x = {Ci ∈ 𝒞 | x ∈ Ci}.
(1) y ∈ N(x) if and only if 𝒞_x ⊆ 𝒞_y.
(2) y ∉ N(x) if and only if there is Ci ∈ 𝒞 such that x ∈ Ci and y ∉ Ci.
Proof.
(1) y ∈ N(x) = ∩𝒞_x ⟺ for any Ci ∈ 𝒞_x, y ∈ Ci ⟺ for any Ci ∈ 𝒞_x, Ci ∈ 𝒞_y ⟺ 𝒞_x ⊆ 𝒞_y.
(2) It is evident from (1).
Theorem 3.2.
Let 𝒞 be a covering of U and Ci ∈ 𝒞. Then 𝒩_𝒞 ≠ 𝒩_{𝒞−{Ci}} if and only if there is (x, y) ∈ U × U whose discernibility relation with respect to 𝒞 is changed after deleting Ci from 𝒞.
Proof.
Suppose that 𝒩_𝒞 ≠ 𝒩_{𝒞−{Ci}}. Then there is at least one element x ∈ U such that N_𝒞(x) ≠ N_{𝒞−{Ci}}(x), that is, N_𝒞(x) ⊂ N_{𝒞−{Ci}}(x). Since N_{𝒞−{Ci}}(x) − N_𝒞(x) ≠ ∅, take y ∈ N_{𝒞−{Ci}}(x) − N_𝒞(x); then y ∈ N_{𝒞−{Ci}}(x) and y ∉ N_𝒞(x). Namely, (x, y) is discernible with respect to 𝒞, while (x, y) is indiscernible with respect to 𝒞 − {Ci}.
Conversely, suppose that there is (x, y) ∈ U × U whose discernibility relation with respect to 𝒞 is changed after deleting Ci from 𝒞; put differently, (x, y) is discernible with respect to 𝒞, while (x, y) is indiscernible with respect to 𝒞 − {Ci}. Then y ∈ N_{𝒞−{Ci}}(x) and y ∉ N_𝒞(x), so y ∈ N_{𝒞−{Ci}}(x) − N_𝒞(x). Thus N_𝒞(x) ≠ N_{𝒞−{Ci}}(x), which implies 𝒩_𝒞 ≠ 𝒩_{𝒞−{Ci}}.
The purpose of granular reduction of a covering 𝒞 is to find the minimal subsets of 𝒞 that keep the same classification ability as 𝒞 or, put differently, keep 𝒩_𝒞 invariant. By Theorem 3.2, keeping 𝒩_𝒞 unchanged is equivalent to keeping the discernibility relation of every (x, y) ∈ U × U invariant. Based on this, we can compute granular reducts with a discernibility matrix.
Definition 3.3.
Let U = {x1, x2, ..., xn} and let 𝒞 be a covering of U. M(U, 𝒞) is an n × n matrix (cij)n×n, called the discernibility matrix of (U, 𝒞), where
(1) cij = ∅ if xj ∈ N(xi),
(2) cij = {C ∈ 𝒞 | xi ∈ C, xj ∉ C} if xj ∉ N(xi).
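Definition 3.3 translates directly into code. The sketch below stores only the nonempty entries (the dictionary layout and the element names are implementation choices); the data is the covering of Example 3.10 later in the paper:

```python
def discernibility_matrix(U, cover):
    # cover: dict mapping an element name to the set of objects it contains
    nbhd = {x: set.intersection(*[C for C in cover.values() if x in C]) for x in U}
    M = {}
    for xi in U:
        for xj in U:
            if xj not in nbhd[xi]:               # (x_i, x_j) is discernible
                M[(xi, xj)] = frozenset(n for n, C in cover.items()
                                        if xi in C and xj not in C)
    return M                                     # omitted entries are empty

U = range(1, 7)
cover = {"C1": {1, 2, 3}, "C2": {1, 4}, "C3": {3, 4, 5}, "C4": {1, 4, 5},
         "C5": {1, 4, 6}, "C6": {3, 4}, "C7": {1, 4, 5, 6}}
M = discernibility_matrix(U, cover)
print(sorted(M[(5, 3)]))   # ['C4', 'C7']
```

Note the asymmetry discussed above: c54 is empty because x4 ∈ N(x5), yet c45 is not.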
This definition of the discernibility matrix is more concise than the one in [11, 15] thanks to the above statement of the discernibility relations. Likewise, we restate the characterizations of 𝒩-reduction.
Proposition 3.4.
NI(𝒞) = {C | cij = {C} for some cij ∈ M(U, 𝒞)}.
Proof.
For any C ∈ NI(𝒞), 𝒩_𝒞 ≠ 𝒩_{𝒞−{C}}, so there is (xi, xj) ∈ U × U such that xj ∈ N_{𝒞−{C}}(xi) and xj ∉ N_𝒞(xi). It implies that xi ∈ C and xj ∉ C. Moreover, for any C′ ∈ 𝒞 − {C}, since xj ∈ N_{𝒞−{C}}(xi), we have xj ∈ C′ if xi ∈ C′. Thus cij = {C}.
Conversely, if cij = {C} for some cij ∈ M(U, 𝒞), then xi ∈ C and xj ∉ C, and for any C′ ∈ 𝒞 − {C}, if xi ∈ C′ then xj ∈ C′. That is, xj ∈ N_{𝒞−{C}}(xi) and xj ∉ N_𝒞(xi), so N_{𝒞−{C}}(xi) ≠ N_𝒞(xi). Namely, 𝒩_𝒞 ≠ 𝒩_{𝒞−{C}}, which implies C ∈ NI(𝒞).
Proposition 3.5.
Suppose that 𝒞′ ⊆ 𝒞. Then 𝒩_𝒞 = 𝒩_𝒞′ if and only if 𝒞′ ∩ cij ≠ ∅ for every cij ≠ ∅.
Proof.
𝒩_𝒞 = 𝒩_𝒞′
⟺ for any (xi, xj) ∈ U × U, xj ∉ N_𝒞(xi) if and only if xj ∉ N_𝒞′(xi),
⟺ for any (xi, xj) ∈ U × U, there is C ∈ 𝒞 such that xi ∈ C and xj ∉ C if and only if there is C′ ∈ 𝒞′ such that xi ∈ C′ and xj ∉ C′,
⟺ for any cij ≠ ∅, 𝒞′ ∩ cij ≠ ∅.
Proposition 3.6.
Suppose that 𝒞′ ⊆ 𝒞. Then 𝒞′ ∈ Nred(𝒞) if and only if 𝒞′ is a minimal set satisfying 𝒞′ ∩ cij ≠ ∅ for every cij ≠ ∅.
Definition 3.7.
Let U = {x1, x2, ..., xn}, let 𝒞 = {C1, C2, ..., Cm} be a covering of U, and let M(U, 𝒞) = (cij)n×n be the discernibility matrix of (U, 𝒞). A discernibility function f(U, 𝒞) is a Boolean function of m Boolean variables C̄1, C̄2, ..., C̄m, corresponding to the covering elements C1, C2, ..., Cm, respectively, defined as f(U, 𝒞)(C̄1, C̄2, ..., C̄m) = ⋀{⋁(cij) | cij ∈ M(U, 𝒞), cij ≠ ∅}.
Theorem 3.8.
Let 𝒞 be a covering of U, let f(U, 𝒞) be the discernibility function, and let g(U, 𝒞) be the reduced disjunctive form of f(U, 𝒞) obtained by applying the multiplication and absorption laws. If g(U, 𝒞) = (⋀𝒞1) ⋁ ... ⋁ (⋀𝒞l), where 𝒞k ⊆ 𝒞, k = 1, 2, ..., l, and every element in 𝒞k appears only once, then Nred(𝒞) = {𝒞1, 𝒞2, ..., 𝒞l}.
Proof.
For every k = 1, 2, ..., l, ⋀𝒞k ≤ ⋁cij for any nonempty cij ∈ M(U, 𝒞), so 𝒞k ∩ cij ≠ ∅. Let 𝒞k′ = 𝒞k − {C} for some C ∈ 𝒞k; since g(U, 𝒞) is reduced, g(U, 𝒞) ≠ (⋁_{t=1}^{k−1}(⋀𝒞t)) ⋁ (⋀𝒞k′) ⋁ (⋁_{t=k+1}^{l}(⋀𝒞t)). If 𝒞k′ ∩ cij ≠ ∅ for every cij ∈ M(U, 𝒞), then ⋀𝒞k′ ≤ ⋁cij for every cij ∈ M(U, 𝒞), that is, g(U, 𝒞) ≥ (⋁_{t=1}^{k−1}(⋀𝒞t)) ⋁ (⋀𝒞k′) ⋁ (⋁_{t=k+1}^{l}(⋀𝒞t)), which is a contradiction. It follows that there is c_{i0j0} ∈ M(U, 𝒞) such that 𝒞k′ ∩ c_{i0j0} = ∅. Thus 𝒞k is a reduct of 𝒞.
For any 𝒞′ ∈ Nred(𝒞), we have 𝒞′ ∩ cij ≠ ∅ for every cij ∈ M(U, 𝒞), so f(U, 𝒞) ⋀ (⋀𝒞′) = (⋀(⋁cij)) ⋀ (⋀𝒞′) = ⋀𝒞′, which implies ⋀𝒞′ ≤ f(U, 𝒞) = g(U, 𝒞). Suppose that 𝒞k − 𝒞′ ≠ ∅ for every k = 1, 2, ..., l; then for every k there is Ck ∈ 𝒞k − 𝒞′. By rewriting g(U, 𝒞) = (⋁_{k=1}^{l} Ck) ⋀ Φ, we get ⋀𝒞′ ≤ ⋁_{k=1}^{l} Ck. Thus there is C_{k0} such that ⋀𝒞′ ≤ C_{k0}, that is, C_{k0} ∈ 𝒞′, which is a contradiction. So 𝒞_{k0} ⊆ 𝒞′ for some k0; since both 𝒞′ and 𝒞_{k0} are reducts, it is evident that 𝒞′ = 𝒞_{k0}. Consequently, Nred(𝒞) = {𝒞1, 𝒞2, ..., 𝒞l}.
Algorithm 3.9.
Input: ⟨U, 𝒞⟩.
Output: Nred(𝒞) and NI(𝒞) // the set of all granular reducts and the set of all 𝒩-irreducible elements.
Step 1: M(U, 𝒞) = (cij)n×n; for each cij, let cij = ∅.
Step 2: for each xi ∈ U, compute N(xi) = ∩{C ∈ 𝒞 | xi ∈ C}; if xj ∉ N(xi), set cij = {C ∈ 𝒞 | xi ∈ C, xj ∉ C}.
Step 3: f(U, 𝒞)(C̄1, C̄2, ..., C̄m) = ⋀{⋁(cij) | cij ∈ M(U, 𝒞), cij ≠ ∅}.
Step 4: reduce f(U, 𝒞) to g(U, 𝒞) = (⋀𝒞1) ⋁ ... ⋁ (⋀𝒞l) // where 𝒞k ⊆ 𝒞, k = 1, 2, ..., l, and every element in 𝒞k appears only once.
Step 5: output Nred(𝒞) = {𝒞1, 𝒞2, ..., 𝒞l} and NI(𝒞) = ∩Nred(𝒞).
Step 6: end.
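Algorithm 3.9 admits a brute-force Python sketch: instead of symbolically reducing f(U, 𝒞), the code below enumerates subsets of the covering and keeps the minimal ones that hit every nonempty cij, which yields exactly the terms of g(U, 𝒞) (the dictionary encoding and element names are implementation choices). It is exponential in |𝒞|, in line with the complexity remark in Section 4, so it suits only small inputs:

```python
from itertools import combinations

def all_reducts(U, cover):
    # Steps 1-2: the nonempty cells of the discernibility matrix
    nbhd = {x: set.intersection(*[C for C in cover.values() if x in C]) for x in U}
    cells = {frozenset(n for n, C in cover.items() if xi in C and xj not in C)
             for xi in U for xj in U if xj not in nbhd[xi]}
    # Steps 3-5: minimal subsets meeting every cell = Nred(C)
    hitting = [set(s) for r in range(1, len(cover) + 1)
               for s in combinations(cover, r)
               if all(cell & set(s) for cell in cells)]
    return [s for s in hitting if not any(t < s for t in hitting)]

U = range(1, 7)
cover = {"C1": {1, 2, 3}, "C2": {1, 4}, "C3": {3, 4, 5}, "C4": {1, 4, 5},
         "C5": {1, 4, 6}, "C6": {3, 4}, "C7": {1, 4, 5, 6}}
print(sorted(sorted(r) for r in all_reducts(U, cover)))
# [['C1', 'C3', 'C4', 'C5'], ['C1', 'C3', 'C5', 'C7']]
```

Run on the covering of Example 3.10, this reproduces the two reducts and, by intersection, the core {C1, C3, C5}.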
The following example is used to illustrate our idea.
Example 3.10.
Suppose that U = {x1, x2, ..., x6}, where xi, i = 1, 2, ..., 6, denote six objects, and let Ci, i = 1, 2, ..., 7, denote seven properties. The information is presented in Table 1: the ith object possesses the jth attribute if and only if a * appears in the ij-position of the table.
Table 1
Objects | C1 | C2 | C3 | C4 | C5 | C6 | C7
x1      | *  | *  |    | *  | *  |    | *
x2      | *  |    |    |    |    |    |
x3      | *  |    | *  |    |    | *  |
x4      |    | *  | *  | *  | *  | *  | *
x5      |    |    | *  | *  |    |    | *
x6      |    |    |    |    | *  |    | *
{x1 ,x2 ,x3 } is the set of all objects possessing the attribute C1 , and it is denoted by C1 ={x1 ,x2 ,x3 } . Similarly, C2 ={x1 ,x4 } , C3 ={x3 ,x4 ,x5 } , C4 ={x1 ,x4 ,x5 } , C5 ={x1 ,x4 ,x6 } , C6 ={x3 ,x4 } , and C7 ={x1 ,x4 ,x5 ,x6 } . Evidently, ...9E;={C1 ,C2 ,C3 ,C4 ,C5 ,C6 ,C7 } is a covering on U .
Then, N(x1) = {x1}, N(x2) = {x1, x2, x3}, N(x3) = {x3}, N(x4) = {x4}, N(x5) = {x4, x5}, and N(x6) = C5 ∩ C7 = {x1, x4, x6}.
The nonempty entries of the discernibility matrix of (U, 𝒞) are as follows (all other entries are ∅): c12 = c13 = {C2, C4, C5, C7}, c14 = {C1}, c15 = {C1, C2, C5}, c16 = {C1, C2, C4}; c24 = c25 = c26 = {C1}; c31 = c32 = {C3, C6}, c34 = {C1}, c35 = {C1, C6}, c36 = {C1, C3, C6}; c41 = {C3, C6}, c42 = {C2, C3, C4, C5, C6, C7}, c43 = {C2, C4, C5, C7}, c45 = {C2, C5, C6}, c46 = {C2, C3, C4, C6}; c51 = {C3}, c52 = {C3, C4, C7}, c53 = {C4, C7}, c56 = {C3, C4}; c62 = c63 = {C5, C7}, c65 = {C5}.
So Nred(𝒞) = {{C1, C3, C4, C5}, {C1, C3, C5, C7}} and NI(𝒞) = {C1, C3, C5}. As a result, Table 1 can be simplified into Table 2 or Table 3 with the classification ability invariant. Obviously, the granular reduction algorithm can reduce data sets as shown.
Table 2
Objects | C1 | C3 | C4 | C5
x1      | *  |    | *  | *
x2      | *  |    |    |
x3      | *  | *  |    |
x4      |    | *  | *  | *
x5      |    | *  | *  |
x6      |    |    |    | *
Table 3
Objects | C1 | C3 | C5 | C7
x1      | *  |    | *  | *
x2      | *  |    |    |
x3      | *  | *  |    |
x4      |    | *  | *  | *
x5      |    | *  |    | *
x6      |    |    | *  | *
4. The Simplification of Discernibility Matrices
For the purpose of finding the set of all granular reducts, we have proposed the discernibility matrix method. Unfortunately, computing all reducts is at least NP-hard, since the discernibility matrix in this paper is more complex than the one in [33]. Accordingly, we simplify the discernibility matrices in this section. In addition, a heuristic algorithm is presented to avoid the NP-hard computation.
Definition 4.1.
Let M(U, 𝒞) = (cij)n×n be the discernibility matrix of (U, 𝒞). For any cij ∈ M(U, 𝒞), if there is a nonempty element c_{i0j0} ∈ M(U, 𝒞) − {cij} such that c_{i0j0} ⊆ cij, let cij′ = ∅; otherwise, let cij′ = cij. We thus obtain a new discernibility matrix SIM(U, 𝒞) = (cij′)n×n, which is called the simplified discernibility matrix of (U, 𝒞).
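Definition 4.1 amounts to absorption: a cell survives only if no other nonempty cell is contained in it (in this sketch duplicate cells collapse to one representative, matching the intended reading of the definition). The cell data below is the set of nonempty entries of Example 3.10's discernibility matrix, which can be derived from Table 1:

```python
def simplified_cells(cells):
    # keep the distinct cells with no proper subset among the cells
    distinct = set(cells)
    return {c for c in distinct if not any(other < c for other in distinct)}

cells = [frozenset(s) for s in
         ({"C2", "C4", "C5", "C7"}, {"C1"}, {"C1", "C2", "C5"}, {"C1", "C2", "C4"},
          {"C3", "C6"}, {"C1", "C6"}, {"C1", "C3", "C6"},
          {"C2", "C3", "C4", "C5", "C6", "C7"}, {"C2", "C4", "C5", "C7"},
          {"C2", "C5", "C6"}, {"C2", "C3", "C4", "C6"},
          {"C3"}, {"C3", "C4", "C7"}, {"C4", "C7"}, {"C3", "C4"},
          {"C5", "C7"}, {"C5"})]
print(len(simplified_cells(cells)))   # 4
```

Only the four minimal cells {C1}, {C3}, {C5}, and {C4, C7} survive, so every reduction question about the full matrix reduces to these.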
Theorem 4.2.
Let M(U, 𝒞) be the discernibility matrix of (U, 𝒞), let SIM(U, 𝒞) be the simplified discernibility matrix, and let 𝒞′ ⊆ 𝒞. Then 𝒞′ ∩ cij ≠ ∅ for every nonempty element cij ∈ M(U, 𝒞) if and only if 𝒞′ ∩ cij′ ≠ ∅ for every nonempty element cij′ ∈ SIM(U, 𝒞).
Proof.
If 𝒞′ ∩ cij ≠ ∅ for every nonempty cij ∈ M(U, 𝒞), it is evident that 𝒞′ ∩ cij′ ≠ ∅ for every nonempty cij′ ∈ SIM(U, 𝒞).
Conversely, suppose that 𝒞′ ∩ cij′ ≠ ∅ for every nonempty cij′ ∈ SIM(U, 𝒞), and take any nonempty cij ∈ M(U, 𝒞). If there is a nonempty element c_{i0j0} ∈ M(U, 𝒞) − {cij} such that c_{i0j0} ⊆ cij and no nonempty element c_{i1j1} ∈ M(U, 𝒞) − {cij, c_{i0j0}} satisfies c_{i1j1} ⊆ c_{i0j0}, then c′_{i0j0} = c_{i0j0} ≠ ∅. Since 𝒞′ ∩ c′_{i0j0} ≠ ∅, we get 𝒞′ ∩ c_{i0j0} ≠ ∅ and thus 𝒞′ ∩ cij ≠ ∅. If instead c_{i0j0} ⊈ cij for every nonempty element c_{i0j0} ∈ M(U, 𝒞) − {cij}, then cij′ = cij; since 𝒞′ ∩ cij′ ≠ ∅, we get 𝒞′ ∩ cij ≠ ∅. Thus 𝒞′ ∩ cij ≠ ∅ for every nonempty cij ∈ M(U, 𝒞).
Proposition 4.3.
Suppose that 𝒞′ ⊆ 𝒞. Then 𝒞′ ∈ Nred(𝒞) if and only if 𝒞′ is a minimal set satisfying 𝒞′ ∩ cij′ ≠ ∅ for every nonempty cij′ ∈ SIM(U, 𝒞).
Proposition 4.4.
∪{cij′ | cij′ ∈ SIM(U, 𝒞)} = ∪Nred(𝒞).
Proof.
Suppose that C ∈ ∪{cij′ | cij′ ∈ SIM(U, 𝒞)}; then there is cij′ ∈ SIM(U, 𝒞) such that C ∈ cij′. For any cij′ ∈ SIM(U, 𝒞), if C ∈ cij′, let cij¹ = {C}; otherwise, let cij¹ = {Cij} for some Cij ∈ cij′. With M1(U, 𝒞) = (cij¹)n×n, it is easy to prove that C ∈ ∪{cij¹ | cij¹ ∈ M1(U, 𝒞)} ∈ Nred(𝒞). Thus C ∈ ∪Nred(𝒞).
Conversely, suppose that C ∈ ∪Nred(𝒞); then there is 𝒞k ∈ Nred(𝒞) such that C ∈ 𝒞k. From Proposition 4.3, 𝒞k is a minimal set satisfying 𝒞k ∩ cij′ ≠ ∅ for every nonempty cij′ ∈ SIM(U, 𝒞). So there is cij′ ∈ SIM(U, 𝒞) such that C ∈ cij′, for otherwise C would be redundant in 𝒞k. Thus C ∈ ∪{cij′ | cij′ ∈ SIM(U, 𝒞)}.
In summary, ∪{cij′ | cij′ ∈ SIM(U, 𝒞)} = ∪Nred(𝒞).
Proposition 4.5.
Let SIM(U, 𝒞) = (cij′)n×n be the simplified discernibility matrix of (U, 𝒞). Then SIM(U, 𝒞) is the minimal matrix that computes all granular reducts of 𝒞; that is, for any matrix M0(U, 𝒞) = (dij)n×n with dij ⊆ cij′, M0(U, 𝒞) can compute all granular reducts of 𝒞 if and only if dij = cij′ for all 1 ≤ i, j ≤ n.
Proof.
If dij = cij′ for 1 ≤ i, j ≤ n, then M0(U, 𝒞) = SIM(U, 𝒞), and M0(U, 𝒞) can compute all granular reducts of 𝒞.
Conversely, suppose that there is a nonempty c′_{i0j0} ∈ SIM(U, 𝒞) such that d_{i0j0} ⊂ c′_{i0j0}. If |c′_{i0j0}| = 1, say c′_{i0j0} = {C0}, then d_{i0j0} = ∅. From the definition of the simplified discernibility matrix, C0 ∉ cij′ for any cij′ ∈ SIM(U, 𝒞) − {c′_{i0j0}}, so C0 ∉ dij for any dij ∈ M0(U, 𝒞), and M0(U, 𝒞) cannot compute any granular reduct of 𝒞. If |c′_{i0j0}| ≥ 2, suppose that d_{i0j0} ≠ ∅. Then there is C ∈ c′_{i0j0} − d_{i0j0}; let c¹_{i0j0} = {C}. For any cij′ ∈ SIM(U, 𝒞) − {c′_{i0j0}}, if C ∈ cij′, let cij¹ = ∅; otherwise, let cij¹ = {Cij} for some Cij ∈ cij′ − c′_{i0j0}. Let M1(U, 𝒞) = (cij¹)n×n and 𝒞′ = ∪{cij¹ | cij¹ ∈ M1(U, 𝒞)}; it is easy to prove that 𝒞′ ∈ Nred(𝒞). However, 𝒞′ ∩ d_{i0j0} = ∅, so M0(U, 𝒞) cannot compute all granular reducts of 𝒞. Thus, if M0(U, 𝒞) can compute all granular reducts of 𝒞, then dij = cij′ for 1 ≤ i, j ≤ n.
From the above propositions, we know that the simplified discernibility matrix is the minimal discernibility matrix that computes the same reducts as the original one. Hereafter, we examine only simplified discernibility matrices instead of general discernibility matrices. The following example illustrates the idea.
Example 4.6.
The simplified discernibility matrix of (U, 𝒞) in Example 3.10 has the nonempty entries c14 = c24 = c25 = c26 = c34 = {C1}, c51 = {C3}, c65 = {C5}, and c53 = {C4, C7}; all other entries are ∅. So Nred(𝒞) = {{C1, C3, C4, C5}, {C1, C3, C5, C7}} and NI(𝒞) = {C1, C3, C5}.
From the above example, it is easy to see that the simplified discernibility matrix reduces the computation remarkably. In particular, when 𝒞 is a consistent covering as proposed in [30], that is, Nred(𝒞) = {NI(𝒞)}, the unique reduct is Nred(𝒞) = {∪{cij′ | cij′ ∈ SIM(U, 𝒞)}}.
Unfortunately, although the simplified discernibility matrices are simpler, computing reducts by the discernibility function is still NP-hard. Accordingly, we develop a heuristic algorithm that obtains a single reduct from a discernibility matrix directly.
Let M(U, 𝒞) = (cij)n×n be a discernibility matrix. We denote the number of elements in cij by |cij| and, for any C ∈ 𝒞, the number of entries cij that contain C by ||C||. For cij ∈ M(U, 𝒞), if C ∉ cij for every C ∈ NI(𝒞), then cij′ = cij. Since ∪{cij′ | |cij′| ≥ 2} = ∪Nred(𝒞) − NI(𝒞), when |cij′| ≥ 2, the elements of cij′ may either be deleted from 𝒞 or be preserved. Suppose that C0 ∈ ∪{cij′ | |cij′| ≥ 2}; if ||C0|| ≥ ||C|| for every C ∈ ∪{cij′ | |cij′| ≥ 2}, C0 is called a maximal element with respect to the simplified discernibility matrix SIM(U, 𝒞). The heuristic algorithm that obtains a reduct directly from a discernibility matrix proceeds as follows.
Algorithm 4.7.
Input: ⟨U, 𝒞⟩.
Output: a granular reduct red.
Step 1: M(U, 𝒞) = (cij)n×n; for each cij, let cij = ∅.
Step 2: for each xi ∈ U, compute N(xi) = ∩{C ∈ 𝒞 | xi ∈ C}; if xj ∉ N(xi), set cij = {C ∈ 𝒞 | xi ∈ C, xj ∉ C} // get the discernibility matrix.
Step 3: for each cij ∈ M(U, 𝒞), if there is a nonempty element c_{i0j0} ∈ M(U, 𝒞) − {cij} such that c_{i0j0} ⊆ cij, let cij = ∅ // get the simplified discernibility matrix.
Step 4: for each Ci ∈ ∪M(U, 𝒞), compute ||Ci|| and select a maximal element C0 of SIM(U, 𝒞); for each cij ∈ M(U, 𝒞), if C0 ∈ cij, let cij = {C0}.
Step 5: if there is cij ∈ M(U, 𝒞) such that |cij| ≥ 2, return to Step 3; else output red = ∪M(U, 𝒞).
Step 6: end.
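A compact Python sketch of Algorithm 4.7 follows. For brevity, ||C|| is counted here over the multi-element cells only, a slight simplification of Step 4, and ties between maximal elements are broken arbitrarily, so either of the two reducts of Example 3.10 may be returned (names and data layout are implementation choices):

```python
from collections import Counter

def heuristic_reduct(U, cover):
    # Steps 1-2: the nonempty discernibility cells
    nbhd = {x: set.intersection(*[C for C in cover.values() if x in C]) for x in U}
    cells = {frozenset(n for n, C in cover.items() if xi in C and xj not in C)
             for xi in U for xj in U if xj not in nbhd[xi]}
    while True:
        # Step 3: absorption (keep cells with no proper subset among the cells)
        cells = {c for c in cells if not any(o < c for o in cells)}
        multi = [c for c in cells if len(c) >= 2]
        if not multi:                      # Step 5: all cells are singletons
            return set().union(*cells)     # red = union of the matrix
        # Step 4: pick a maximal element C0 and collapse the cells containing it
        counts = Counter(e for c in multi for e in c)
        c0 = max(counts, key=counts.get)
        cells = {frozenset({c0}) if c0 in c else c for c in cells}

U = range(1, 7)
cover = {"C1": {1, 2, 3}, "C2": {1, 4}, "C3": {3, 4, 5}, "C4": {1, 4, 5},
         "C5": {1, 4, 6}, "C6": {3, 4}, "C7": {1, 4, 5, 6}}
print(sorted(heuristic_reduct(U, cover)))
# ['C1', 'C3', 'C4', 'C5'] or ['C1', 'C3', 'C5', 'C7']
```

Unlike the exhaustive method, this loop never expands the discernibility function, so it runs in polynomial time in the size of the matrix.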
Example 4.8.
The simplified discernibility matrix of (U, 𝒞) in Example 3.10 has the nonempty entries c14 = c24 = c25 = c26 = c34 = {C1}, c51 = {C3}, c65 = {C5}, and c53 = {C4, C7}.
For the maximal element C4 of SIM(U, 𝒞), let c53¹ = {C4}; we then obtain M1(U, 𝒞), which differs from SIM(U, 𝒞) only in this entry. Thus {C1, C3, C4, C5} = ∪{cij¹ | cij¹ ∈ M1(U, 𝒞)} is a granular reduct of 𝒞.
For the maximal element C7 of SIM(U, 𝒞), let c53² = {C7}; we then obtain M2(U, 𝒞), which differs from SIM(U, 𝒞) only in this entry. Thus {C1, C3, C5, C7} = ∪{cij² | cij² ∈ M2(U, 𝒞)} is also a granular reduct of 𝒞.
The above example shows that the heuristic algorithm avoids the NP-hard computation and generates a granular reduct directly from the simplified discernibility matrix. With the heuristic algorithm, the discernibility-matrix-based granular reduction theory is no longer limited to the theoretical level but is applicable in practice.
5. Conclusion
In this paper, we first develop a discernibility-matrix algorithm to compute all granular reducts of covering rough sets. A simplification of the discernibility matrix is proposed for the first time. Moreover, a heuristic algorithm that computes a single granular reduct is presented to avoid the NP-hard computation in granular reduction, so that a granular reduct can be generated rapidly.
Acknowledgments
This work is supported by the National Natural Science Foundation of China under Grants no. 11201490 and no. 11061004 and by the Science and Technology Plan Projects of Hunan Province under Grant no. 2011FJ3152.
[1] Z. Pawlak, "Rough sets," International Journal of Computer and Information Sciences , vol. 11, no. 5, pp. 341-356, 1982.
[2] Z. Pawlak Rough Sets: Theoretical Aspects of Reasoning about Data , Kluwer Acedemic, Boston, Mass, USA, 1991.
[3] G. Dong, J. Han, J. Lam, J. Pei, K. Wang, W. Zou, "Mining constrained gradients in large databases," IEEE Transactions on Knowledge and Data Engineering , vol. 16, no. 8, pp. 922-938, 2004.
[4] S. Pal, P. Mitra, "Case generation using rough sets with fuzzy representation," IEEE Transactions on Knowledge and Data Engineering , vol. 16, no. 3, pp. 292-300, 2004.
[5] L. Polkowski, A. Skowron Rough Sets and Current Trends in Computing , vol. 1424, Springer, Berlin, Germany, 1998.
[6] L. Polkowski, A. Skowron Rough Sets in Knowledge Discovery , vol. 1, Physica-Verlag, Berlin, Germany, 1998.
[7] L. Polkowski, A. Skowron Rough Sets in Knowledge Discovery , vol. 2, Physica-Verlag, Berlin, Germany, 1998.
[8] N. Zhong, Y. Yao, M. Ohshima, "Peculiarity oriented multidatabase mining," IEEE Transactions on Knowledge and Data Engineering , vol. 15, no. 4, pp. 952-960, 2003.
[9] Z. Bonikowski, E. Bryniarski, U. Wybraniec-Skardowska, "Extensions and intentions in the rough set theory," Information Sciences , vol. 107, no. 1-4, pp. 149-167, 1998.
[10] E. Bryniarski, "A calculus of rough sets of the first order," Bulletin of the Polish Academy of Sciences , vol. 37, no. 1-6, pp. 71-78, 1989.
[11] C. Degang, W. Changzhong, H. Qinghua, "A new approach to attribute reduction of consistent and inconsistent covering decision systems with covering rough sets," Information Sciences , vol. 177, no. 17, pp. 3500-3518, 2007.
[12] C. Degang, Z. Wenxiu, D. Yeung, E. C. C. Tsang, "Rough approximations on a complete completely distributive lattice with applications to generalized rough sets," Information Sciences , vol. 176, no. 13, pp. 1829-1848, 2006.
[13] T.-J. Li, Y. Leung, W.-X. Zhang, "Generalized fuzzy rough approximation operators based on fuzzy coverings," International Journal of Approximate Reasoning , vol. 48, no. 3, pp. 836-856, 2008.
[14] T.-J. Li, W.-X. Zhang, "Rough fuzzy approximations on two universes of discourse," Information Sciences , vol. 178, no. 3, pp. 892-906, 2008.
[15] E. C. C. Tsang, C. Degang, D. S. Yeung, "Approximations and reducts with covering generalized rough sets," Computers & Mathematics with Applications , vol. 56, no. 1, pp. 279-289, 2008.
[16] E. Tsang, D. Chen, J. Lee, D. S. Yeung, "On the upper approximations of covering generalized rough sets," in Proceedings of the 3rd International Conference on Machine Learning and Cybernetics, pp. 4200-4203, 2004.
[17] W. Zhu, F.-Y. Wang, "Reduction and axiomization of covering generalized rough sets," Information Sciences , vol. 152, pp. 217-230, 2003.
[18] W. Zhu, "Topological approaches to covering rough sets," Information Sciences , vol. 177, no. 6, pp. 1499-1508, 2007.
[19] W. Zhu, "Relationship between generalized rough sets based on binary relation and covering," Information Sciences , vol. 179, no. 3, pp. 210-225, 2009.
[20] X. Y. Chen, Q. G. Li, "Construction of rough approximations in fuzzy setting," Fuzzy Sets and Systems , vol. 158, no. 23, pp. 2641-2653, 2007.
[21] T. Y. Lin, R. Slowinski, "Topological and fuzzy rough sets," Intelligent Decision Support: Handbook of Applications and Advances of the Rough Set Theory , pp. 287-304, Kluwer Academic, Boston, Mass, USA, 1992.
[22] A. Skowron, J. Stepaniuk, "Tolerance approximation spaces," Fundamenta Informaticae , vol. 27, no. 2-3, pp. 245-253, 1996.
[23] C. Wang, C. Wu, D. Chen, "A systematic study on attribute reduction with rough sets based on general binary relations," Information Sciences , vol. 178, no. 9, pp. 2237-2261, 2008.
[24] Y. Y. Yao, "On generalizing Pawlak approximation operators," LNAI , vol. 1424, pp. 298-307, 1998.
[25] Y. Y. Yao, "Constructive and algebraic methods of the theory of rough sets," Information Sciences , vol. 109, no. 1-4, pp. 21-47, 1998.
[26] Y. Y. Yao, "Relational interpretations of neighborhood operators and rough set approximation operators," Information Sciences , vol. 111, no. 1-4, pp. 239-259, 1998.
[27] F. Hu, G. Y. Wang, H. Huang, "Incremental attribute reduction based on elementary sets," RSFDGrC , vol. 36, no. 41, pp. 185-193, 2005.
[28] R. Jensen, Q. Shen, "Semantics-preserving dimensionality reduction: rough and fuzzy-rough-based approaches," IEEE Transactions on Knowledge and Data Engineering , vol. 16, no. 12, pp. 1457-1471, 2004.
[29] A. Skowron, C. Rauszer, R. Slowinski, "The discernibility matrices and functions in information systems," Intelligent Decision Support: Handbook of Applications and Advances of the Rough Sets Theory , Kluwer Academic, Boston, Mass, USA, 1992.
[30] T. Yang, Q. G. Li, "Reduction about approximation spaces of covering generalized rough sets," International Journal of Approximate Reasoning , vol. 51, no. 3, pp. 335-345, 2010.
[31] T. Yang, Q. G. Li, B. L. Zhou, "Granular reducts from the topological view of covering rough sets," in Proceedings of the 8th IEEE International Conference on Granular Computing, Zhejiang University, Hangzhou, China, August 2012.
[32] T. Yang, Q. G. Li, B. L. Zhou, "Related family: a new method for attribute reduction of covering information systems," Information Sciences , in press.
[33] S. K. M. Wong, W. Ziarko, "On optimal decision rules in decision tables," Bulletin of the Polish Academy of Sciences , vol. 33, no. 11-12, pp. 693-696, 1985.
Copyright © 2012 Tian Yang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Abstract
Granular reduction deletes dispensable elements from a covering. It is an efficient method to reduce granular structures and remove redundant information from information systems. In this paper, we develop an algorithm based on discernibility matrices to compute all the granular reducts of covering rough sets. Moreover, the discernibility matrix is simplified to its minimal form. In addition, a heuristic algorithm is proposed so that a granular reduct can be generated rapidly.