Purpose
The purpose of this paper is to discuss a data interpolation method for curved surfaces from the perspective of dimension reduction and manifold learning.
Design/methodology/approach
Instead of transferring data across curved surfaces directly in 3D space, the method transfers data by unfolding the 3D curved surfaces into 2D planes using manifold learning algorithms. The similarity between surface unfolding and manifold learning is discussed. The ability of several manifold learning algorithms to project, and thereby unfold, curved surfaces is investigated. The efficiency of the algorithms and their influence on the accuracy of data transfer are examined through three examples.
Findings
It is found that data interpolation using the manifold learning algorithms LLE, HLLE and LTSA is efficient and accurate.
Originality/value
The method can improve the accuracy of coupled data interpolation and of fluid-structure interaction simulations involving curved surfaces.
1. Introduction
Fluid-structure interaction (FSI) is a common phenomenon in engineering practice. It refers to the interaction of a structure with an internal or surrounding fluid flow, for example, the blades of a wind turbine with air, the wing of an aircraft with air, the blades of a turbomachine with air, and arterial vessels with blood. To fully understand the phenomenon, FSI simulation is performed by coupling a computational fluid dynamics (CFD) mesh model and a computational structural mechanics (CSM) mesh model (Figure 1). "Coupling" means transmitting data between the CFD mesh and the CSM mesh at the common surface of the fluid and the structure, where the common surface is commonly referred to as the "interface." "Coupling" also means making sure that the data are the same on the fluid side and the structure side of the interface.
The CFD mesh (for the fluid) and the CSM mesh (for the structure) are developed separately by different engineers or scientists with their own professional knowledge and experience. The fluid mesh is commonly fine at the interface to obtain a reasonable simulation result, whereas the structure mesh is fine only at some points of stress concentration. There is no direct connection between the CFD mesh and the CSM mesh, and the two meshes are not consistent at the interface (De Boer and Van Zuijlen, 2008; Li et al., 2007, 2015; Harder and Desmarais, 1972; Smith et al., 2000; Appa, 1989; Duchon, 2006; Hardy, 1971; Zong-Min, 2002; Jiao and Heath, 2004; Jaiman et al., 2006; Boer et al., 2007; Goura et al., 2001; Farrell et al., 2009). What is known is that the nodes of the CFD mesh and the nodes of the CSM mesh lie on the interface. Under these conditions, to couple the meshes together, data (pressure, temperature, deflection, etc.) must be interpolated across the interface between the fluid nodes and the structure nodes (Li et al., 2015). Many interpolation methods have been proposed in the literature (De Boer and Van Zuijlen, 2008; Li et al., 2007, 2015; Harder and Desmarais, 1972; Smith et al., 2000; Appa, 1989; Duchon, 2006; Hardy, 1971; Zong-Min, 2002; Jiao and Heath, 2004; Jaiman et al., 2006; Boer et al., 2007; Goura et al., 2001; Farrell et al., 2009). The nearest-neighbor (NN) interpolation method directly uses the data of the point nearest to a target point as the interpolation result (De Boer and Van Zuijlen, 2008; Li et al., 2015). The polynomial least squares (L2) method fits a polynomial function to the data of the several points nearest to a target point by least squares. The infinite-plate spline (IPS) method (Harder and Desmarais, 1972; Smith et al., 2000) uses a superposition of solutions of the partial differential equation of an infinite plate as the fitting function (Smith et al., 2000). Appa (Smith et al., 2000; Appa, 1989) argues that the values interpolated at boundary nodes by IPS are not reliable and proposes the finite-surface spline (FPS) method. Thin-plate splines (TPS) (Smith et al., 2000; Duchon, 2006) and the multiquadric-biharmonic (MQ) method (Smith et al., 2000; Hardy, 1971) are also spline interpolation methods. TPS, FPS, IPS and MQ have been shown to be radial basis function (RBF) methods. Martin D. Buhmann and Wu studied RBF intensively (Zong-Min, 2002); it can be used to handle large sets of scattered data in many fields. The projection methods, another large group of interpolation methods, map data points from the fluid mesh to the structure mesh, or vice versa, and then perform the FSI interpolation. In the PROJECTION method of Jiao and Heath (2004) and Jaiman et al. (2006), a structure node is orthogonally projected onto the nearest fluid mesh element and the structure node's data value is interpolated within that element (Boer et al., 2007). CVT (another projection method) orthogonally projects a structure node onto a fluid mesh element and keeps the relationship between the structure node and the nodes of the fluid mesh unchanged (Goura et al., 2001). Conservative interpolation methods (a large group of projection methods) preserve certain properties during interpolation (e.g. the total force, or the work performed by the total force) and are known to improve the long-time accuracy and stability of FSI simulation (Farrell et al., 2009). Fitting methods and projection methods provide good interpolation results for fine meshes. However, for curved interface surfaces with coarse meshes, the accuracy is not always satisfactory (Li et al., 2007, 2015). The reason is explained below.
Figure 1 shows a 2D diagram of an FSI interpolation problem (Li et al., 2015). The blue mesh is the structure mesh of a blade; the red mesh is the surrounding fluid mesh. To transmit the fluid pressure of the red mesh to any node K of the blue mesh, data need to be interpolated to node K from its neighboring fluid nodes at the interface (Li et al., 2015), which are circled in blue. There are four fluid nodes of the interface in the circle: two on the left side of the blade and two on the right side. From fluid dynamics, it is known that the pressure at the nodes on the left side may differ greatly from the pressure at the nodes on the right side. Node K is on the left side, so it is obvious that the fluid nodes on the left side should be selected for the pressure interpolation of node K. However, widely used methods either ignore this topological relationship (in L2 and RBF, the nodes on the right side are equally selected and used in the interpolation) or use the data of only a very few nearest nodes (NN, PROJECTION). Li et al. (2015) proved that ignoring this topological relationship may produce poor, or sometimes completely wrong, interpolation results.
Figure 1 also shows that the nodes of the left and right sides can be separated far apart by unfolding the curved interface surface. Interpolating on the unfolded interface avoids the mistake of ignoring the topological relationship of the curved surface (Li et al., 2015). After unfolding, all data of the curved surfaces can be used in the interpolation simultaneously without mixing their topological relationships. Unfolding also provides another advantage. Traditional methods must interpolate data in 3D (2D) space, where data that are uniformly distributed over the interface appear sparse. With surface unfolding, the interpolation is performed on a 2D (1D) projection of the interface, in which the data are uniformly distributed. This may lead to more accurate interpolation even with relatively coarse meshes. So, instead of transferring data in 3D (2D) space, we suggest interpolating data by surface unfolding.
In our former work, a data interpolation method was proposed following the idea above (Li et al., 2007, 2015). A predefined method (Li et al., 2007) and the manifold learning algorithm ISOMAP (Li et al., 2015) were discussed for surface unfolding and were used in FSI data interpolation. These works prove that the idea of interpolation in low-dimensional space is feasible. However, the predefined method must know the geometry of the curved surface, and ISOMAP spends too much time on surface unfolding. This raises a question: how can the surface be unfolded properly and efficiently? From dimension reduction and manifold learning theory (Zhong and Qin, 2016), it is known that any curved surface is a 2D intrinsic structure embedded in 3D space. Unfolding a curved surface means finding a low-dimensional projection of the surface, which is exactly the problem of dimension reduction and manifold learning. Manifold learning algorithms are therefore candidate methods for unfolding curved surfaces.
This paper is mainly devoted to discussing how to unfold a surface with manifold learning algorithms in the FSI interpolation problem. The abilities of several manifold learning algorithms are investigated by projecting 3D curved surfaces into 2D space. The interpolation method based on surface unfolding is revisited, the influence of the manifold learning algorithms on interpolation accuracy is investigated, and the interpolation method is compared with some traditional methods: the NN, L2, RBF and PROJECTION methods. Our interpolation method is discussed using examples of FSI problems. However, the method can equally be applied in geological exploration, computer vision/image processing, mineral exploration and resource prediction.
2. Data interpolation method by surface unfolding using manifold learning algorithms
In this paper, the interpolation method transfers data across curved surfaces by unfolding the interface surfaces (Figure 2) using manifold learning algorithms. The basic idea (Figure 2) is to unfold the curved interface surface with a manifold learning algorithm, projecting the nodes of the interface surface into a low-dimensional space, and then to interpolate data between the projected nodes in the low-dimensional space. The method is discussed in detail below. For ease of explanation, the method is described as transferring data from the fluid nodes to the structure nodes. However, it is designed for data interpolation between multiple disciplines.
The interpolation method can be divided into five steps (Figure 3):
* Step 1: pool the data of the fluid nodes and the structure nodes of the curved interface surface together.
This paper mainly discusses the FSI problem. In this kind of problem, the fluid mesh (CFD) model and the structure mesh (CSM) model are developed separately with specialist simulation tools. The nodes of the fluid mesh are connected to each other by grid lines, and the nodes of the structure mesh are likewise connected to each other by grid lines, but there is no connection between the fluid nodes and the structure nodes, except that these nodes lie on the common interface surface. Under these conditions, the connection relationships between the fluid and the structure nodes must be built from neighborhood relationships, so the fluid nodes and the structure nodes are pooled together to find those neighborhood relationships. Besides, to keep the topological relationship between the fluid and the structure nodes during projection, these nodes should be projected into the low-dimensional space all at once. So, the fluid nodes and the structure nodes are pooled together before projection:
* Step 2: project all fluid and structure nodes of the curved surface into low-dimensional space by manifold learning algorithm and get the projected nodes in low-dimensional space.
All fluid and structure nodes of the curved surface are projected into a low-dimensional space together for data interpolation. Each manifold learning algorithm builds a new, previously unknown space, and the projection results vary with the algorithm and with the point cloud of nodes. So, all nodes should be projected into the same low-dimensional space. This enforces the requirement that the fluid nodes and the structure nodes be projected together.
The intrinsic structure of a 3D surface is 2D, and the intrinsic structure of a 2D curved line is 1D. So, in this paper it suffices to project the coordinates (x, y, z) of a 3D surface to 2D coordinates (u, v), or the 2D coordinates (x, y) of a line to the 1D coordinate (u). Other data (pressure, temperature, deflection, node number, node type (fluid or structure), etc.) of the curved surfaces or lines simply follow the nodes and should not be used in the projection process. Otherwise, the low-dimensional space and the low-dimensional projection of the surface may not be understandable by engineers. That is, the dimension reduction is limited to the coordinates in this paper. For this reason, we frequently use the term "surface unfolding" instead of "projecting the surface to a plane by manifold learning algorithms."
In principle, any manifold learning algorithm can be used for surface unfolding here. However, it is found that not all of the algorithms can fulfill this task properly. This paper mainly addresses the question of which algorithm can unfold the surface properly and efficiently:
* Step 3: separate the projected points into the fluid nodes and the structure nodes.
The method interpolates data from the fluid nodes to the structure nodes, so it must distinguish which projected nodes are fluid nodes and which are structure nodes. This is done according to the node types after projection:
* Step 4: interpolate the data of the fluid nodes to the structure nodes in low-dimensional space by any interpolation methods.
In the low-dimensional space, the FSI interpolation problem becomes very simple. For a 3D surface, it is converted to an interpolation problem in a plane; for a 2D line, it is converted to an interpolation problem on a straight line. This kind of interpolation problem is well addressed in textbooks, and many interpolation or fitting methods can be used:
* Step 5: map the interpolated data of the projected structure nodes to the nodes of original structure mesh model.
The interpolation is performed in the low-dimensional space. However, the CFD and CSM models live in the high-dimensional space, where the simulation is performed. So, the interpolated data of the structure nodes should be mapped back to the original mesh model. This is performed according to the node numbers.
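The five steps above can be sketched end-to-end. The following is a minimal illustration in Python, using scikit-learn's ISOMAP implementation for the unfolding and SciPy's `griddata` for the plane interpolation; the quarter-cylinder interface, node counts, synthetic pressure field and parameter values are assumptions for illustration, not the models used in this paper.

```python
import numpy as np
from scipy.interpolate import griddata
from sklearn.manifold import Isomap

rng = np.random.default_rng(0)

# Hypothetical curved interface: a quarter-cylinder patch (radius 1, height 2).
def sample_patch(n):
    theta = rng.uniform(0.0, 0.5 * np.pi, n)
    z = rng.uniform(0.0, 2.0, n)
    return np.column_stack([np.cos(theta), np.sin(theta), z])

fluid = sample_patch(400)        # fine fluid-side nodes
structure = sample_patch(60)     # coarser structure-side nodes
pressure = np.sin(2.0 * np.arctan2(fluid[:, 1], fluid[:, 0])) * fluid[:, 2]

# Step 1: pool fluid and structure nodes; node type is tracked by index.
nodes = np.vstack([fluid, structure])
n_fluid = len(fluid)

# Step 2: unfold the 3D interface into 2D with a manifold learning algorithm.
emb = Isomap(n_neighbors=10, n_components=2).fit_transform(nodes)

# Step 3: separate the projected points back into fluid and structure nodes.
emb_fluid, emb_struct = emb[:n_fluid], emb[n_fluid:]

# Step 4: interpolate in the unfolded plane (nearest-neighbour fallback for
# structure nodes that fall outside the convex hull of the fluid nodes).
interp = griddata(emb_fluid, pressure, emb_struct, method="linear")
hole = np.isnan(interp)
if hole.any():
    interp[hole] = griddata(emb_fluid, pressure, emb_struct[hole],
                            method="nearest")

# Step 5: interp[i] belongs to structure node i of the original mesh model.
print(interp.shape)
```

Any of the feasible algorithms discussed in Section 3 could replace ISOMAP in step 2, and any plane interpolation scheme could replace `griddata` in step 4.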
3. Unfolding ability of manifold learning algorithms
3.1 Dimension reduction and manifold learning algorithms
"High-dimensional" data means data with more than three dimensions or variables, which are difficult to understand or visualize. If the dimension of the manifold is lower than three, the data can be visualized and understood by the human brain directly. Although the data might be presented in a very high-dimensional space, the dimension of their intrinsic structure is typically much lower. To simplify the problem, it is reasonable to find the low-dimensional embedded structure in high-dimensional data. The process of reducing the number of variables of the data is dimensionality reduction, or dimension reduction (Agrafiotis et al., 2010; Coifman and Lafon, 2006; Chen et al., 2010, 2014; Vanrell et al., 1997; Horvath et al., 2016; Wan et al., 2016; Shao et al., 2014; Belkin and Niyogi, 2001; Li et al., 2013; Sammon, 1969; Dybowski et al., 1996; Tenenbaum et al., 2000; Wang et al., 2013, 2015; Silva and Tenenbaum, 2003; Sun et al., 2014; Roweis and Saul, 2000; Donoho and Grimes, 2006; Wang, 2008; Dong et al., 2015).
The problem of finding low-dimensional embeddings of high-dimensional data is of increasing importance in face recognition, web data analysis, signal/image processing, machine learning, computer vision and pattern recognition, where the dimensions of the data run into the thousands, millions or billions. Conventional analysis tools become severely inadequate for such high-dimensional data. Manifold learning algorithms have been suggested to discover the low-dimensional intrinsic manifold of high-dimensional data (Agrafiotis et al., 2010; Coifman and Lafon, 2006; Chen et al., 2010, 2014; Vanrell et al., 1997; Horvath et al., 2016; Wan et al., 2016; Shao et al., 2014; Belkin and Niyogi, 2001; Li et al., 2013; Sammon, 1969; Dybowski et al., 1996; Tenenbaum et al., 2000; Wang et al., 2013, 2015; Silva and Tenenbaum, 2003; Sun et al., 2014; Roweis and Saul, 2000; Donoho and Grimes, 2006; Wang, 2008; Dong et al., 2015). This low-dimensional manifold is then used for visualization and feature recognition. So, dimension reduction is a preliminary feature extraction step to avoid the curse of dimensionality, after which features of the data are recognized by clustering using a k-nearest neighbors algorithm (Dong et al., 2015; Lawrence, 2012; Lee and Verleysen, 2007; Ma et al., 2011).
In the past few years, numerous methods have been developed for learning low-dimensional structure from high-dimensional data. These methods emerge from many different areas. Although they deal with the same kind of problem, they apply different mathematical tools, ranging from statistical and geometrical to algebraic and graphical. These studies have significantly improved the understanding of many new statistical and geometric phenomena in high-dimensional data. As a result, a variety of effective and efficient algorithms have been developed that can learn the intrinsic low-dimensional structure of high-dimensional data, even with a very limited number of samples, and even when the samples are incomplete or corrupted (Dong et al., 2015; Lawrence, 2012; Lee and Verleysen, 2007; Ma et al., 2011).
Some of the algorithms for manifold learning and dimensionality reduction are described below. For a complete history, readers may consult Dong et al. (2015), Lawrence (2012) and Lee and Verleysen (2007):
(1) Stochastic proximity embedding (SPE).
SPE starts with a random initial configuration of data points in the low-dimensional space and adjusts the coordinates of the points so that the distance between any two points in the low-dimensional space matches their distance in the high-dimensional space (Agrafiotis et al., 2010). SPE has no parameters.
(2) Diffusion maps (DM).
DM obtains low-dimensional representations of the data set by analogy with "diffusion processes," in which "diffusion distances" are derived from the transition probabilities between points. In the original DM, the "diffusion distance" is based on the Euclidean distance between points in the high-dimensional space. DM has one parameter, α, which tunes the influence of the local point density on the infinitesimal transition of the diffusion (Coifman and Lafon, 2006; Chen et al., 2014; Nadler et al., 2005).
(3) Multidimensional scaling (MDS).
MDS constructs a distance matrix of the point set in the high-dimensional space and obtains a transform matrix that maps the points to the low-dimensional space by eigendecomposition of the distance matrix (Vanrell et al., 1997; Horvath et al., 2016; Chen and Buja, 2009; Pawliczek and Dzwinel, 2010; Rosman et al., 2006). MDS has no parameters.
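As a sketch of how MDS is applied in practice, the following uses scikit-learn's metric MDS to embed a small 3D point set into the plane; the half-cylinder sample and point count are illustrative assumptions.

```python
import numpy as np
from sklearn.manifold import MDS

# A hypothetical half-cylinder sample (radius 1, height 2) standing in for a
# curved interface surface.
rng = np.random.default_rng(2)
theta = rng.uniform(0.0, np.pi, 120)
z = rng.uniform(0.0, 2.0, 120)
X = np.column_stack([np.cos(theta), np.sin(theta), z])

# Metric MDS: find 2D coordinates whose pairwise Euclidean distances
# approximate those of the 3D points, by minimizing a stress loss.
emb = MDS(n_components=2, random_state=0).fit_transform(X)
print(emb.shape)
```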
(4) Kernel principal component analysis (KPCA).
KPCA is a combination of principal component analysis (PCA) and the kernel method and is one of the most widely used algorithms for manifold learning. KPCA maps points to a higher-dimensional space to find a linear hyperplane of the points, and then projects the points to a low-dimensional space by PCA. It requires a known kernel, and it does not yield good results with standard kernels (Wan et al., 2016; Shao et al., 2014).
(5) Laplacian eigenmaps (LE).
Traditional techniques such as PCA do not consider the intrinsic geometry of the data. LE builds a graph from the neighborhood information of the data: each data point serves as a node on the graph, and the connections between nodes are defined by the neighborhoods of the points. LE constructs a weighting matrix for the connections between points: if two points are close, the weight is larger than zero; if they are not close, the weight is set to zero. The points are then mapped to the low-dimensional space by solving a generalized eigenvector problem, which ensures that points close to each other in the high-dimensional space remain close to each other in the low-dimensional space (Belkin and Niyogi, 2001; Chen et al., 2010; Li et al., 2013; Lewandowski et al., 2014). The parameter of LE is the number of nearest neighbors, k.
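LE as described above is available in scikit-learn as `SpectralEmbedding`. A minimal sketch follows; the half-cylinder sample and the choice k = 8 are illustrative assumptions.

```python
import numpy as np
from sklearn.manifold import SpectralEmbedding

# A hypothetical half-cylinder sample standing in for a curved interface.
rng = np.random.default_rng(6)
theta = rng.uniform(0.0, np.pi, 400)
z = rng.uniform(0.0, 2.0, 400)
X = np.column_stack([np.cos(theta), np.sin(theta), z])

# Laplacian eigenmaps: build a k-nearest-neighbor graph, then solve the
# generalized eigenvector problem of the graph Laplacian for 2D coordinates.
emb = SpectralEmbedding(n_components=2, n_neighbors=8).fit_transform(X)
print(emb.shape)
```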
(6) Sammon mapping with geodesic distance (SM).
Sammon's mapping is one of the first and most popular techniques. SM, suggested by Sammon, chooses a random initial mapping configuration of the points in the low-dimensional space and computes the errors between the inter-point distances in the low-dimensional space and in the high-dimensional space. It finds the projection of the points in the low-dimensional space by iterating to decrease these errors (Sammon, 1969; Dybowski et al., 1996; Lennon et al., 2001; Lee et al., 2002). SM has two parameters: the tolerance of the objective function and the maximum number of iterations.
(7) Isometric mapping (ISOMAP).
ISOMAP is a nonlinear generalization of classical MDS. ISOMAP assumes that pairwise distances are known only between neighboring points and uses the Floyd-Warshall algorithm to compute the pairwise geodesic distances between all other points. Then, it performs MDS to find a low-dimensional structure of the points that preserves the geodesic, not the Euclidean, distance between all pairs of points (Tenenbaum et al., 2000; Wang et al., 2013; Silva and Tenenbaum, 2003; Sun et al., 2014). The geodesic distance reveals the inter-point relationships. The parameter of ISOMAP is the number of nearest neighbors, k.
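A minimal sketch of ISOMAP's geodesic-preserving projection, using scikit-learn; the rolled-strip sample and the choice k = 9 are illustrative assumptions.

```python
import numpy as np
from sklearn.manifold import Isomap

# A hypothetical rolled strip ("swiss roll"): points that are close in 3D
# Euclidean distance can be far apart along the surface, which is exactly
# the situation where geodesic distances matter.
rng = np.random.default_rng(1)
t = rng.uniform(0.5, 3.0 * np.pi, 600)               # angle along the roll
X = np.column_stack([t * np.cos(t), t * np.sin(t),
                     rng.uniform(0.0, 5.0, 600)])

# ISOMAP: k-nearest-neighbor graph -> shortest-path (geodesic) distances
# -> classical MDS on the geodesic distance matrix.
emb = Isomap(n_neighbors=9, n_components=2).fit_transform(X)
print(emb.shape)
```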
(8) Landmark isometric mapping (LISOMAP).
LISOMAP is a variant of ISOMAP that uses landmarks to increase speed at the cost of some accuracy. LISOMAP randomly selects some points from the whole data set as landmarks and builds the geodesic distances between all points of the data set through the landmarks. Then, MDS is performed to find a low-dimensional structure of the points that preserves the geodesic distance between all pairs of points (Silva and Tenenbaum, 2003; Sun et al., 2014). LISOMAP has two parameters: the number of nearest neighbors, k, and the percentage of the data points, λ, used as landmarks.
(9) Locally linear embedding (LLE).
LLE was presented at approximately the same time as ISOMAP (Roweis and Saul, 2000; Wang et al., 2015). LLE also begins by finding a set of the nearest neighbors of each point. It calculates the weights with which each point is reconstructed from its neighbors in the high-dimensional space, and then finds points in the low-dimensional space that have the same reconstruction weights, using an eigenvector-based optimization technique. It has several advantages over ISOMAP, including faster optimization (taking advantage of sparse matrix algorithms) and better results on many problems. LLE is poor at handling non-uniform sample densities, because there is no fixed unit to prevent the weights from drifting as regions differ in sample density (Roweis and Saul, 2000; Wang et al., 2015; Schölkopf et al., 2006). The parameter of LLE is the number of nearest neighbors, k.
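A minimal sketch of LLE with scikit-learn; the half-cylinder sample, k = 6 and the dense eigensolver are illustrative assumptions.

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

# A hypothetical half-cylinder sample standing in for a curved interface.
rng = np.random.default_rng(4)
theta = rng.uniform(0.0, np.pi, 400)
z = rng.uniform(0.0, 2.0, 400)
X = np.column_stack([np.cos(theta), np.sin(theta), z])

# LLE: reconstruct each point from its k nearest neighbors, then find 2D
# coordinates that preserve those reconstruction weights.
lle = LocallyLinearEmbedding(n_neighbors=6, n_components=2,
                             eigen_solver="dense")
emb = lle.fit_transform(X)
print(emb.shape)
```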
(10) Hessian locally linear embedding (HLLE).
HLLE is also based on sparse matrix techniques. HLLE constructs a quadratic-form matrix of the Hessian in the local tangent space of the manifold, and then maps the points to the low-dimensional space by solving the generalized eigenvector problem (Donoho and Grimes, 2006). Unfortunately, it is computationally costly, so it is not well suited to heavily sampled manifolds. The parameter of HLLE is the number of nearest neighbors, k.
(11) Local tangent space alignment (LTSA).
LTSA finds a local low-dimensional space from the neighborhood tangent hyperplane of every point of the manifold and obtains a global coordinate system by aligning the local coordinate systems of the hyperplanes (Wang, 2008; Dong et al., 2015). The parameter of LTSA is the number of nearest neighbors, k.
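Both HLLE and LTSA are exposed in scikit-learn as variants of `LocallyLinearEmbedding` through its `method` parameter. A minimal sketch follows; the sample surface and the neighbor counts are illustrative assumptions (HLLE requires k > d(d+3)/2 for a d-dimensional embedding, so k must exceed 5 here).

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

# A hypothetical half-cylinder sample standing in for a curved interface.
rng = np.random.default_rng(5)
theta = rng.uniform(0.0, np.pi, 400)
z = rng.uniform(0.0, 2.0, 400)
X = np.column_stack([np.cos(theta), np.sin(theta), z])

# "hessian" builds the Hessian quadratic-form matrix in local tangent
# spaces; "ltsa" aligns per-point tangent coordinate systems globally.
for method, k in [("hessian", 12), ("ltsa", 6)]:
    emb = LocallyLinearEmbedding(n_neighbors=k, n_components=2,
                                 method=method,
                                 eigen_solver="dense").fit_transform(X)
    print(method, emb.shape)
```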
(12) Autoencoder.
Autoencoder is a feed-forward neural network that is trained to identify an unknown function. It chooses and tunes initial weights to construct an "encoder" network, which learns to encode data points from the high-dimensional input into a low-dimensional code, and a "decoder" network, which learns to decode the data points from the low-dimensional code back to the high-dimensional representation (Hinton and Salakhutdinov, 2006; Wang et al., 2012; Lowe and Tipping, 1999). Autoencoder has no parameters.
(13) t-Distributed stochastic neighbor embedding (t-SNE).
t-SNE has two steps. First, t-SNE constructs a probability distribution over pairs of high-dimensional points such that similar points have a high probability of being picked. Second, t-SNE defines a similar probability distribution over the points in the low-dimensional space and minimizes the Kullback-Leibler divergence between the two distributions to adjust the points in the low-dimensional space (Laurens and Hinton, 2012). t-SNE has two parameters: the perplexity of the Gaussian kernel and the number of iterations.
As manifold learning and nonlinear dimensionality reduction are of great research interest, the number of algorithms is increasing every year, for example, the self-organizing map (SOM) (Kohonen, 1990), generative topographic mapping (Bishop et al., 1997), manifold alignment (Wang et al., 2011), manifold sculpting (Gashler and Ventura, 2007), maximum variance unfolding (MVU) (Weinberger and Saul, 2006), landmark-MVU (Gashler et al., 2011) and RankVisu (Lespinats et al., 2009). For a more thorough review of manifold learning and dimensionality reduction, readers can see Lee and Verleysen (2007) and van der Maaten et al. (2009).
3.2 Comparison of the unfolding ability of the manifold learning algorithms
From a geometric point of view, manifold learning finds a lower-dimensional projection of a group of high-dimensional points that retains the proximity characteristics of the high-dimensional points (neighboring points in the high-dimensional space remain next to each other in the lower-dimensional space). Surface unfolding is exactly this kind of problem: it is to find the 2D projection of a 3D curved surface.
This section compares the ability of the manifold learning algorithms to unfold a curved surface. Our initial test used a full 3D cylindrical surface, mapped into a 2D planar space by the manifold learning algorithms discussed in Section 3.1. However, all of the algorithms failed. On reconsidering the case and the algorithms, we found that the closed circular shape of the surface muddles the inter-point distances: for any point pair on the surface, there are two geodesic distances in different directions. This is the reason for the failure of all the algorithms. It can be confirmed by the example of a cylinder surface cut open along one line, in which any point pair has only one geodesic distance.
Then, we cut out a 3/4 cylindrical surface as the test case, shown in Figure 4(a). Points are uniformly distributed over the surface and are marked by color so that their topological relationships in the low-dimensional and the high-dimensional space can be compared. As the points belong to a cylindrical surface, their projections should form a rectangle in 2D space. So, a good manifold learning algorithm has the following characteristics: the projection of the cylindrical surface is a rectangle; the projected points are uniformly distributed; and there is no overlap among the projected points. Only some of the algorithms meet these requirements, and those are selected for unfolding curved surfaces in data interpolation.
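A test surface of this kind is straightforward to generate. The following sketch (Python/NumPy; the grid sizes and dimensions are assumptions, since the paper does not state them) builds a uniform point grid on a 3/4 cylinder and keeps the angular coordinate as the color value used to check topology after projection:

```python
import numpy as np

# Hypothetical 3/4-cylinder test surface comparable to Figure 4(a): a uniform
# (theta, z) grid over three quarters of the circumference.
def three_quarter_cylinder(n_theta=40, n_z=25, radius=1.0, height=2.0):
    theta = np.linspace(0.0, 1.5 * np.pi, n_theta)   # 3/4 of the full circle
    z = np.linspace(0.0, height, n_z)
    T, Z = np.meshgrid(theta, z)
    points = np.column_stack([radius * np.cos(T).ravel(),
                              radius * np.sin(T).ravel(),
                              Z.ravel()])
    return points, T.ravel()                          # coordinates, color

X, color = three_quarter_cylinder()
print(X.shape)
```

After unfolding, plotting the 2D projection colored by `color` makes overlaps and curls at the incisions immediately visible.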
The projected points of the cylindrical surface are shown in Figure 4(b)-(n). The projections are produced by SPE, DM, MDS, KPCA, LE, SM, ISOMAP, LISOMAP, LLE, HLLE, LTSA, Autoencoder and t-SNE. The unfolding is performed with the MATLAB code developed by van der Maaten et al. (2009). The parameters of the algorithms are adjusted carefully to ensure the best projection of the cylindrical surface.
Figure 4(b) shows the points projected by SPE. It can be seen that the points are seriously mixed at the incision of the surface, which means that the projected surface overlaps itself. SPE adjusts the coordinates of the projected points so that the inter-point distances in the low-dimensional space are the same as in the high-dimensional space: points that are close in the high-dimensional space try to be close in the low-dimensional space. In SPE, the 3D Euclidean distance is used as the inter-point distance in the high-dimensional space. From Figure 4(a), it can be seen that the distance between the two incision edges is much smaller than the distances between most other points. The points at the incision edges therefore try to be close to each other, so there are serious overlaps.
Figure 4(c) shows the points projected by DM. It can be seen that the projected surface overlaps. DM also judges the similarity of the projected points and the original points by Euclidean distance, so points close in Euclidean space try to be close in the low-dimensional space. The points of the two incision edges try to be close to each other in DM, so there are serious overlaps. The parameter α is set to 1.
Figure 4(d) shows the points projected by MDS. It can be seen that the plane projected by MDS is nearly a rectangle, with a curl at the incisions. MDS also judges the similarity of the projected points and the 3D points by Euclidean distance, so points close in Euclidean space try to be close in the low-dimensional space. Unlike SPE and DM, MDS judges dissimilarities through a loss function, which weakens the requirement on the inter-point distances. So, MDS performs better than SPE and DM in the test case.
Figure 4(e) shows the points projected by KPCA. It can be seen that there is a serious curl. KPCA is a nonlinear form of PCA; however, in the test case it gives a worse result. KPCA judges similarity by the distances of the points in the kernel-induced hyperspace, and the choice of hyperspace and kernel strongly influences its performance. In this test, a Gaussian kernel is used.
Figure 4(f) shows the points projected by LE. It can be seen that the projected plane is a rectangle with very slight overlaps at the four edges. LE judges similarity by the inter-point distances of neighboring points, not by the inter-point distances of all points, so the projected plane is globally correct. The inter-point relationships are stored as inter-point distances, and the points on the boundary edges are unconstrained on one side, so there are errors at the edges. SPE, DM and KPCA should have this problem too; however, the error induced by their strong requirement on the inter-point distances between all points is much larger, so the error at their edges is not obvious. The parameter of LE, the number of nearest neighbors k, is set to 8.
Figure 4(g) shows the points projected by SM. It can be seen that the projected plane is a rectangle with very slight overlaps at the four edges. As described above, SM judges dissimilarities between inter-point distances by an error function, which also weakens the requirement on the inter-point distances, so the projected plane is globally correct. Similar to LE, the boundary edges have no constraints on one side, and there are errors at the edges. In this case, SM's relative tolerance of the objective function is 1e-9 and the maximum number of iterations is 5,000.
Figure 4(h) shows the points projected by ISOMAP. ISOMAP rebuilds the inter-point relationships in the high-dimensional space and then performs the projection according to these relationships; it does not judge by raw inter-point distances. So, the relationships between the projected points are always correct and there is no overlap, although the projected surface is not strictly rectangular. The parameter, the number of nearest neighbors k, is 9.
Figure 4(i) shows the points projected by LISOMAP, the landmark variant of ISOMAP. It speeds up the rebuilding of the inter-point relationships and gives a result similar to ISOMAP. The number of nearest neighbors k is 9, and the percentage of data points used as landmarks, λ, is 1 percent.
Figure 4(j) shows the projected points of the surface by LLE. From the figure, it can be seen that the projected points form a rectangle and there is no overlap. LLE judges dissimilarities by the local inter-point distances of the neighborhood, so the projected plane is globally correct. The parameter, the number of nearest neighbors k, is 6.
Figure 4(k) shows the projected points of the surface by HLLE. From the figure, it can be seen that the projected points are not mixed. HLLE projects according to the local tangent plane of the points, so the projected plane is globally correct and there is no overlap. The parameter, the number of nearest neighbors k, is 12.
Figure 4(l) shows the projected points of the surface by LTSA. From the figure, it can be seen that the projected points form a rectangle and there is no overlap. LTSA projects according to the neighborhood tangent coordinate system of every point, so the projected plane is globally correct. The parameter, the number of nearest neighbors k, is 5.
Figure 4(m) shows the projected points of the surface by Autoencoder. The projection does not keep the topological relationships of the surface: in the figure, the projected points are crowded too close together.
Figure 4(n) shows the projected points of the surface by t-SNE. Similar to SM, t-SNE adjusts the projected points by iteration. It can be seen that the projected surface splits into two groups and some points are isolated in the wrong places. This is induced by the initial condition and the iteration number, so the unfolding ability of t-SNE is unsatisfactory here. In this example, we set the perplexity of the Gaussian kernel to 30 and the iteration number to 20,000. Largely increasing the iteration number may improve the result.
From this test, it can be seen that ISOMAP, LISOMAP, LLE, HLLE and LTSA are feasible algorithms; the other methods are not acceptable in this case. Generally speaking, the manifold learning algorithms that use local relationship information are more suitable for unfolding the surface.
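The feasibility test can be reproduced in outline. The sketch below is a minimal stand-in, assuming scikit-learn (rather than the MATLAB toolbox used in the paper) and a synthetic 3/4 cylindrical surface; it unfolds the surface with LLE using k = 6 neighbors as in the text:

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

# Regular grid on a 3/4 cylindrical surface (theta spans 270 degrees).
theta = np.linspace(0.0, 1.5 * np.pi, 60)
z = np.linspace(0.0, 2.0, 25)
T, Z = np.meshgrid(theta, z)
X = np.column_stack([np.cos(T).ravel(), np.sin(T).ravel(), Z.ravel()])

# Unfold the 3D surface into a 2D plane with LLE, k = 6 as in the text.
lle = LocallyLinearEmbedding(n_neighbors=6, n_components=2, random_state=0)
Y = lle.fit_transform(X)
print(Y.shape)  # one planar point per surface point: (1500, 2)
```

A scatter plot of Y colored by the original (theta, z) grid position makes the overlap checks of Figure 4 easy to repeat for the other algorithms.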
4. Pressure interpolation of blade by manifold learning
This section continues to examine the unfolding ability of the manifold learning algorithms using the example of a blade, and then studies the application of the algorithms in transmitting data of the blade.
Figure 5 shows the CFD mesh model of the blade. The CFD mesh is built in ANSYS.GAMBIT, and the pressure is calculated by the CFD model in ANSYS.FLUENT. Figure 6 shows the CSM mesh of the blade built in ANSYS.MULTIPHYSICS. The fluid-structure interface surface is marked in Figures 5 and 6. There are 14,652 fluid nodes of the CFD mesh and 5,791 structure nodes of the CSM mesh at the fluid-structure interface surface. Under the CFD pressure, the blade deflects. To calculate the deflection of the blade, the CFD pressure is transmitted to the CSM mesh as a load. As the CFD mesh differs from the CSM mesh at the interface, the pressure needs to be interpolated from the fluid nodes to the structure nodes at the blade interface. This is performed by our interpolation method of unfolding the interface by manifold learning algorithms, discussed in Section 2.
All the manifold learning algorithms tested in Section 3 are used to unfold the 3D blade interface surface. As the blade interface surface is isomorphic to a full cylindrical surface, none of the manifold learning algorithms can unfold the blade interface directly. So, the blade interface surface is divided into two parts by a cutting plane, as shown in Figures 5 and 6. This cutting plane is selected to break the closed loop of the blade interface surface along the spanwise direction. The fluid nodes and the structure nodes of the interface are then divided into two groups. Figure 7 shows the fluid and the structure nodes of the left group; Figure 8 shows the right group. The fluid nodes and the structure nodes are combined and projected to a plane all at once. Figure 7(a) shows the combined fluid and structure nodes of the left group, and Figure 8(a) those of the right group. The nodes of Figures 7(a) and 8(a) can be separated again: Figures 7(b) and (c) show the fluid and the structure nodes of the left group, and Figures 8(b) and (c) those of the right group. The nodes of Figures 7 and 8 are colored according to the type of mesh, fluid or structure.
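The splitting step can be sketched as follows; the node array and cutting plane here are illustrative assumptions (a closed circular ring standing in for the blade interface), not the paper's mesh data:

```python
import numpy as np

def split_by_plane(nodes, normal, offset=0.0):
    """Split an (N, 3) node array into two groups by the plane normal . x = offset."""
    side = nodes @ np.asarray(normal, dtype=float) >= offset
    return nodes[side], nodes[~side]

# Stand-in for the closed blade interface: one ring of a cylindrical surface.
theta = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
ring = np.column_stack([np.cos(theta), np.sin(theta), np.zeros_like(theta)])

# Break the closed loop with a cutting plane through the axis (x = 0 here).
left, right = split_by_plane(ring, normal=[1.0, 0.0, 0.0])
print(len(left), len(right))  # the two groups together cover all 200 nodes
```

In the paper the same split is applied to the combined fluid and structure nodes, so that each group is unfolded with both node sets at once.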
4.1 Unfolding of the blade interface surface
The fluid nodes and the structure nodes of Figures 7(a) and 8(a) are projected to a plane all at once. Figure 9 shows the combined nodes of the left group and the right group again, colored to show their topological relationships. The nodes are projected into 2D space by the manifold learning algorithms, as shown in Figure 10. Unfolding is performed by the MATLAB toolbox developed by van der Maaten et al. (2009). The parameters of the algorithms are adjusted carefully to ensure the best projection for this case.
Figure 10(a) shows the points projected by SPE. From the figure, it can be seen that there is serious overlap. SPE adjusts the coordinates of the points according to the inter-point Euclidean distances. From Figure 9, it can be seen that the inter-point Euclidean distances of most points are much smaller than their inter-point geodesic distances, so there is overlap. The same holds for DM (Figure 10(b)). DM's parameter [alpha] is set to 1.
Figure 10(c) shows the points projected by MDS. The method judges the dissimilarity between the projection and the original surface by a loss function, which weakens the requirement on the inter-point distances, so MDS should perform better. However, MDS also projects points according to Euclidean distances and, as Figure 9 shows, the inter-point Euclidean distances of most points are much smaller than their inter-point geodesic distances. So, there is serious overlap in the figure.
Figure 10(d) shows the points projected by KPCA. KPCA depends on the selection of the kernel and the projection space. In this test case, a Gaussian kernel is used, and the result is just like that of MDS.
Figure 10(e) shows the points projected by LE. The result is similar to that of the example of the 3/4 cylindrical surface: there are slight overlaps at the edges. LE judges similarity by the inter-point distances of the neighborhood points, not by the inter-point distances of all points, so the projected plane is globally correct. The points of the boundary edges have no constraints on one side, so there are errors at the edges. The parameter, the number of nearest neighbors k, is 8.
Figure 10(f) shows the points projected by SM. From the figure, it can be seen that there are slight overlaps at the four edges and several isolated points. SM judges dissimilarities between inter-point distances by an error function, which weakens the requirement on the inter-point distances. So, the projected plane is globally correct, with errors at the edges and isolated points. The isolated points are induced by the initial condition and the iteration number; increasing the iteration number may improve the result. In this test case, SM's relative tolerance of the objective function is 1e-9 and the maximum iteration number is 5,000.
Figure 10(g) shows the points projected by ISOMAP. The parameter, the number of nearest neighbors k, is 8. ISOMAP rebuilds the inter-point relationship in the high-dimensional space and then performs the projection according to this relationship, so the relationship between the projected points is the same as that in the high dimension. The same is true for LISOMAP (Figure 10(h)), for which the number of nearest neighbors k is 6 and the percentage of landmark data points, [lambda], is 1 percent.
Figure 10(i) shows the points projected by LLE. LLE judges dissimilarities by the local inter-point distances of the neighborhood, so the projected points are correct. The parameter, the number of nearest neighbors k, is 5.
Figure 10(j) shows the points projected by HLLE. HLLE projects according to the local tangent plane coordinate system of the points, so the projected points are globally correct and uniformly distributed. The number of nearest neighbors, k, is set to 7.
Figure 10(k) shows the points projected by LTSA. LTSA projects by aligning the neighborhood tangent planes of the points, so the projected points are similar to those of HLLE: globally correct and uniformly distributed. The number of nearest neighbors, k, is 5.
Figure 10(l) shows the points projected by Autoencoder. The result is a line projection of the points. Autoencoder is based on nonlinear model identification, which remains a difficult problem in the related research area.
Figure 10(m) shows the points projected by t-SNE. The result is not good. Similar to SM, t-SNE adjusts the projected points by iteration, and the isolated points and errors are induced by the initial condition and the iteration number; increasing the iteration number may improve the result. In this example, we set the perplexity of the Gaussian kernel to 30 and the iteration number to 5,000.
Generally, the result is similar to that of the example of unfolding the 3/4 cylindrical surface: the algorithms using neighborhood or local inter-point relationships are feasible. ISOMAP, LISOMAP, LLE, HLLE and LTSA are feasible for unfolding the blade interface surface, and the nodes unfolded by these five methods will be used in the pressure interpolation of the blade interface surface below.
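As a rough cross-check of the five feasible algorithms (not the paper's MATLAB code), scikit-learn exposes ISOMAP directly and LLE, HLLE and LTSA as variants of LocallyLinearEmbedding; it has no landmark ISOMAP, so LISOMAP is omitted in this sketch:

```python
import numpy as np
from sklearn.manifold import Isomap, LocallyLinearEmbedding

# Small 3/4 cylindrical test surface, as in Section 3.
T, Z = np.meshgrid(np.linspace(0.0, 1.5 * np.pi, 40), np.linspace(0.0, 2.0, 20))
X = np.column_stack([np.cos(T).ravel(), np.sin(T).ravel(), Z.ravel()])

algorithms = {
    "ISOMAP": Isomap(n_neighbors=8, n_components=2),
    "LLE": LocallyLinearEmbedding(n_neighbors=6, n_components=2),
    # HLLE needs n_neighbors > n_components * (n_components + 3) / 2.
    "HLLE": LocallyLinearEmbedding(n_neighbors=12, n_components=2, method="hessian"),
    "LTSA": LocallyLinearEmbedding(n_neighbors=12, n_components=2, method="ltsa"),
}
results = {name: alg.fit_transform(X) for name, alg in algorithms.items()}
for name, Y in results.items():
    print(name, Y.shape)  # each algorithm returns an (800, 2) planar embedding
```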
4.2 Efficiency of manifold learning algorithms
Another factor influencing our data interpolation method is the time used in the surface unfolding process. The efficiency of the manifold learning algorithms is compared by the time used in unfolding the interface of the blade. Table I shows the unfolding times of the five feasible manifold learning algorithms; each time is the total time for unfolding the two parts of the blade interface. It can be seen that LLE, HLLE and LTSA use just 25-36 seconds, whereas ISOMAP uses 8.09 hours and LISOMAP 89.47 minutes. So, LLE, HLLE and LTSA are efficient in this case.
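A timing harness of this kind can be sketched with time.perf_counter; the algorithms and the small synthetic surface below are stand-ins (scikit-learn, not the paper's MATLAB toolbox), and the absolute times depend entirely on the machine, so only the ordering is meaningful:

```python
import time
import numpy as np
from sklearn.manifold import Isomap, LocallyLinearEmbedding

T, Z = np.meshgrid(np.linspace(0.0, 1.5 * np.pi, 40), np.linspace(0.0, 2.0, 20))
X = np.column_stack([np.cos(T).ravel(), np.sin(T).ravel(), Z.ravel()])

timings = {}
for name, alg in [
    ("ISOMAP", Isomap(n_neighbors=8, n_components=2)),
    ("LLE", LocallyLinearEmbedding(n_neighbors=6, n_components=2)),
    ("LTSA", LocallyLinearEmbedding(n_neighbors=12, n_components=2, method="ltsa")),
]:
    t0 = time.perf_counter()
    alg.fit_transform(X)  # time the unfolding step only
    timings[name] = time.perf_counter() - t0

for name, t in sorted(timings.items(), key=lambda kv: kv[1]):
    print(f"{name}: {t:.3f} s")
```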
4.3 Pressure interpolation of blade interface surface
The pressure of the blade interface is interpolated from the fluid nodes (Figure 11) to the structure nodes by our interpolation method. As SPE, DM, MDS, KPCA, LE, SM, Autoencoder and t-SNE failed in the test case, only the projection results of ISOMAP, LISOMAP, LLE, HLLE and LTSA are used in the pressure interpolation.
Figure 12 shows the projected nodes, marked by node type: the fluid nodes are marked in pink and the structure nodes in blue. Figures 12(a)-(e) show all the projected fluid and structure nodes, the projected fluid nodes and the projected structure nodes for ISOMAP, LISOMAP, LLE, HLLE and LTSA, respectively.
Figure 13 shows the fluid pressure of the projected nodes in the planar space: Figures 13(a)-(e) correspond to ISOMAP, LISOMAP, LLE, HLLE and LTSA, respectively. This pressure will be interpolated to the projected structure nodes of Figure 12.
The fluid pressure of the blade is interpolated to the structure nodes in 2D space for the left and the right groups, respectively, and the resulting structure pressures of the two groups are combined as the final interpolation result.
The third factor affecting our data interpolation method is accuracy. The same polynomial least squares method in 2D space is used as the interpolation method for all projected nodes. The polynomial function is p = au^2 + bv^2 + cuv + du + ev + f, where u and v are the coordinates of nodes in 2D space and a, b, c, d, e and f are the coefficients of the polynomial, fitted by the least squares method. The data interpolation in 2D space follows the steps of Li et al. (2007, 2015):
Step 1. For a structure node X_j(u_j, v_j) in the planar space, j = 1, 2, ..., n, select the l nearest fluid nodes a_1, a_2, ..., a_l (a_i = (u_i, v_i), i = 1, 2, ..., l). The pressures of the fluid nodes are p_1, p_2, ..., p_l.
Step 2. Let:

A = [ u_1^2  v_1^2  u_1 v_1  u_1  v_1  1
      u_2^2  v_2^2  u_2 v_2  u_2  v_2  1
       ...    ...     ...    ...  ...  ...
      u_l^2  v_l^2  u_l v_l  u_l  v_l  1 ],

B = [a, b, c, d, e, f]^T, and P = [p_1, p_2, ..., p_l]^T.
Step 3. The coefficient matrix B of the polynomial function is solved by the least squares method, B = (A^T A)^(-1) A^T P.
Step 4. Calculate the pressure value of the selected structure node X_j(u_j, v_j) by the polynomial function p = a u_j^2 + b v_j^2 + c u_j v_j + d u_j + e v_j + f.
Step 5. Repeat Steps 1-4 for every node to get the pressure.
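Steps 1-5 can be written compactly; the sketch below is an illustrative numpy version (the function name and test field are mine, not the paper's MATLAB code). It uses np.linalg.lstsq, which solves the same least squares problem as B = (A^T A)^(-1) A^T P but more stably:

```python
import numpy as np

def poly_ls_interp(fluid_uv, fluid_p, struct_uv, l=15):
    """Steps 1-5: local quadratic least squares interpolation in 2D,
    p = a*u^2 + b*v^2 + c*u*v + d*u + e*v + f."""
    out = np.empty(len(struct_uv))
    for j, x in enumerate(struct_uv):
        # Step 1: the l fluid nodes nearest to the structure node X_j.
        idx = np.argsort(np.linalg.norm(fluid_uv - x, axis=1))[:l]
        u, v = fluid_uv[idx, 0], fluid_uv[idx, 1]
        # Step 2: assemble the design matrix A.
        A = np.column_stack([u**2, v**2, u * v, u, v, np.ones(l)])
        # Step 3: fit the coefficient vector B by least squares.
        B, *_ = np.linalg.lstsq(A, fluid_p[idx], rcond=None)
        # Step 4: evaluate the polynomial at X_j.
        uj, vj = x
        out[j] = B @ np.array([uj**2, vj**2, uj * vj, uj, vj, 1.0])
    return out  # Step 5: one pressure value per structure node

# A quadratic pressure field is reproduced exactly by the quadratic fit.
rng = np.random.default_rng(1)
fluid_uv = rng.uniform(-1.0, 1.0, (400, 2))
field = lambda uv: 2 * uv[:, 0]**2 + 3 * uv[:, 1]**2 + uv[:, 0] * uv[:, 1] + 4.0
struct_uv = rng.uniform(-0.9, 0.9, (50, 2))
p_s = poly_ls_interp(fluid_uv, field(fluid_uv), struct_uv)
print(np.max(np.abs(p_s - field(struct_uv))))  # ~ 0, up to round-off
```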
The interpolation process is programmed in MATLAB and linked to the surface unfolding codes. With this program, the pressure is interpolated in 2D space. Figure 14 shows the interpolated pressure of the structure nodes projected by ISOMAP, LISOMAP, LLE, HLLE and LTSA, respectively; the results are shown for the left group and the right group separately. The combined pressure of all structure nodes is assigned to the nodes of the CSM model as a load.
Figure 15(a)-(e) shows the final CSM pressure transmitted by our interpolation method using the five manifold learning algorithms. Comparing these figures with Figure 11, it can be seen that the interpolated structure pressures agree closely with the fluid pressure. From this, it can be concluded that the proposed interpolation method is very accurate: each of the five manifold learning algorithms gives a very accurate result, and the choice among them has no large influence on the accuracy of our method. For comparison, the pressure is also transmitted by the traditional interpolation methods NN, 3D L2, the PROJECTION method and RBF; the transmitted results are shown in Figure 15(f)-(i).
To investigate the accuracy of the interpolation, the interpolation errors are calculated following the steps of Li et al. (2007, 2015):
Step 1. Interpolate the original CFD pressure of the fluid nodes, P_a, to the structure nodes by one of the interpolation methods above to get the interpolated pressure of the structure nodes, P_s.
Step 2. Interpolate the structure pressure P_s back to the fluid nodes by the same interpolation method to get the interpolated pressure of the fluid nodes, P'_a.
Step 3. The interpolation error is calculated from the difference between the interpolated and the original pressures of the fluid nodes. Since the error accumulates over the two interpolations, it is halved: the absolute error for each node is E_abs = |P_a - P'_a| / 2 and the relative error is E_rel = |P_a - P'_a| / (2|P_a|).
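Steps 1-3 of the error estimate can be sketched with any transfer operator; here a simple nearest-neighbor transfer stands in for the interpolation methods compared in the paper, and the node sets and pressure field are illustrative assumptions:

```python
import numpy as np

def nn_transfer(src_xyz, src_p, dst_xyz):
    """Nearest-neighbor transfer: each target node copies its closest source node."""
    d = np.linalg.norm(dst_xyz[:, None, :] - src_xyz[None, :, :], axis=2)
    return src_p[np.argmin(d, axis=1)]

rng = np.random.default_rng(2)
fluid = rng.uniform(0.0, 1.0, (300, 3))            # stand-in fluid interface nodes
struct = rng.uniform(0.0, 1.0, (120, 3))           # stand-in structure interface nodes
p_a = np.sin(fluid[:, 0]) + fluid[:, 1]**2 + 1.0   # original fluid pressure P_a

p_s = nn_transfer(fluid, p_a, struct)    # Step 1: fluid -> structure gives P_s
p_a2 = nn_transfer(struct, p_s, fluid)   # Step 2: structure -> fluid gives P'_a
e_abs = np.abs(p_a - p_a2) / 2.0         # Step 3: halve the doubled error
e_rel = e_abs / np.abs(p_a)
print(e_abs.max(), e_rel.max())
```

When the source and target nodes coincide, the round trip is exact and both errors vanish, which is a useful sanity check on any transfer operator.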
Figures 16 and 17 show the absolute and the relative errors of the interpolations. From the figures, it can be seen that the largest errors occur at the leading and the trailing edges of the blade, where the curvature is large and the mesh is relatively coarse. The maximum errors differ among the interpolation methods using the five feasible manifold learning algorithms: the interpolation using LLE has the largest error, but the differences among our methods are not large. Compared with the traditional methods, the errors are smaller in general. At the leading and the trailing edges, the traditional methods give worse results than our method, and NN gives the worst interpolation result even with these fine CFD and CSM meshes.
Table II provides the interpolation errors of the traditional interpolation methods and of our methods based on surface unfolding by the manifold learning algorithms. As shown in the table, RBF is the most accurate of the traditional methods, with a relative error of 3.97 percent; NN is the worst, with a relative error of 14.97 percent. The projection method failed at three points, a problem well documented in the literature (Li et al., 2015; Smith et al., 2000; Jiao and Heath, 2004; Jaiman et al., 2006; Boer et al., 2007). The best of our methods are the ones using LISOMAP and HLLE, with a relative error of 2.31 percent; the worst is the one using LLE, with a relative error of 3.93 percent. But the differences among our methods are not large. These results can also be read from Figures 16 and 17. It can be concluded that the unfolding process improves the interpolation accuracy.
Figure 18 shows the influence of the node number l used in the interpolation on the errors of our methods. Figure 18(a) shows the influence on the maximum absolute error; Figure 18(b) shows the influence on the maximum relative error. From Figure 18, it can be seen that:
1. the accuracy increases with the node number l ;
2. if the node number l is larger than 13, good accuracy can be achieved by our interpolation methods using any of the five manifold learning algorithms; and
3. if the node number l is larger than 9, good accuracy can be achieved by our interpolation method using ISOMAP, LISOMAP, HLLE and LTSA.
Generally, if the surface is properly unfolded, the influence of the manifold learning algorithm on the interpolation accuracy is small in our method; the time used in unfolding matters more. So, considering accuracy and efficiency, this paper suggests using LLE, HLLE and LTSA in surface unfolding. If the interpolation uses the polynomial least squares method in 2D space, the number of nodes used in the interpolation should be larger than 13.
5. Pressure interpolation of nozzle
The unfolding ability of the manifold learning algorithms and the accuracy of our interpolation method are further investigated by the pressure interpolation of a nozzle.
The pressure of the nozzle is interpolated from the nodes of the fluid mesh (Figure 19) to the nodes of the structure mesh (Figure 20) at the interface. There are 6,016 fluid nodes and 3,332 structure nodes at the interface, and the nodes are uniformly distributed. Figure 21 shows all the fluid and the structure nodes combined; the nodes are colored according to the type of mesh, fluid or structure.
5.1 Unfolding of the 1/4 nozzle interface
All interface nodes are colored to compare the topological structure in the high-dimensional space with that in the low-dimensional space (Figure 22(a)). Figure 22(b)-(f) shows the points projected by ISOMAP, LISOMAP, LLE, HLLE and LTSA; the parameters used by the algorithms are given in the figures. From these figures, it can be seen that the five algorithms are still feasible for unfolding the nozzle interface surface. However, the nodes projected by ISOMAP are not uniformly distributed in 2D space, which differs from the distribution of the nodes in 3D space. The other four algorithms perform well in this case.
Table III shows the times used in unfolding the 1/4 nozzle interface. The results are similar to those for the unfolding of the blade interface: LLE, HLLE and LTSA are more efficient than ISOMAP and LISOMAP.
5.2 Pressure interpolation of the nozzle using the surface unfolding method
The fluid pressure of the nozzle interface is shown in Figure 23. The fluid pressure is interpolated to the structure nodes following the same steps as in Sections 2 and 4. Figure 24 shows the interpolated structure pressure; comparing it with Figure 23, it can be seen that the interpolated pressures agree closely with the pressure of the fluid nodes. For comparison, the pressure is also transmitted by the traditional methods NN, 3D L2, the PROJECTION method and RBF; these interpolated structure pressures also agree with the fluid pressure.
The interpolation errors are estimated by the method in Section 4.3. Figure 25 shows the absolute errors and Figure 26 the relative errors. From Figures 25 and 26, it can be seen that the large errors occur at the nozzle throat, where the pressure varies strongly and the curvature is large. Table IV provides the maximum errors of the four traditional methods and of our methods. The most accurate of the traditional interpolation methods is RBF, with a relative error of 3.11 percent; the interpolation by NN is the worst, with a relative error of 5.16 percent. The most accurate of our interpolation methods is the one unfolding the surface by LISOMAP, with a relative error of 2.14 percent. All of our methods are more accurate than the traditional methods, and their accuracy does not vary much with the choice of manifold learning algorithm. So, the influence of the manifold learning algorithm on the interpolation accuracy is small in our method. From this, we can conclude that our interpolation method based on unfolding the surface gives more accurate results and is more stable.
Figure 27 shows the influence of the node number l used in the interpolation on the errors of our method. Figure 27(a) shows the influence on the maximum absolute error; Figure 27(b) shows the influence on the maximum relative error. From Figure 27, it can be seen that:
1. the accuracy increases with the node number l ; and
2. if the node number l is larger than 12, good accuracy can be achieved by our interpolation method with any of the five manifold learning algorithms.
As the influence of the manifold learning algorithm on the interpolation accuracy is small in our method, the time used in unfolding matters more. So, considering accuracy and efficiency, this paper suggests using LLE, HLLE and LTSA in surface unfolding. If the interpolation uses the polynomial least squares method, the number of nodes used in the interpolation should be larger than 12.
6. Conclusion
In our previous work, an FSI data interpolation method was proposed based on curved surface unfolding. The key intuition of the method is that any 3D surface possesses a 2D intrinsic structure. Instead of transferring data in 3D space directly, our method finds 2D projections of the 3D curved surfaces and interpolates the data in the 2D projections.
In this paper, the surface unfolding process is compared with manifold learning, and we suggest using manifold learning algorithms in the surface unfolding step of the interpolation. The surface unfolding ability of several manifold learning algorithms is investigated by the examples of a cylindrical surface, a blade FSI interface and a nozzle FSI interface. From this, we found that five manifold learning algorithms, ISOMAP, LISOMAP, LLE, HLLE and LTSA, can be used in our FSI problems directly, of which ISOMAP and LISOMAP are time-consuming.
The influence of the five manifold learning algorithms on the accuracy of the interpolation is also investigated. It is found that this influence is small in our interpolation method: we can use any manifold learning algorithm that unfolds the interface surface properly. Considering accuracy and efficiency, this paper suggests using LLE, HLLE and LTSA in surface unfolding.
For comparison, the pressures are also transmitted by four traditional methods: NN, L2 in 3D space, the PROJECTION method and RBF. Compared with the traditional interpolation methods, our data interpolation method based on surface unfolding is more accurate.
Our interpolation method is discussed through examples of FSI problems; however, the method can equally be applied in geological exploration, computer vision and image processing, mineral exploration and resource prediction.
[Figures and tables omitted: See PDF]
References
Agrafiotis, D.K., Xu, H., Zhu, F., Bandyopadhyay, D. and Liu, P., (2010), "Stochastic proximity embedding: methods and applications", Molecular Informatics, Vol. 29 no. 11, pp. 758-770.
Appa, K., (1989), "Finite-surface spline", Journal of Aircraft, Vol. 26 no. 5, pp. 495-496.
Belkin, M. and Niyogi, P., (2001), "Laplacian eigenmaps and spectral techniques for embedding and clustering", in Becker, S., Thrun, S. and Obermayer, K., (Eds), Advances in Neural Information Processing System 14 (NIPS 2001), Neural Information Processing Systems Foundation Inc., Vancouver, pp. 585-591.
Bishop, C.M., Svensén, M. and Williams, C.K.I., (1997), "GTM: the generative topographic mapping", Neural Computation, Vol. 10 no. 1, pp. 215-234.
Boer, A.D., Zuijlen, A.H.V. and Bijl, H., (2007), "Review of coupling methods for non-matching meshes", Computer Methods in Applied Mechanics & Engineering, Vol. 196 no. 8, pp. 1515-1525.
Chen, C., Zhang, L., Bu, J., Wang, C. and Chen, W., (2010), "Constrained Laplacian eigenmap for dimensionality reduction", Neurocomputing, Vol. 73 Nos 4-6, pp. 951-958.
Chen, K.K., Hung, C., Soong, B.W., Wu, H.M., Wu, Y.T. and Wang, P.S., (2014), "Data classification with modified density weighted distance measure for diffusion maps", Journal of Biosciences & Medicines, Vol. 2 no. 4, pp. 12-18.
Chen, L. and Buja, A., (2009), "Local multidimensional scaling for nonlinear dimension reduction, graph drawing, and proximity analysis", Journal of the American Statistical Association, Vol. 104 no. 485, pp. 209-219.
Coifman, R.R. and Lafon, S., (2006), "Diffusion maps", Applied and Computational Harmonic Analysis, Vol. 21 no. 1, pp. 5-30.
De Boer, A., Van Zuijlen, A.H. and Bijl, H., (2008), "Comparison of conservative and consistent approaches for the coupling of non-matching meshes", Computer Methods in Applied Mechanics and Engineering, Vol. 197 Nos 49-50, pp. 4284-4297.
Dong, S., Chen, L., Tang, B., Xu, X., Gao, Z. and Liu, J., (2015), "Rotating machine fault diagnosis based on optimal morphological filter and local tangent space alignment", Shock & Vibration, Vol. 2015, Article ID 893504, pp. 1-9.
Donoho, D.L. and Grimes, C., (2006), "Hessian eigenmaps: locally linear embedding techniques for high-dimensional data", Proceedings of the National Academy of Sciences of the United States of America, Vol. 100 no. 10, pp. 5591-5596.
Duchon, J., (2006), "Splines minimizing rotation-invariant semi-norms in Sobolev spaces", in Schempp, W. and Zeller, K., (Eds), Constructive Theory of Functions of Several Variables, Springer-Verlag, Oberwolfach, pp. 85-100.
Dybowski, R., Collins, T.D., Hall, W. and Weller, P., (1996), "Visualization of binary string convergence by Sammon mapping", in Fogel, L.J., Angeline, P.J. and Back, T., (Eds), Evolutionary programming V, Proceedings of the Fifth Annual Conference on Evolutionary Programming, MIT Press, San Diego, pp. 377-384.
Farrell, P.E., Piggott, M.D., Pain, C.C., Gorman, G.J. and Wilson, C.R., (2009), "Conservative interpolation between unstructured meshes via supermesh construction", Computer Methods in Applied Mechanics & Engineering, Vol. 198 Nos 33-36, pp. 2632-2642.
Gashler, M. and Ventura, D., (2007), "Iterative non-linear dimensionality reduction with manifold sculpting", in Platt, J.C., Koller, D., Singer, Y. and Roweis, S.T., (Eds), Advances in Neural Information Processing Systems 20 (NIPS 2007), MIT Press, Vancouver, pp. 513-520.
Gashler, M., Ventura, D. and Martinez, T., (2011), "Manifold learning by graduated optimization", IEEE Transactions on Systems Man & Cybernetics Part B, Vol. 41 no. 6, pp. 1458-1470.
Goura, G., Badcock, K.J., Woodgate, M.A. and Richards, B.E., (2001), "A data exchange method for fluid-structure interaction problems", Aeronautical Journal, Vol. 105 no. 1, pp. 215-221.
Harder, R.L. and Desmarais, R.N., (1972), "Interpolation using surface splines", Journal of Aircraft, Vol. 9 no. 2, pp. 189-191.
Hardy, R.L., (1971), "Multiquadric equations of topography and other irregular surfaces", Journal of Geophysical Research Atmospheres, Vol. 76 no. 8, pp. 1905-1915.
Hinton, G.E. and Salakhutdinov, R.R., (2006), "Reducing the dimensionality of data with neural networks", Science, Vol. 313 no. 5786, pp. 504-511.
Horvath, D., Ulicny, J. and Brutovsky, B., (2016), "Self-organised manifold learning and heuristic charting via adaptive metrics", Connection Science, Vol. 18 no. 1, pp. 1-26.
Jaiman, R.K., Jiao, X., Geubelle, P.H. and Loth, E., (2006), "Conservative load transfer along curved fluid-solid surface with non-matching meshes", Journal of Computational Physics, Vol. 218 no. 1, pp. 372-397.
Jiao, X. and Heath, M.T., (2004), "Common-refinement-based data transfer between non-matching meshes in multiphysics simulations", International Journal for Numerical Methods in Engineering, Vol. 61 no. 14, pp. 2402-2427.
Kohonen, T., (1990), "Self-organizing map", Proceedings of the IEEE, Vol. 78 no. 9, pp. 1464-1480.
van der Maaten, L. and Hinton, G., (2008), "Visualizing data using t-SNE", Journal of Machine Learning Research, Vol. 9 no. 11, pp. 2579-2605.
Lawrence, N.D., (2012), "A unifying probabilistic perspective for spectral dimensionality reduction: insights and new models", Journal of Machine Learning Research, Vol. 13 no. 1, pp. 1609-1638.
Lee, J.A. and Verleysen, M., (2007), "Nonlinear dimensionality reduction", Springer.
Lee, J.A., Lendasse, A. and Verleysen, M., (2002), "Curvilinear distance analysis versus isomap", in Verleysen, M., (Ed.), Proceedings of ESANN'2000, Eighth European Symposium on Artificial Neural Networks, D-Facto Publications, pp. 13-20.
Lennon, M., Mercier, G., Mouchot, M. and Hubert-Moy, L., (2001), "Curvilinear component analysis for nonlinear dimensionality reduction of hyperspectral images", in Serpico, S.B., (Ed.), SPIE Proceedings 4541: Image and Signal Processing for Remote Sensing VII, SPIE Press, Toulouse, pp. 157-168.
Lespinats, S., Fertil, B. and Villemain, P., (2009), "RankVisu: mapping from the neighborhood network", Neurocomputing, Vol. 72 Nos 13-15, pp. 2964-2978.
Lewandowski, M., Makris, D., Velastin, S.A. and Nebel, J.C., (2014), "Structural Laplacian eigenmaps for modeling sets of multivariate sequences", IEEE Transactions on Cybernetics, Vol. 44 no. 6, pp. 936-949.
Li, L.-z., Zhang, J., Zhao, J.-l. and Yue, Z.-f., (2015), "An enhanced 3D data transfer method for fluid-structure surface by ISOMAP nonlinear space dimension reduction", Advances in Engineering Software, Vol. 83 no. 5, pp. 19-30.
Li, L.Z., Lu, Z., Wang, J. and Yue, Z., (2007), "Turbine blade temperature transfer using the load surface method", Computer-Aided Design, Vol. 39 no. 6, pp. 494-505.
Li, S., Wang, Z. and Li, Y., (2013), "Using Laplacian eigenmap as heuristic information to solve nonlinear constraints defined on a graph and its application in distributed range-free localization of wireless sensor networks", Neural Processing Letters, Vol. 37 no. 37, pp. 411-424.
Lowe, D. and Tipping, M.E., (1999), "Neuroscale: novel topographic feature extraction with radial basis function networks", in Mozer, M., Jordan, M. and Petsche, T., (Eds), Advances in Neural Information Processing Systems 9, MIT Press, Cambridge, pp. 543-549.
Ma, Y., Niyogi, P., Sapiro, G. and Vidal, R., (2011), "Dimensionality reduction via subspace and submanifold learning", IEEE Signal Processing Magazine, Vol. 28 no. 2, pp. 14-126.
Nadler, B., Lafon, S., Coifman, R.R. and Kevrekidis, I.G., (2005), "Diffusion maps, spectral clustering and reaction coordinates of dynamical systems", Applied & Computational Harmonic Analysis, Vol. 21 no. 1, pp. 113-127.
Pawliczek, P. and Dzwinel, W., (2010), "Parallel implementation of multidimensional scaling algorithm based on particle dynamics", in Wyrzykowski, R., Dongarra, J., Karczewski, K. and Wasniewski, J., (Eds), 8th International Conference, PPAM 2009, Springer, Wroclaw, pp. 312-321.
Rosman, G., Bronstein, A.M., Bronstein, M.M. and Kimmel, R., (2006), "Manifold analysis by topologically constrained isometric embedding", International Journal of Computer Vision, Vol. 89 no. 1, pp. 56-68.
Roweis, S.T. and Saul, L.K., (2000), "Nonlinear dimensionality reduction by locally linear embedding", Science, Vol. 290 no. 5500, pp. 2323-2326.
Sammon, J.W., (1969), "A nonlinear mapping for data structure analysis", IEEE Transactions on Computers, Vol. 18 no. 5, pp. 401-409.
Zhang, Z. and Wang, J., (2006), "MLLE: modified locally linear embedding using multiple weights", in Schölkopf, B., Platt, J.C. and Hoffman, T., (Eds), Advances in Neural Information Processing Systems 19, (NIPS 2006), Neural Information Processing Systems Foundation Inc., Vancouver, pp. 1593-1600.
Shao, R., Hu, W., Wang, Y. and Qi, X., (2014), "The fault feature extraction and classification of gear using principal component analysis and kernel principal component analysis based on the wavelet packet transform", Measurement, Vol. 54 no. 8, pp. 118-132.
Silva, V.D. and Tenenbaum, J.B., (2003), "Global versus local methods in nonlinear dimensionality reduction", in Becker, S., Thrun, S. and Obermayer, K., (Eds), Advances in Neural Information Processing Systems, (NIPS 2002), Neural Information Processing Systems Foundation Inc., Vancouver, pp. 1959-1966.
Smith, M.J., Cesnik, C.E.S. and Hodges, D.H., (2000), "Evaluation of some data transfer algorithms for noncontiguous meshes", Journal of Aerospace Engineering, Vol. 13 no. 2, pp. 52-58.
Sun, W., Halevy, A., Benedetto, J.J., Czaja, W. and Liu, C., (2014), "UL-ISOMAP based nonlinear dimensionality reduction for hyperspectral imagery classification", ISPRS Journal of Photogrammetry and Remote Sensing, Vol. 89 no. 2, pp. 25-36.
Tenenbaum, J.B., de Silva, V. and Langford, J.C., (2000), "A global geometric framework for nonlinear dimensionality reduction", Science, Vol. 290 no. 5500, pp. 2319-2322.
van der Maaten, L.J.P., Postma, E.O. and van den Herik, H.J., (2009), "Dimensionality reduction: a comparative review", Journal of Machine Learning Research, Vol. 10 no. 1, pp. 1-41.
Vanrell, M., Vitrià, J. and Roca, X., (1997), "A multidimensional scaling approach to explore the behavior of a texture perception algorithm", Machine Vision and Applications, Vol. 9 Nos 5-6, pp. 262-271.
Wan, X., Wang, D., Tse, P.W., Xu, G. and Zhang, Q., (2016), "A critical study of different dimensionality reduction methods for gear crack degradation assessment under different operating conditions", Measurement, Vol. 78, pp. 138-150.
Wang, C., Krafft, P. and Mahadevan, S., (2011), "Manifold alignment: theory and applications", in Ma, Y. and Fu, Y., (Eds), Manifold Learning: Theory and Applications, CRC Press, Boca Raton, FL.
Wang, G., Luo, J., He, Y. and Mahadevan, S., (2015), "Fault diagnosis of supervision and homogenization distance based on local linear embedding algorithm", Mathematical Problems in Engineering, Vol. 2015 no. 11, pp. 1-8.
Wang, J., (2008), "Improve local tangent space alignment using various dimensional local coordinates", Neurocomputing, Vol. 71 Nos 16-18, pp. 3575-3581.
Wang, J., He, H. and Prokhorov, D.V., (2012), "A folded neural network autoencoder for dimensionality reduction", Procedia Computer Science, Vol. 13 no. 6, pp. 120-127.
Wang, S., Yang, H. and Li, H., (2013), "Facial expression recognition based on incremental isomap with expression weighted distance", Journal of Computers, Vol. 8 no. 8, pp. 2051-2058.
Weinberger, K.Q. and Saul, L.K., (2006), "An introduction to nonlinear dimensionality reduction by maximum variance unfolding", in Hendler, J., (Ed.), AAAI Conference on Artificial Intelligence, AAAI Press, Boston, MA, pp. 1683-1686.
Zhong, M. and Qin, H., (2016), "Surface inpainting with sparsity constraints", Computer Aided Geometric Design, Vol. 41 no. 1, pp. 23-35.
Zong-Min, W.U., (2002), "Radial basis function scattered data interpolation and the meshless method of numerical solution of PDEs", Chinese Journal of Engineering Mathematics, Vol. 19 no. 2, pp. 1-12.
Further reading
Available at: https://en.wikipedia.org/wiki/Nonlinear_dimensionality_reduction.
Available at: http://blog.sina.com.cn/s/blog_82a927880102v2ua.html.
Ming-min Liu: School of Mechatronic Engineering, North University of China, Taiyuan, China
L.Z. Li: School of Mechatronic Engineering, North University of China, Taiyuan, China
Jun Zhang: Department of Mathematics, North University of China, Taiyuan, China
© Emerald Publishing Limited 2017
