1. Introduction
In this paper, we introduce the hierarchical radial basis functions method for the approximation of Sobolev functions and the collocation of well-posed linear partial differential equations. Hierarchical bases are a common concept in the finite element literature, formed by decomposing finite element spaces (see the early paper by Yserentant, 1986 [1]). The hierarchical radial basis functions network is a concept from neural networks: an approximating neural model that is a self-organizing (growing) multiscale version of a radial basis function network. In this paper, hierarchical radial basis functions (H-RBFs) refer to basis functions constructed by combining a hierarchical data structure with scaled compactly supported radial basis functions. Below, we explain why hierarchical radial basis functions are needed.
Since the first kernel-based collocation method was introduced by Kansa [2], radial basis functions have been successfully applied to the numerical solution of various partial differential equations. Unfortunately, highly accurate solutions come at the price of severely ill-conditioned linear systems and high computational cost when using radial basis functions collocation methods. This is the well-known uncertainty or trade-off principle [3]. To overcome this problem, different strategies have been suggested; see the summary of existing methods in the recent monograph [4]. Three kinds of methods are popular for dealing with this problem.
The first method is finding an optimal shape parameter ε (which is related to the distribution of the scattered centers, and is usually inversely proportional to the mesh norm). We have become accustomed to scaling a radial function by multiplying the independent variable by a shape parameter ε in practical approximation. Obviously, a smaller value of ε causes the function to become flatter, whereas increasing ε leads to a more peaked radial function, and therefore localizes its influence. The choice of ε has a profound influence on both the approximation accuracy and the numerical stability of the solution to the interpolation problem. The general observation is that a large ε leads to a very well-conditioned linear system but a poor approximation rate, whereas a smaller ε yields excellent approximation at the price of a badly conditioned system. A number of strategies for choosing a "good" value of ε have been suggested, such as the power function as indicator, the cross validation algorithm, and the Contour–Padé algorithm. Some discussion of the existing parametrization schemes is provided in the books [4,5]. Madych [6] indicated that the interpolation error goes to zero as ε → 0, but this does not seem to be true in practice. Beyond that, there is no case known where the error and the sensitivity are both reasonably small.
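This trade-off is easy to observe numerically. The following sketch is our own illustration, not taken from the paper: it uses the Gaussian kernel exp(−(εr)²) (an assumption; the paper works with compactly supported kernels) on 15 equally spaced 1D sites and compares the condition numbers of the interpolation matrix for a flat and a peaked choice of ε:

```python
import numpy as np

def gaussian_matrix(x, centers, eps):
    # Gaussian RBF interpolation matrix: A_ij = exp(-(eps * |x_i - c_j|)^2)
    r = np.abs(x[:, None] - centers[None, :])
    return np.exp(-(eps * r) ** 2)

x = np.linspace(0.0, 1.0, 15)        # data sites, also used as centers
conds = {}
for eps in (0.1, 10.0):              # flat vs. peaked shape parameter
    A = gaussian_matrix(x, x, eps)
    conds[eps] = np.linalg.cond(A)

# the flat kernel (small eps) gives a far worse-conditioned system,
# matching the trade-off described above
```

In line with the trade-off principle, the flat choice ε = 0.1 produces a condition number many orders of magnitude larger than the peaked choice ε = 10.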
The second method is the utilization of compactly supported radial basis functions (CSRBFs). Since 1995, a series of compactly supported radial basis functions have emerged, such as Wendland's functions [7], Wu's functions [8], Buhmann's functions [9], and others [5]. The compact support automatically ensures the strict positive definiteness of CSRBFs. However, an additional trade-off principle remains, now depending on the choice of the support size. Specifically, a small support leads to a well-conditioned system but poor approximation accuracy, whereas a larger support yields excellent accuracy at the price of an ill-conditioned system.
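As a concrete instance, Wendland's C² function φ(r) = (1 − r)₊⁴ (4r + 1), strictly positive definite on ℝ^d for d ≤ 3, can be sketched as follows (our illustration; the function itself is standard [7]):

```python
import numpy as np

def wendland_c2(r):
    """Wendland's compactly supported C^2 function phi(r) = (1-r)_+^4 (4r+1),
    supported on [0, 1]; strictly positive definite on R^d for d <= 3."""
    r = np.asarray(r, dtype=float)
    return np.where(r < 1.0, (1.0 - r) ** 4 * (4.0 * r + 1.0), 0.0)
```

Scaling the argument, φ(r/δ) has support radius δ, which is the support-size knob discussed above.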
The third method is stationary multilevel interpolation. With a stationary multiscale algorithm [10], the condition number of the discrete matrix can be kept relatively small, and the computation can be performed efficiently. In this method, the present problem (interpolation or a numerical PDE) is solved first on the coarsest level by one of the compactly supported radial basis functions with a larger support (usually scaling the size of the support with the fill distance). Then the residual is formed and computed on the next finer level by the same compactly supported radial basis function but with a smaller support. This process is repeated and stopped on the finest level. The final approximation is the sum of all the interpolants. For interpolation problems, the linear convergence order has been proved in Sobolev spaces on the sphere in [11], and on bounded domains in [12]. Applications of this algorithm to solving PDEs on spheres were proposed in [13], and on bounded domains in [14,15,16]. However, a series of global interpolation or approximation problems must be solved on different levels, although two kinds of local algorithms have been derived recently in [17,18].
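The residual-correction loop just described can be sketched in 1D as follows. This is our own Python stand-in, not the paper's code; the Wendland C² kernel, the initial support radius 0.5, and the halving schedule are illustrative assumptions:

```python
import numpy as np

def wendland(r):
    # Wendland's C^2 function (1 - r)_+^4 (4 r + 1)
    return np.where(r < 1.0, (1.0 - r) ** 4 * (4.0 * r + 1.0), 0.0)

def kernel(x, centers, support):
    # CSRBF kernel matrix with the given support radius
    return wendland(np.abs(x[:, None] - centers[None, :]) / support)

def multilevel_interp(levels, f, support0=0.5):
    """Stationary multilevel interpolation: solve on the coarsest level,
    then interpolate the residual on each finer level with halved support."""
    coeffs, cents, sups = [], [], []
    support = support0
    for X in levels:                      # nested point sets, coarse -> fine
        y = f(X)
        for c, Xc, s in zip(coeffs, cents, sups):
            y = y - kernel(X, Xc, s) @ c  # residual of current approximation
        coeffs.append(np.linalg.solve(kernel(X, X, support), y))
        cents.append(X)
        sups.append(support)
        support *= 0.5                    # support scales with fill distance
    def approx(x):
        # final approximation: the sum of all level interpolants
        out = np.zeros_like(x, dtype=float)
        for c, Xc, s in zip(coeffs, cents, sups):
            out = out + kernel(x, Xc, s) @ c
        return out
    return approx
```

By construction, the accumulated sum interpolates the data exactly on the finest point set, while each level only solves a well-conditioned sparse system.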
This paper considers solving the present problem on a single level. On the current scattered data set, the trial function is represented as a linear combination of hierarchical radial basis functions. The approximation method produces a sparse discrete algebraic system, because the hierarchical radial basis functions are derived from CSRBFs with different support radii. Compared with compactly supported radial basis functions approximation [7,8] and stationary multilevel approximation [11,12,13,14,15,16,17,18], the new method can solve the present problem on a single level with higher accuracy and lower computational cost. The effectiveness of the H-RBFs collocation method will be confirmed by several numerical observations.
2. H-RBFs Trial Spaces
In this section, we build the H-RBFs trial spaces and equip the discrete spaces with appropriate norms. In particular, we prove two norm equivalence theorems and describe a commuting diagram of function spaces. To avoid writing constants repeatedly, we use the notations ≲ and ≅: A ≲ B means A ≤ cB, and A ≅ B means c₁B ≤ A ≤ c₂B, where c, c₁, and c₂ are positive constants.
2.1. H-RBFs Trial Spaces
Let X = {x₁, …, x_N} be a finite point set in a domain Ω ⊆ ℝ^d. We define the commonly used trial discretization parameter (fill distance)
h_{X,Ω} = sup_{x∈Ω} min_{x_j∈X} ‖x − x_j‖₂,
which can be regarded as the radius of the largest empty ball that can be placed among the data sites X. To build H-RBFs trial spaces, we need nested point sets X₁ ⊂ X₂ ⊂ ⋯ ⊂ X_n ⊆ Ω with trial discretization parameters h₁ > h₂ > ⋯ > h_n. Let Y_k be the point set newly added in X_k; then we have Y₁ = X₁ and Y_k = X_k \ X_{k−1} for k = 2, …, n. Given a compactly supported radial function Φ, we can rescale it (by a scaling parameter ε, in the translation-invariant kernel-based case) as
Φ_ε(x, y) := ε^d Φ(ε(x − y)), x, y ∈ ℝ^d. (1)
To make the support radii of Φ_{ε_k} become smaller and smaller with the addition of new points, we select
ε_k = 2^{k−1} ε₁, k = 1, …, n. (2)
Let W_k := span{Φ_{ε_k}(·, y) : y ∈ Y_k}, with ε_k given by (2), for k = 1, …, n. Then we can build the H-RBFs trial spaces of the form
V_n := W_1 + W_2 + ⋯ + W_n = span{Φ_{ε_k}(·, y) : y ∈ Y_k, 1 ≤ k ≤ n},
which differs from the RBFs approximation space span{Φ_ε(·, x) : x ∈ X_n} built with a single scaling parameter. Here, we have written Φ_{ε_k} for the kernel (1) with ε = ε_k.
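A small 1D sketch of this construction (our illustration; the helper names, the Wendland C² kernel, and the initial support radius are assumptions): the level-k basis functions are centered only at the newly added points Y_k, with the support halved on each refinement, which is what makes the resulting system sparse.

```python
import numpy as np

def wendland(r):
    # Wendland's C^2 function (1 - r)_+^4 (4 r + 1)
    return np.where(r < 1.0, (1.0 - r) ** 4 * (4.0 * r + 1.0), 0.0)

def hrbf_basis_matrix(X_levels, x_eval, support0=0.5):
    """Evaluate the H-RBF basis at x_eval (1D): the level-k basis functions
    are kernels centered at the points newly added on level k, with support
    radius support0 / 2**k, i.e., halved on each refinement."""
    cols, seen = [], set()
    for k, X in enumerate(X_levels):
        new = np.array([p for p in X if p not in seen])   # Y_k: new centers
        seen.update(X.tolist())
        r = np.abs(x_eval[:, None] - new[None, :]) / (support0 / 2 ** k)
        cols.append(wendland(r))
    return np.hstack(cols)
```

The number of columns equals the total number of distinct centers across all levels, one basis function per center, each carrying its own level-dependent support radius.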
2.2. Norms of the Discrete Spaces
The following is a relation between several function spaces.
For a strictly positive definite kernel Φ, the native space N_Φ is the reproducing kernel Hilbert space with Φ as its reproducing kernel. It contains all functions of the form f = Σ_j c_j Φ(·, x_j), provided the following norm is finite, with
‖f‖²_{N_Φ} = Σ_j Σ_k c_j c_k Φ(x_j, x_k). (3)
When Φ ∈ C(ℝ^d) ∩ L₁(ℝ^d), we get a characterization of N_Φ(ℝ^d) in terms of Fourier transforms:
‖f‖²_{N_Φ(ℝ^d)} = (2π)^{−d/2} ∫_{ℝ^d} |f̂(ω)|² / Φ̂(ω) dω. (4)
Here, f̂ denotes the Fourier transform of f.
In fact, there exist such radial functions whose Fourier transforms decay only algebraically, satisfying
c₁ (1 + ‖ω‖₂²)^{−τ} ≤ Φ̂(ω) ≤ c₂ (1 + ‖ω‖₂²)^{−τ}, ω ∈ ℝ^d, (5)
for some τ > d/2.
Then, the scaled radial functions Φ_ε have Fourier transforms satisfying
c₁ (1 + ε^{−2}‖ω‖₂²)^{−τ} ≤ Φ̂_ε(ω) ≤ c₂ (1 + ε^{−2}‖ω‖₂²)^{−τ}, ω ∈ ℝ^d. (6)
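The passage from (5) to (6) is the standard dilation rule for Fourier transforms. Assuming the kernel is scaled as Φ_ε(x) = ε^d Φ(εx) (our notation for the translation-invariant case), a short derivation reads:

```latex
\widehat{\Phi_\varepsilon}(\omega)
  = \varepsilon^{d}\,\widehat{\Phi(\varepsilon\,\cdot)}(\omega)
  = \varepsilon^{d}\,\varepsilon^{-d}\,\widehat{\Phi}(\omega/\varepsilon)
  = \widehat{\Phi}(\omega/\varepsilon),
\qquad\text{hence}\qquad
c_1\bigl(1+\varepsilon^{-2}\|\omega\|_2^2\bigr)^{-\tau}
  \le \widehat{\Phi_\varepsilon}(\omega)
  \le c_2\bigl(1+\varepsilon^{-2}\|\omega\|_2^2\bigr)^{-\tau}.
```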
To clarify the relation between Sobolev spaces and native spaces, we cite the following extension operator theorems.
Theorem 1 (Sobolev extension operator; see Section 5.17 in [19]).
Suppose Ω ⊆ ℝ^d is open and has a Lipschitz boundary, and let τ ≥ 0. Then, for all f ∈ H^τ(Ω), there exists a linear operator E : H^τ(Ω) → H^τ(ℝ^d) such that
(1) Ef|_Ω = f,
(2) ‖Ef‖_{H^τ(ℝ^d)} ≲ ‖f‖_{H^τ(Ω)}.
Theorem 2 (Native space extension operator; see Section 10.7 in [20]).
Suppose Φ is a strictly positive definite kernel and Ω ⊆ ℝ^d. Then, each function f ∈ N_Φ(Ω) has a natural extension to a function Ef ∈ N_Φ(ℝ^d) such that
‖Ef‖_{N_Φ(ℝ^d)} = ‖f‖_{N_Φ(Ω)}. (7)
The main difference between these two theorems is that extensions of functions from Sobolev spaces impose a restriction on the region Ω, while extensions from native spaces are available for more general regions.
Then, we have the first of the following norm equivalence theorems.
Theorem 3. Suppose Φ is a strictly positive definite kernel satisfying (5) with τ > d/2, and Φ_ε is defined by (1) with ε ≥ 1. Then N_{Φ_ε}(ℝ^d) = H^τ(ℝ^d), and for every f ∈ H^τ(ℝ^d), we have
ε^{−τ} ‖f‖_{H^τ(ℝ^d)} ≲ ‖f‖_{N_{Φ_ε}(ℝ^d)} ≲ ‖f‖_{H^τ(ℝ^d)}.
Proof. Using the norm definition of the Sobolev space H^τ(ℝ^d), definition (4), and inequalities (5) and (6), we have
‖f‖²_{N_{Φ_ε}(ℝ^d)} ≅ ∫_{ℝ^d} |f̂(ω)|² (1 + ε^{−2}‖ω‖₂²)^τ dω ≲ ∫_{ℝ^d} |f̂(ω)|² (1 + ‖ω‖₂²)^τ dω ≅ ‖f‖²_{H^τ(ℝ^d)}.
By a similar scaling argument, the lower bound follows from 1 + ε^{−2}‖ω‖₂² ≥ ε^{−2}(1 + ‖ω‖₂²) for ε ≥ 1.
□
A similar norm equivalence theorem, with the inverse of ε as scaling parameter, can be found in [12]. If further assumptions are made on the boundary of Ω, the results of Theorem 3 remain true for H^τ(Ω). Then, we have another norm equivalence theorem.
Theorem 4. Suppose Φ is a strictly positive definite kernel satisfying (5) with τ > d/2, and Φ_ε is defined by (1) with ε ≥ 1. If Ω ⊆ ℝ^d is open and has a Lipschitz boundary, then N_{Φ_ε}(Ω) = H^τ(Ω) with equivalent norms, and, for every f ∈ H^τ(Ω), we have
ε^{−τ} ‖f‖_{H^τ(Ω)} ≲ ‖f‖_{N_{Φ_ε}(Ω)} ≲ ‖f‖_{H^τ(Ω)}.
Proof. Every f ∈ H^τ(Ω) has an extension Ef ∈ H^τ(ℝ^d). By Theorem 3, Ef ∈ N_{Φ_ε}(ℝ^d). Therefore, we have f = Ef|_Ω ∈ N_{Φ_ε}(Ω) and
‖f‖_{N_{Φ_ε}(Ω)} ≤ ‖Ef‖_{N_{Φ_ε}(ℝ^d)} ≲ ‖Ef‖_{H^τ(ℝ^d)} ≲ ‖f‖_{H^τ(Ω)}.
In addition, every f ∈ N_{Φ_ε}(Ω) has a norm-preserving extension Ef ∈ N_{Φ_ε}(ℝ^d) by Theorem 2. Thus, f ∈ H^τ(Ω) and
ε^{−τ} ‖f‖_{H^τ(Ω)} ≤ ε^{−τ} ‖Ef‖_{H^τ(ℝ^d)} ≲ ‖Ef‖_{N_{Φ_ε}(ℝ^d)} = ‖f‖_{N_{Φ_ε}(Ω)}.
□
We can understand these two extension operator theorems (Theorems 1 and 2) and two norm equivalence theorems (Theorems 3 and 4) using a diagram. More concretely, we work under the following conditions:
(c1) Ω ⊆ ℝ^d is open and has a Lipschitz boundary;
(c2) Φ is a strictly positive definite kernel satisfying (5) with τ > d/2;
(c3) Φ_ε is defined by (1) with ε ≥ 1;
(c4) ε_k = 2^{k−1} ε₁, k = 1, …, n.
Under conditions (c1)–(c4), N_{Φ_{ε_k}}(Ω) = H^τ(Ω) for every level k, and we can assemble the H-RBFs trial spaces with the H^τ(Ω) norm.
3. Interpolation via H-RBFs
In this section we discuss the scattered data interpolation with hierarchical radial basis functions.
Given a target function f ∈ H^τ(Ω), we can find an interpolant of the form
s_f = Σ_{k=1}^{n} Σ_{y∈Y_k} c_{k,y} Φ_{ε_k}(·, y). (8)
The coefficients c_{k,y} are found by enforcing the interpolation conditions
s_f(z) = f(z), z ∈ Z_k, k = 1, …, n, (9)
where we use Z_k as testing data on the k-th level. This may lead to an unsymmetric or even nonsquare discrete system. The linear system is solved in the least squares sense (MATLAB's '\' operator in our experiments). Computations were performed on a laptop with a 2.4 GHz Intel Core i7 processor, using MATLAB under the Windows 7 operating system.

First, we generate nested sets of 9, 25, 81, 289, 1089, and 4225 Halton points in the interior of the domain (blue points in Figure 1). Figure 1 displays the six data sets used in the experiments. We choose all blue points as centers for the basis functions, and use Wendland's compactly supported function to interpolate the 2D Franke function. Equally spaced evaluation points ξ₁, …, ξ_M are used to compute the RMS-error
RMS = ( (1/M) Σ_{i=1}^{M} (s_f(ξ_i) − f(ξ_i))² )^{1/2}.
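For reproducibility, Halton points and the RMS-error can be generated as in the following sketch (our helper names; the radical-inverse construction in bases 2 and 3 is the standard 2D Halton sequence):

```python
import numpy as np

def halton(n, base):
    """One Halton coordinate: the van der Corput sequence in the given base."""
    seq = np.zeros(n)
    for i in range(n):
        f, x, k = 1.0, 0.0, i + 1
        while k > 0:
            f /= base                  # shrink digit weight
            x += f * (k % base)        # radical-inverse digit
            k //= base
        seq[i] = x
    return seq

def halton2d(n):
    # standard 2D Halton points: bases 2 and 3
    return np.column_stack((halton(n, 2), halton(n, 3)))

def rms_error(approx, exact):
    # root-mean-square error over the evaluation points
    return np.sqrt(np.mean((approx - exact) ** 2))
```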
We now present four sets of interpolation experiments: nonstationary CSRBFs interpolation (scaling parameter fixed on all levels), stationary CSRBFs interpolation (with an initial scaling parameter whose value is doubled on every successive level), multilevel interpolation, and H-RBFs interpolation. For the first three methods, we let the testing data coincide with the centers, because the discrete matrices produced by these methods are nonsingular under this choice. However, nonsquare testing is necessary for the H-RBFs interpolation method. To do this simply, we use a denser set of Halton points to test (9) on each level. We list the numerical results (including RMS-error, convergence rates, and total CPU time) in Table 1, Table 2, Table 3 and Table 4.
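The nonsquare H-RBFs system and its least squares solution can be sketched in 1D as follows. This is a Python stand-in for the MATLAB '\' solve; the Wendland C² kernel, the two levels, and the support radii 0.5 and 0.25 are our illustrative assumptions:

```python
import numpy as np

def wendland(r):
    # Wendland's C^2 function (1 - r)_+^4 (4 r + 1)
    return np.where(r < 1.0, (1.0 - r) ** 4 * (4.0 * r + 1.0), 0.0)

# two nested 1D levels; the points new on level 2 get the smaller support
levels = [np.linspace(0, 1, 5), np.linspace(0, 1, 9)]
centers = np.concatenate([levels[0], np.setdiff1d(levels[1], levels[0])])
support = np.concatenate([np.full(5, 0.5), np.full(4, 0.25)])

x_test = np.linspace(0, 1, 33)            # denser testing data than centers
A = wendland(np.abs(x_test[:, None] - centers[None, :]) / support[None, :])

f = lambda x: np.sin(np.pi * x)           # illustrative target function
coef, *_ = np.linalg.lstsq(A, f(x_test), rcond=None)   # MATLAB-style '\'
resid = A @ coef - f(x_test)
```

The rectangular matrix A has one row per test point and one column per center, each column carrying its level's support radius.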
In the nonstationary case (Table 1), we observe convergence, although the precise rate is not obvious. However, the computation requires a lot of time because the matrices become increasingly dense. The stationary CSRBFs interpolation is also numerically stable, but there is essentially no convergence (see Table 2). Table 3 lists the numerical results for multilevel interpolation. The linear convergence behavior for interpolation of Sobolev functions has been proved by Wendland in [12]. The corresponding 3D multilevel experiment can be found in Fasshauer's book [5], and a 2D uniformly spaced data interpolation experiment is in [12]. From Table 3, we observe that the convergence seems to cease at a later stage for a given initial scaling parameter. Compared with nonstationary CSRBFs interpolation, the multilevel method saves much time but with limited accuracy.
The corresponding RMS-errors and observed convergence rates for the H-RBFs method are listed in Table 4. Several observations can be made from Table 4. The H-RBFs interpolation method is numerically stable and produces relatively small errors for both initial scaling parameters. The RMS-error on the 4225-point level is reduced remarkably. Compared with CSRBFs and multilevel interpolation, the present H-RBFs method solves problem (9) with higher accuracy and lower computational cost. The sparsity behavior of the H-RBFs interpolation matrices is displayed in Figure 2.
4. Collocation via H-RBFs
In this section, we discuss the implementation of the H-RBFs collocation method for linear partial differential equations.
4.1. Example 1
We consider the following Poisson problem with Dirichlet boundary conditions,
−Δu = f in Ω, u = g on ∂Ω. (10)
where Ω is the unit square shown in Figure 1. The exact solution of problem (10) is prescribed, and the data f and g are computed from it. We use the unsymmetric Kansa method to solve problem (10). Here, nonsquare testing is necessary, because the small trial space must be tested on a fine-grained space discretization according to Schaback's theory [21,22]. As always, the interior collocation data are taken to be the same as the centers. We create 4(N−1) additional equally spaced collocation points for the boundary conditions on each level (red points in Figure 1). That is, on each level, the centers only include the blue points, whereas the collocation sites contain the blue points in the interior of the domain and the red points on the boundary. Consequently, the collocation matrix becomes nonsquare, because the test side has more degrees of freedom than the trial side. In our experiments, Wendland's function is used to construct the trial spaces. We list the RMS-error and total CPU time (last row of each table) for the CSRBFs collocation method, the multilevel collocation method, and the H-RBFs collocation method in Table 5, Table 6, Table 7, Table 8, Table 9 and Table 10, respectively.
Table 5 shows the nonstationary CSRBFs collocation with two fixed scaling parameters. Table 6 shows the stationary setting with two different initial parameter values, each doubled on every successive level. We note the nonconvergence behavior of the stationary collocation method. Similar observations are reported in detail in Fasshauer's book [5], where the boundary centers were selected to coincide with the boundary collocation points.
Table 7 and Table 8 show the numerical results for the multilevel collocation method. Several observations can be made from these two tables. The multilevel collocation method is nonconvergent for the relatively large initial values 0.5, 0.25, and 0.125. With the initial parameter 0.0625, the method converges; however, the convergence seems to cease at a later stage. The multilevel collocation method does exhibit linear convergence if additional centers are placed on the boundary; the corresponding numerical experiments are shown in the book [5]. Generally, the bandwidth of the collocation matrix must be allowed to increase slowly from one level to the next (namely, the support radii must go to zero more slowly than the mesh norms) when solving linear partial differential equations by the multilevel collocation method. These phenomena were noted in Fasshauer's numerical observations in [23] and explained theoretically in [16]. From the above experiments (Table 7 and Table 8), we conclude that the multilevel collocation method performs poorly when the centers are placed only in the interior of the domain.
In Table 9 and Table 10, we list the RMS-error and CPU time for the hierarchical radial basis functions collocation method. We observe that the H-RBFs method has ideal convergence behavior. Even for the relatively large initial parameter 0.5, the convergence rate of the H-RBFs method is close to 2. With a smaller initial parameter, the RMS-error on the 4225-point level is reduced remarkably at a modest CPU cost. Compared with the CSRBFs and multilevel collocation methods, the present H-RBFs method solves the model problem with higher accuracy and lower computational cost. The sparsity behavior of the H-RBFs collocation matrices is displayed in Figure 3.
4.2. Example 2
In this subsection, we consider the following Helmholtz test problem,
−Δu + u = f in Ω, ∂u/∂n = g on ∂Ω, (11)
where n denotes the unit outward normal vector. The exact solution of problem (11) is prescribed, and the data f and g are computed from it. For this example, we use the same centers, collocation sites, and Wendland function as in Section 4.1. The RMS-error, total CPU time, and observed convergence rates are listed in Table 11 and Table 12, and the sparsity behavior of the H-RBFs collocation matrices for this problem is displayed in Figure 4. The convergence seems to cease at a later stage for the relatively large initial value 0.5. However, the H-RBFs method shows ideal convergence behavior and low computational cost, as in Example 1. We also display plots of the absolute error in Figure 5.
5. Conclusions
To handle the long-standing trade-off principle in RBF collocation methods, we derived the hierarchical radial basis functions (H-RBFs) collocation method in this paper. Based on nested scattered point sets, the H-RBFs trial spaces are constructed using scaled compactly supported radial basis functions with varying support radii. Several numerical observations were demonstrated. The experiments showed that the H-RBFs collocation method retains high accuracy at a lower computational cost when compared with the existing CSRBFs collocation method and the multilevel RBFs collocation method.
There are many possibilities for enhancement of this method:
(1) A convergence proof for the H-RBFs collocation method will depend on the approximation power of the H-RBFs trial spaces, a new inverse inequality (a frequently used inequality for the RBFs case is given in [24]), and a sampling theorem.
(2) This method can be used for solving well-posed nonlinear partial differential equations, and the convergence analysis of the hierarchical radial basis functions collocation method for nonlinear discretizations will depend on the Böhmer/Schaback theory [25].
Author Contributions
Conceptualization and Formal analysis, Z.L.; Methodology and Writing–original draft preparation, Q.X.
Funding
The research of the first author was partially supported by the Natural Science Foundation of Ningxia Province (No. 2019AAC02001), the Natural Science Foundation of China (No. 11501313), the China Postdoctoral Science Foundation (No. 2017M621343), and the Third Batch of Ningxia Youth Talents Supporting Program (No. TJGC2018037). The research of the second author was partially supported by the Natural Science Foundation of Ningxia Province (No. NZ2018AAC03026) and the Fourth Batch of Ningxia Youth Talents Supporting Program.
Acknowledgments
The authors are extremely grateful to Zongmin Wu from Fudan University, and would like to thank the two anonymous reviewers who made valuable comments on an earlier version of this paper.
Conflicts of Interest
The authors declare no conflicts of interest.
Figures and Tables
Figure 1. Six scattered data sets used in experiments: N interior Halton points (blue) and 4(N−1) equally spaced boundary collocation points (red).
Figure 2. Sparsity behavior of H-RBF interpolation matrices at levels 1, 2, 3, 4, 5, and 6.
Figure 3. Poisson problem: sparsity behavior of discrete matrices by H-RBFs collocation at levels 1, 2, 3, 4, 5, and 6.
Figure 4. Helmholtz problem: Sparsity behavior of discrete matrices by H-RBFs collocation at levels 1, 2, 3, 4, 5, and 6.
Figure 5. Final error graphs on level 6. Top left to bottom right: ε = 0.5, 0.25, 0.125, and 0.0625.
Compactly supported radial basis function (CSRBF) interpolation: nonstationary.
Centers | RMS-error | Rate | RMS-error | Rate
---|---|---|---|---
9 | 1.113967 | | 1.537637 |
25 | 6.950489 | 0.680520 | 8.362841 | 0.878650 |
81 | 7.783806 | 3.158567 | 8.430085 | 3.310374 |
289 | 4.849611 | 4.004535 | 3.210030 | 4.714889 |
1089 | 2.253331 | 4.427738 | 2.115728 | 3.923360 |
4225 | 1.128459 | 4.319634 | 3.572115 | 2.566304 |
4.42e+01(s) | 4.42e+01(s) |
CSRBF interpolation: stationary.
Centers | RMS-error | Rate | RMS-error | Rate
---|---|---|---|---
9 | 1.113967 | | 1.537637 |
25 | 4.205301 | 1.405425 | 6.950489 | 1.145528 |
81 | 2.288906 | 0.877551 | 9.036469 | 2.943283 |
289 | 2.330558 | −0.026018 | 5.143627 | 0.812973 |
1089 | 1.986107 | 0.230732 | 4.590088 | 0.164264 |
4225 | 2.010532 | −0.017633 | 4.395273 | 0.062569 |
1.46e+00(s) | 3.16e+00(s) |
Multilevel interpolation.
Centers | RMS-error (first initial ε) | Rate | RMS-error (second initial ε) | Rate
---|---|---|---|---
9 | 1.113967 | | 1.537637 |
25 | 5.321739 | 1.065737 | 7.271406 | 1.080408 |
81 | 8.346277 | 2.672693 | 5.457674 | 3.735876 |
289 | 3.142086 | 1.409410 | 9.231828 | 2.563598 |
1089 | 1.646392 | 0.932414 | 2.752895 | 1.745667 |
4225 | 9.882529 | 0.736356 | 1.053762 | 1.385401 |
2.08e+00(s) | 5.53e+00(s) |
Hierarchical radial basis function (H-RBF) interpolation.
Centers | RMS-error (first initial ε) | Rate | RMS-error (second initial ε) | Rate
---|---|---|---|---
9 | 8.715764 | | 9.975067 |
25 | 4.380040 | 0.992683 | 5.449353 | 0.872242 |
81 | 8.765413 | 2.321050 | 2.426089 | 1.167453 |
289 | 1.146738 | 2.934287 | 8.471348 | 4.839897 |
1089 | 8.960631 | 3.677791 | 8.592933 | 6.623297 |
4225 | 9.886575 | 3.180058 | 1.835490 | 2.226986 |
4.42e+00(s) | 9.97e+00(s) |
CSRBFs collocation: nonstationary.
Centers | RMS-error | Rate | RMS-error | Rate
---|---|---|---|---
9 | 5.154990 | | 2.312114 |
25 | 2.523773 | 1.030387 | 2.501198 | 3.208521 |
81 | 5.129004 | 2.298832 | 8.196424 | 1.609552 |
289 | 1.169028 | 2.133369 | 1.976536 | 2.052021 |
1089 | 9.400882 | 3.636370 | 2.150290 | 3.200371 |
4225 | 7.877153 | 3.577050 | 4.789840 | 2.166482 |
5.61e+01(s) | 5.59e+01(s) |
CSRBFs collocation: stationary.
Centers | RMS-error (first initial ε) | Rate | RMS-error (second initial ε) | Rate
---|---|---|---|---
9 | 5.154990 | | 2.312114 |
25 | 1.880555 | 1.454811 | 2.523773 | −0.126370 |
81 | 1.845714 | 0.026979 | 2.415339 | 0.063357 |
289 | 3.232111 | −0.808298 | 1.619474 | 0.576701 |
1089 | 4.385499 | −0.440264 | 2.008842 | −0.310839 |
4225 | 4.782622 | −0.125061 | 2.480243 | −0.304117 |
2.11e+00(s) | 6.14e+00(s) |
Multilevel collocation: with initial 0.5 and 0.25.
Centers | RMS-error (initial 0.5) | Rate | RMS-error (initial 0.25) | Rate
---|---|---|---|---
9 | 5.154990 | | 2.312114 |
25 | 5.196034 | −0.011441 | 1.827791 | 0.339111 |
81 | 5.185346 | 0.002970 | 1.886535 | −0.045637 |
289 | 5.185903 | −0.000155 | 1.890747 | −0.003218 |
1089 | 5.185993 | −0.000025 | 1.891628 | −0.000672 |
4225 | 5.186014 | −0.000006 | 1.891758 | −0.000099 |
3.19e+00(s) | 9.47e+00(s) |
Multilevel collocation: with initial 0.125 and 0.0625.
Centers | RMS-error (initial 0.125) | Rate | RMS-error (initial 0.0625) | Rate
---|---|---|---|---
9 | 3.345335 | | 2.877511 |
25 | 1.189874 | 1.491342 | 4.878530 | 2.560303 |
81 | 1.238221 | −0.057460 | 2.944292 | 0.728526 |
289 | 1.166663 | 0.085881 | 1.706502 | 0.786879 |
1089 | 1.158203 | 0.010500 | 7.585827 | 1.169664 |
4225 | 1.159396 | −0.001485 | 6.469642 | 0.229621 |
2.03e+01(s) | 4.21e+01(s) |
H-RBFs collocation of example 1: with initial 0.5 and 0.25.
Centers | RMS-error (initial 0.5) | Rate | RMS-error (initial 0.25) | Rate
---|---|---|---|---
9 | 5.154990 | | 2.312114 |
25 | 1.200069 | 2.102853 | 4.838983 | 2.256436 |
81 | 2.112357 | 2.506192 | 3.451755 | 3.809302 |
289 | 7.398191 | 1.513609 | 1.075014 | 5.004903 |
1089 | 2.440784 | 1.599828 | 5.835322 | 4.203399 |
4225 | 5.606710 | 2.122118 | 2.751613 | 4.406463 |
4.07e+00(s) | 9.69e+00(s) |
H-RBFs collocation of example 1: with initial 0.125 and 0.0625.
Centers | RMS-error (initial 0.125) | Rate | RMS-error (initial 0.0625) | Rate
---|---|---|---|---
9 | 3.345335 | | 2.877511 |
25 | 1.060521 | 4.979305 | 6.205372 | 5.535160 |
81 | 1.898035 | 2.482195 | 1.655041 | 1.906651 |
289 | 2.565084 | 6.209357 | 2.628040 | 5.976736 |
1089 | 6.264679 | 5.355622 | 4.950454 | 5.730282 |
4225 | 2.553269 | 4.616823 | 1.102677 | 5.488478 |
1.97e+01(s) | 3.76e+01(s) |
H-RBFs collocation of example 2: with initial 0.5 and 0.25.
Centers | RMS-error (initial 0.5) | Rate | RMS-error (initial 0.25) | Rate
---|---|---|---|---
9 | 3.117164 | | 1.719666 |
25 | 2.035525 | 0.614833 | 8.951139 | 0.941985 |
81 | 1.837566 | 0.147605 | 8.238622 | 3.441596 |
289 | 1.749727 | 0.070666 | 3.459194 | 1.251967 |
1089 | 1.744016 | 0.004717 | 1.833905 | 0.915517 |
4225 | 1.787914 | −0.035864 | 1.024535 | 4.161878 |
1.94e+00(s) | 5.48e+00(s) |
H-RBFs collocation of example 2: with initial 0.125 and 0.0625.
Centers | RMS-error (initial 0.125) | Rate | RMS-error (initial 0.0625) | Rate
---|---|---|---|---
9 | 5.473827 | | 3.289428 |
25 | 1.411404 | 5.277347 | 6.743427 | 2.286283 |
81 | 2.332565 | 2.597141 | 2.429136 | 4.794967 |
289 | 4.059524 | 2.522535 | 2.783964 | 3.125231 |
1089 | 1.776398 | 4.514284 | 1.239397 | 7.811358 |
4225 | 5.102425 | 5.121628 | 3.532572 | 5.132776 |
1.25e+01(s) | 2.62e+01(s) |
© 2019 by the authors.
Abstract
In this paper, we derive and discuss the hierarchical radial basis functions method for the approximation of Sobolev functions and the collocation of well-posed linear partial differential equations. Similar to the multilevel splitting of finite element spaces, the hierarchical radial basis functions are constructed by employing successively refined scattered data sets and scaled compactly supported radial basis functions with varying support radii. Compared with compactly supported radial basis functions approximation and stationary multilevel approximation, the new method can not only solve the present problem on a single level with higher accuracy and lower computational cost, but also produces a highly sparse discrete algebraic system. These observations are obtained by the direct approach of numerical experimentation.
Details
1 School of Mathematical Sciences, Fudan University, Shanghai 200433, China; School of Mathematics and Statistics, Ningxia University, Yinchuan 750021, China
2 School of Mathematics and Statistics, Ningxia University, Yinchuan 750021, China