Recommended by Norimichi Hirano
Department of Mathematics, Fu Jen Catholic University, New Taipei City 24205, Taiwan
Received 2 July 2012; Accepted 2 September 2012
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
1. Introduction
The theory of Gamma-convergence was introduced by De Giorgi in the 1970s. It has become a standard tool for the study of variational problems and one of the central topics in the calculus of variations. Gradient flows defined on metric spaces were also considered by De Giorgi: the notion replacing "gradient flow in a differentiable structure" is that of "curve of maximal slope in a metric space." This notion was introduced in [1] and further developed in [2-5]. It turns out to be useful in many applications, in particular for defining gradient flows on spaces of probability measures equipped with the Wasserstein metric.
In 2004, Gamma-convergence of gradient flows on Hilbert spaces was introduced by Sandier and Serfaty [6]. This abstract method states that if a family of energy functionals $\{J_\varepsilon\}_{\varepsilon>0}$ Γ-converges to a limiting functional $J$, then, under suitable conditions, solutions of the gradient flow of $J_\varepsilon$ converge to solutions of the gradient flow of $J$. This scheme has been applied successfully to the dynamics of Ginzburg-Landau vortices (cf. [6]), the Cahn-Hilliard equation (cf. [7, 8]), and the Allen-Cahn equation (cf. [9]).
The notion of Gamma-convergence of gradient flows on metric spaces was initiated by Serfaty [ 9] in 2011. She presented and proved the following.
Proposition 1.1 (cf. [ 9] Gamma-convergence of gradient flows in the metric spaces setting).
Let $(X_\varepsilon, d_\varepsilon)$ and $(X,d)$ be complete metric spaces. Let $\Phi_\varepsilon$ and $\Phi$ be functionals defined on the metric spaces $(X_\varepsilon, d_\varepsilon)$ and $(X,d)$, respectively. Assume that there is a sense of convergence (which can be general) of $u_\varepsilon \in X_\varepsilon$ to $u \in X$, denoted $u_\varepsilon \rightsquigarrow u$, with respect to which the Γ-liminf convergence of $\Phi_\varepsilon$ to $\Phi$ holds: [figure omitted; refer to PDF] Let $g_\varepsilon$ and $g$ be strong upper gradients of $\Phi_\varepsilon$ and $\Phi$, respectively. Assume in addition the following relations.
(1) Lower bound on the metric derivatives: if $u_\varepsilon(t) \rightsquigarrow u(t)$ for $t \in [0,T)$, then for all $s \in [0,T)$ [figure omitted; refer to PDF]
(2) Lower bound on the slopes: if $u_\varepsilon \rightsquigarrow u$, then [figure omitted; refer to PDF]
Let $u_\varepsilon(t)$ be a $p$-curve of maximal slope on $(0,T)$ for $\Phi_\varepsilon$ with respect to $g_\varepsilon$ such that $u_\varepsilon(t) \rightsquigarrow u(t)$, and which is well prepared in the sense that [figure omitted; refer to PDF] Then $u$ is a $p$-curve of maximal slope with respect to $g$ and [figure omitted; refer to PDF]
This scheme (Proposition 1.1) applies only to a single gradient flow. Nonlinear equations are generally harder than linear ones, systems of differential equations are more involved and more important than scalar equations, and, to the best of our knowledge, systems of nonlinear gradient flows on metric spaces have not been treated elsewhere. This motivates us to study systems of nonlinear gradient flows on metric spaces and to establish a Gamma-convergence structure that applies to such systems. The resulting scheme can be regarded as a "nonlinear system" edition of the notion initiated by Sylvia Serfaty.
This paper is organized as follows. In Section 2 we collect the necessary background on gradient flows: the definitions of absolutely continuous curves, the metric derivative, strong upper gradients, and curves of maximal slope for a functional. In Section 3 we establish the explicit structure of nonlinear gradient flow systems and the Gamma-convergence of systems of nonlinear gradient flows on metric spaces. Finally, in Section 4 we give two examples illustrating a special case of our main results.
2. Basic Definitions and Preliminaries
Let $(X, \langle\cdot,\cdot\rangle)$ be a real Hilbert space with corresponding norm $\|\cdot\|$ and let $E: X \to \mathbb{R}$ be a functional defined on $X$. We say that $E$ is Fréchet differentiable at $x \in X$ if there exists $x^* \in X^* \equiv \mathcal{L}(X;\mathbb{R})$ (the space of all bounded linear functionals on $X$) such that
$$E(x+h) = E(x) + x^*(h) + o(\|h\|),$$
where $\lim_{\|h\|\to 0} o(\|h\|)/\|h\| = 0$.
Note that if such an $x^*$ exists, then it is unique, and we denote $DE(x) \equiv x^*$. By the Riesz representation theorem there exists a unique element $y \in X$ such that
$$x^*(h) = \langle y, h\rangle \quad \text{for all } h \in X.$$
Moreover, $\|x^*\|_{X^*} = \|y\|$. $DE(x)$ is called the differential of $E$ at $x$ (notice that it is a bounded linear functional on $X$). We write $\nabla_X E(x) \equiv y$ and call $\nabla_X E(x)$ the gradient of $E$ at $x$. Hence we have
$$DE(x)(h) = \langle \nabla_X E(x), h\rangle \quad \text{for all } h \in X.$$
We say that $E$ is of class $C^1$ on $X$ (i.e., $E \in C^1(X;\mathbb{R})$) if the map $x \mapsto DE(x)$ is continuous on $X$. If $E \in C^1(X;\mathbb{R})$, then the directional derivative of $E$ at $u \in X$ in the direction $\varphi$ exists and is given by
$$\lim_{t\to 0} \frac{E(u + t\varphi) - E(u)}{t} = \langle \nabla_X E(u), \varphi\rangle.$$
Let $\gamma: \mathbb{R} \to X$ be a differentiable curve in $X$ with $\gamma(t_0) = x \in X$. Then
$$\frac{d}{dt} E(\gamma(t))\Big|_{t=t_0} = \langle \nabla_X E(x), \gamma'(t_0)\rangle.$$
The evolution equation
$$u'(t) = -\nabla_X E(u(t))$$
is called the gradient flow of $E$ on the Hilbert space $(X, \|\cdot\|_X)$. Using the definitions of Hilbert spaces and gradient flows, it is easy to prove the following basic and useful lemma.
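As a concrete illustration (a minimal sketch of our own, not from the paper), the gradient flow $u'(t) = -\nabla_X E(u(t))$ can be discretized by explicit Euler steps; for the quadratic energy $E(u) = \frac{1}{2}\|u - a\|^2$ on $X = \mathbb{R}^2$ (an illustrative choice, as are the step size and iteration count) the flow decays toward the unique minimizer $a$:

```python
import numpy as np

# Minimal explicit-Euler sketch of the gradient flow u'(t) = -grad E(u(t)).
def gradient_flow(grad_E, u0, dt=1e-3, steps=10_000):
    u = np.asarray(u0, dtype=float)
    for _ in range(steps):
        u = u - dt * grad_E(u)  # one explicit Euler step of u' = -grad E(u)
    return u

# Quadratic energy E(u) = (1/2)||u - a||^2 on X = R^2, so grad E(u) = u - a;
# the flow decays exponentially toward the unique minimizer a.
a = np.array([1.0, -2.0])
u_final = gradient_flow(lambda u: u - a, u0=[5.0, 5.0])
print(np.allclose(u_final, a, atol=1e-2))  # True
```

Along this discrete flow the energy is nonincreasing, mirroring the dissipation property that the metric theory below axiomatizes.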
Lemma 2.1.
Suppose that for each $1 \le i \le n$, $(Y_i, \langle\cdot,\cdot\rangle_{Y_i})$ is an inner product space with induced norm $\|\cdot\|_{Y_i}$. Define
$$\langle x, y\rangle_Y \equiv \sum_{i=1}^n \langle x_i, y_i\rangle_{Y_i}$$
for each $x = (x_1, \dots, x_n)$, $y = (y_1, \dots, y_n)$ in $Y \equiv Y_1 \times \dots \times Y_n$. Then one has the following.
(i) $(Y, \langle\cdot,\cdot\rangle_Y)$ is an inner product space with induced norm
$$\|x\|_Y = \Big(\sum_{i=1}^n \|x_i\|_{Y_i}^2\Big)^{1/2}.$$
Thus, when each $(Y_i, \|\cdot\|_{Y_i})$ is complete, $(Y, \|\cdot\|_Y)$ is a Hilbert space.
(ii) Let $F$ be a $C^1$ functional defined on $Y \equiv Y_1 \times \dots \times Y_n$ and let $x = (x_1, \dots, x_n) \in Y$. An element $y = (y_1, \dots, y_n) \in Y$ (denoted by $y = \nabla_Y F(x)$) is called the gradient of $F$ at $x$ on the Hilbert space $Y$ if for every differentiable curve $\gamma(t) = (\gamma_1(t), \dots, \gamma_n(t))$ in $Y$ satisfying $\gamma(t_0) = x$,
$$\frac{d}{dt} F(\gamma(t))\Big|_{t=t_0} = \langle y, \gamma'(t_0)\rangle_Y.$$
Let us denote $\nabla_{Y_i} F(x) \equiv y_i$; then $\nabla_{Y_i} F(x)$ is called the gradient of $F$ at $x$ with respect to $Y_i$. Hence one has
$$\nabla_Y F(x) = \big(\nabla_{Y_1} F(x), \dots, \nabla_{Y_n} F(x)\big).$$
The evolution equation (the gradient flow of $F$ on $(Y, \|\cdot\|_Y)$)
$$x'(t) = -\nabla_Y F(x(t))$$
can be expressed as the following system of gradient flows on Hilbert spaces:
$$x_i'(t) = -\nabla_{Y_i} F(x(t)), \quad i = 1, \dots, n.$$
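The componentwise splitting in Lemma 2.1 can be checked numerically; a minimal sketch on $Y = \mathbb{R} \times \mathbb{R}$ with an illustrative coupled energy $F(x_1, x_2) = \frac{1}{2}x_1^2 + \frac{1}{2}(x_2 - x_1)^2$ (the energy, step size, and iteration count are our choices, not from the paper):

```python
import numpy as np

# Componentwise form of the product-space gradient flow from Lemma 2.1:
# on Y = R x R, x' = -grad_Y F(x) splits into x_i' = -grad_{Y_i} F(x).
# Illustrative coupled energy: F(x1, x2) = (1/2) x1^2 + (1/2)(x2 - x1)^2.
def grad_F(x):
    x1, x2 = x
    # partial gradients with respect to the factors Y_1 and Y_2
    return np.array([2.0 * x1 - x2, x2 - x1])

dt, x = 1e-2, np.array([3.0, -1.0])
for _ in range(5000):
    x = x - dt * grad_F(x)  # explicit Euler step for each component flow
print(np.allclose(x, [0.0, 0.0], atol=1e-6))  # True: flow reaches the minimizer
```

Each component evolves by the partial gradient with respect to its own factor, which is exactly the system form stated at the end of the lemma.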
Definition 2.2 ( p -absolutely continuous curve).
Let $(X,d)$ be a complete metric space equipped with the distance $d$. A mapping $v: (a,b) \to X$ is called a $p$-absolutely continuous curve, or is said to belong to $AC^p(a,b;X)$, $p \ge 1$, if there exists a function $m \in L^p(a,b)$ such that
$$d(v(s), v(t)) \le \int_s^t m(r)\,dr \quad \text{for all } a < s \le t < b.$$
Proposition 2.3 (cf. [ 5]).
Let $(X,d)$ be a metric space and let $u: [a,b] \to X$. Then
(i) $u \in AC([a,b];X)$ if and only if there exists $m \in L^1(a,b)$, $m \ge 0$, such that
$$d(u(s), u(t)) \le \int_s^t m(r)\,dr \quad \text{for all } a \le s \le t \le b; \tag{2.15}$$
(ii) if $u \in AC([a,b];X)$, the metric derivative
$$|u'|_d(t) \equiv \lim_{s\to t} \frac{d(u(s), u(t))}{|s-t|}$$
exists for a.e. $t \in (a,b)$, $|u'|_d \in L^1(a,b)$,
$$d(u(s), u(t)) \le \int_s^t |u'|_d(r)\,dr \quad \text{for all } a \le s \le t \le b,$$
and if $m$ satisfies (2.15), then $|u'|_d \le m$ a.e. on $(a,b)$.
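A quick numerical illustration of the metric derivative (our own sketch, with the curve chosen for convenience): for the unit-speed circle curve $u(t) = (\cos t, \sin t)$ in the Euclidean plane, the difference quotient $d(u(t+h), u(t))/h$ approaches $|u'|_d(t) = 1$:

```python
import numpy as np

# Numerical check of the metric derivative |u'|_d(t) = lim d(u(s),u(t))/|s-t|
# for the unit-speed curve u(t) = (cos t, sin t) in the Euclidean plane,
# whose metric derivative equals 1 at every t.
u = lambda t: np.array([np.cos(t), np.sin(t)])
d = lambda x, y: np.linalg.norm(x - y)

def metric_derivative(u, t, h=1e-6):
    # forward difference quotient approximating the limit in Proposition 2.3
    return d(u(t + h), u(t)) / h

print(abs(metric_derivative(u, 0.7) - 1.0) < 1e-5)  # True
```

The same difference quotient makes sense in any metric space, which is what lets the Hilbert-space notion of speed carry over below.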
In the following, let us give the motivation for defining "gradient flows" on metric spaces. Here we follow closely the contents of Section 1.3 in [5]. Every solution $u$ of the gradient flow
$$u'(t) = -\nabla_X E(u(t)) \tag{2.18}$$
can be characterized by the following scalar equations:
$$\frac{d}{dt} E(u(t)) = -\|\nabla_X E(u(t))\|\,\|u'(t)\|, \tag{2.19}$$
$$\|u'(t)\| = \|\nabla_X E(u(t))\|. \tag{2.20}$$
By Young's inequality, (2.19) and (2.20) are equivalent to
$$\frac{d}{dt} E(u(t)) \le -\frac{1}{2}\|u'(t)\|^2 - \frac{1}{2}\|\nabla_X E(u(t))\|^2. \tag{2.21}$$
We can impose (2.18), (2.19), (2.20), and (2.21) as a system of differential inequalities on the couple $(u,g)$ by the following strategies.
(i) The function $g$ is an upper bound for the modulus of the gradient:
$$\Big|\frac{d}{dt} E(v(t))\Big| \le g(v(t))\,\|v'(t)\| \tag{2.22}$$
for every regular curve $v: (0,+\infty) \to X$.
(ii) Impose that the functional $E$ decreases along $u$ as much as is compatible with (2.22), that is,
$$\frac{d}{dt} E(u(t)) = -g(u(t))\,\|u'(t)\|. \tag{2.23}$$
(iii) Prescribe the dependence of $\|u'\|$ on $g(u)$:
$$\|u'(t)\| = g(u(t)), \tag{2.24}$$
or combine (ii) and (iii) in a single formula:
$$\frac{d}{dt} E(u(t)) \le -\frac{1}{2}\|u'(t)\|^2 - \frac{1}{2}\,g^2(u(t)). \tag{2.25}$$
Whereas (2.18), (2.19), and (2.20) make sense only in a Hilbert space framework, the formulas (2.21)-(2.25) are of a purely metric nature and can be extended to more general metric spaces, provided we understand $\|u'\|$ as the metric derivative $|u'|_d$ of $u$. Of course, the concept of upper gradient provides only an upper estimate for the modulus of $\nabla_X E$ in the regular case, but it is enough to define steepest descent curves, that is, curves which realize the minimal selection of $\frac{d}{dt}(E\circ u)(t)$ compatible with
$$\Big|\frac{d}{dt}(E\circ u)(t)\Big| \le g(u(t))\,|u'|_d(t) \tag{2.26}$$
for a.e. $t \in (0,+\infty)$.
Suppose that $u \in AC(a,b;X)$ and $g\circ u$ is Borel. Using (2.26), we have, for any $a < s < t < b$,
$$|E(u(t)) - E(u(s))| \le \int_s^t g(u(r))\,|u'|_d(r)\,dr. \tag{2.27}$$
Accordingly, we say that $g$ is a strong upper gradient for $E$ if for each $u \in AC(a,b;X)$, $g\circ u$ is Borel and (2.27) holds for all $a < s < t < b$. Using ideas (ii) and (iii) and Young's inequality, we say that a locally absolutely continuous function $u: (a,b) \to X$ is a curve of maximal slope for $E$ with respect to its strong upper gradient $g$ if $E\circ u$ is a.e. equal to a nonincreasing map $\varphi$ and
$$\varphi'(t) \le -\frac{1}{2}|u'|_d^2(t) - \frac{1}{2}\,g^2(u(t)) \tag{2.28}$$
for a.e. $t \in (a,b)$. Let us now present the main definitions and three lemmas.
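The equivalence between the pair (2.23)-(2.24) and the single inequality (2.25) is precisely the equality case of Young's inequality $ab \le \frac{a^2}{2} + \frac{b^2}{2}$; a short derivation in the notation of [5] (using only (2.25) and the upper-gradient bound (2.26)):

```latex
% Assume (2.25) and the upper-gradient bound (2.26). Then, a.e. in t,
\frac{d}{dt}(E\circ u)(t)
  \;\le\; -\tfrac12\,|u'|_d^2(t) - \tfrac12\,g^2(u(t))   % by (2.25)
  \;\le\; -\,g(u(t))\,|u'|_d(t)                          % Young: ab \le a^2/2 + b^2/2
  \;\le\; \frac{d}{dt}(E\circ u)(t).                     % by (2.26)
% Hence all inequalities are equalities; equality in Young's inequality
% forces |u'|_d(t) = g(u(t)), recovering (2.23) and (2.24).
```

This is the elementary mechanism that Definitions 2.5 and 2.6 below generalize by replacing $\frac{a^2}{2} + \frac{b^2}{2}$ with a general pair of Young's functions.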
Definition 2.4 (strong upper gradient).
Suppose that $(X_i, d_i)$ is a complete metric space equipped with the distance $d_i$ for $i = 1, \dots, n$. Let $g_i: X_1 \times \dots \times X_n \to [0,+\infty]$ for each $i$ with $1 \le i \le n$. We say that $g = (g_1, \dots, g_n)$ is a strong upper gradient for $\Phi: X_1 \times \dots \times X_n \to \mathbb{R}$ if for each $(u_1, \dots, u_n) \in AC(a,b; X_1 \times \dots \times X_n)$, $g_i \circ (u_1, \dots, u_n)$ is Borel for $i = 1, \dots, n$, and
$$|\Phi(u_1, \dots, u_n)(t) - \Phi(u_1, \dots, u_n)(s)| \le \sum_{i=1}^n \int_s^t g_i(u_1, \dots, u_n)(r)\,|u_i'|_{d_i}(r)\,dr \quad \text{for all } a < s < t < b.$$
Definition 2.5 (a pair of Young's functions).
Suppose that $F^*, G^*: [0,+\infty) \to [0,+\infty)$ are two differentiable functions. We say that $(F^*, G^*)$ is a pair of Young's functions if they satisfy
(1) Young's inequality: $st \le F^*(t) + G^*(s)$ for all $s, t \ge 0$;
(2) Young's equality: $st = F^*(t) + G^*(s)$ if and only if $s = f^*(t)$ or $t = g^*(s)$,
where $f^* = (F^*)'$ and $g^* = (G^*)'$.
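The classical example is $F^*(t) = t^p/p$ and $G^*(s) = s^q/q$ with $1/p + 1/q = 1$, which reappears in Section 4; a numerical sanity check of properties (1) and (2) for this pair (the exponent $p = 3$ is an arbitrary choice of ours):

```python
import numpy as np

# Check that (F*, G*) = (t^p/p, s^q/q) with 1/p + 1/q = 1 is a pair of
# Young's functions: st <= F*(t) + G*(s), with equality when s = f*(t) = t^(p-1).
p = 3.0
q = p / (p - 1.0)
F = lambda t: t**p / p
G = lambda s: s**q / q

ts = np.linspace(0.0, 5.0, 200)
T, S = np.meshgrid(ts, ts)
print(bool(np.all(S * T <= F(T) + G(S) + 1e-12)))  # True: Young's inequality

t = 1.7
s = t**(p - 1)                                     # s = f*(t)
print(bool(np.isclose(s * t, F(t) + G(s))))        # True: Young's equality
```

Here $f^*(t) = t^{p-1}$ and $g^*(s) = s^{q-1}$ are mutually inverse, which is the structure exploited in Theorem 3.1.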
Definition 2.6 (curve of maximal slope).
Let $(F_i^*, G_i^*)$ be a pair of Young's functions for each $i$ with $1 \le i \le n$. We say that $(u_1, \dots, u_n): (a,b) \to X_1 \times \dots \times X_n$ is an $(F_1^*, \dots, F_n^*)$-curve of maximal slope for the functional $\Phi$ with respect to the strong upper gradient $g = (g_1, \dots, g_n)$ if $\Phi(u_1, \dots, u_n)$ is $\mathcal{L}^1$-a.e. equal to a nonincreasing map $\varphi$ and
$$\varphi'(t) \le -\sum_{i=1}^n \Big[F_i^*\big(|u_i'|_{d_i}(t)\big) + G_i^*\big(g_i(u_1, \dots, u_n)(t)\big)\Big]$$
for $\mathcal{L}^1$-a.e. $t \in (a,b)$.
Definition 2.7 ( Γ -liminf convergence).
Let $(X_{i,\varepsilon}, d_{i,\varepsilon})$ and $(X_i, d_i)$ be complete metric spaces for all $1 \le i \le n$ and $\varepsilon > 0$. Suppose that $\Phi_\varepsilon: X_{1,\varepsilon} \times \dots \times X_{n,\varepsilon} \to (-\infty, +\infty]$ and $\Phi: X_1 \times \dots \times X_n \to (-\infty, +\infty]$ are functionals defined on $X_{1,\varepsilon} \times \dots \times X_{n,\varepsilon}$ and $X_1 \times \dots \times X_n$, respectively. We say that $\Phi_\varepsilon$ Γ-liminf converges to $\Phi$ if whenever
$$(u_{1,\varepsilon}, \dots, u_{n,\varepsilon}) \rightsquigarrow (u_1, \dots, u_n),$$
then
$$\liminf_{\varepsilon\to 0} \Phi_\varepsilon(u_{1,\varepsilon}, \dots, u_{n,\varepsilon}) \ge \Phi(u_1, \dots, u_n),$$
where $(u_{1,\varepsilon}, \dots, u_{n,\varepsilon}) \in X_{1,\varepsilon} \times \dots \times X_{n,\varepsilon}$, $(u_1, \dots, u_n) \in X_1 \times \dots \times X_n$, and the sense of convergence $\rightsquigarrow$ can be general.
Lemma 2.8.
Suppose that $\alpha \ge 0$ and $a_n \ge 0$ for each $n \in \mathbb{N}$. If $\liminf_{n\to\infty} a_n \ge \alpha$ and if $f$ is a continuous nondecreasing function on $[0,+\infty)$, then
$$\liminf_{n\to\infty} f(a_n) \ge f(\alpha).$$
Proof.
(i) For each $k \in \mathbb{N}$, set $a_k^- \equiv \inf_{n \ge k} a_n$; then $a_k^- \le a_n$ for all $n \ge k$.
(ii) Since $f$ is nondecreasing on $[0,\infty)$, (i) gives $f(a_k^-) \le f(a_n)$ for all $n \ge k$. Therefore,
$$f(a_k^-) \le \inf_{n \ge k} f(a_n),$$
and so
$$\lim_{k\to\infty} f(a_k^-) \le \lim_{k\to\infty} \inf_{n \ge k} f(a_n) = \liminf_{n\to\infty} f(a_n).$$
(iii) Since $f$ is continuous and nondecreasing on $[0,\infty)$, using (ii) we have
$$f(\alpha) \le f\Big(\liminf_{n\to\infty} a_n\Big) = \lim_{k\to\infty} f(a_k^-) \le \liminf_{n\to\infty} f(a_n).$$
Lemma 2.9 (see [ 10, Theorem 5.11]).
Let $f$ be nonnegative and measurable on $E$. Then $\int_E f\,dx = 0$ if and only if $f = 0$ a.e. in $E$.
Lemma 2.10.
Let $(X_i, d_i)$ be a metric space for each $1 \le i \le n$. Let $X \equiv X_1 \times \dots \times X_n = \{(x_1, \dots, x_n) \mid x_i \in X_i \text{ for } 1 \le i \le n\}$. Define the function $d: X \times X \to \mathbb{R}$ by [figure omitted; refer to PDF] for each $x = (x_1, \dots, x_n)$, $y = (y_1, \dots, y_n) \in X$. Then
(i) $(X, d)$ is a metric space;
(ii) if $v = (v_1, \dots, v_n) \in AC(a,b;X)$, the metric derivative $|v'|_d$ can be expressed as [figure omitted; refer to PDF] for a.e. $t \in (a,b)$.
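The omitted formula for $d$ is not recoverable from this extraction; for the common Euclidean-type combination $d(x,y) = \big(\sum_i d_i(x_i, y_i)^2\big)^{1/2}$ (an assumption of ours, used only for illustration), a quick numerical check that $d$ is a metric on a product of two metric spaces:

```python
import numpy as np

# Assumed product metric d(x, y) = sqrt(sum_i d_i(x_i, y_i)^2); the actual
# formula in Lemma 2.10 is omitted in this extraction.
d1 = lambda a, b: abs(a - b)                         # a metric on X1 = R
d2 = lambda a, b: abs(np.arctan(a) - np.arctan(b))   # another metric on R
d = lambda x, y: np.hypot(d1(x[0], y[0]), d2(x[1], y[1]))

rng = np.random.default_rng(0)
pts = rng.normal(size=(100, 3, 2))                   # 100 random triples in X1 x X2
triangle_ok = all(d(x, z) <= d(x, y) + d(y, z) + 1e-12 for x, y, z in pts)
print(triangle_ok)  # True: the triangle inequality holds on every sample
```

Any $\ell^p$-type combination of the factor metrics works the same way; the choice only affects the explicit formula for $|v'|_d$ in part (ii).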
3. Main Results
In the following theorem we introduce systems of explicit nonlinear gradient flows of an energy functional with respect to a strong upper gradient on metric spaces, and we derive an upper bound on a certain form of the velocity of solutions in terms of the dissipation rate of the energy functional. From this one sees that if motion is driven by energy dissipation and there are solutions that move without losing much energy, then each component of such a solution must move very slowly.
Theorem 3.1.
Let $(X_i, d_i)$ be a complete metric space equipped with the distance $d_i$ for $i = 1, 2, \dots, n$. Let $\Phi$ be a functional defined on $X_1 \times X_2 \times \dots \times X_n$ and let $g = (g_1, \dots, g_n)$ be a strong upper gradient for $\Phi$. Assume that $f_i: [0,+\infty) \to [0,+\infty)$ is a continuous, strictly increasing, and surjective function for each $1 \le i \le n$. Let $F_i$ and $G_i$ be defined by
$$F_i(t) = \int_0^t f_i(r)\,dr, \qquad G_i(t) = \int_0^t f_i^{-1}(r)\,dr$$
for each $t \ge 0$ and $1 \le i \le n$. Suppose that $(u_1, \dots, u_n) \in AC(a,b; X_1 \times \dots \times X_n)$ is an $(F_1, \dots, F_n)$-curve of maximal slope for the functional $\Phi$ with respect to the strong upper gradient $g$ on $(a,b) \subset \mathbb{R}$. Then one has the following.
(i) [figure omitted; refer to PDF] for a.e. $a < s \le t < b$.
(ii) $(u_1, \dots, u_n)$ satisfies the system of explicit nonlinear gradient flows of $\Phi$ with respect to the structure $((f_1, \dots, f_n), (g_1, \dots, g_n), X_1 \times \dots \times X_n)$: [figure omitted; refer to PDF] for a.e. $t \in (a,b)$.
(iii) Assume that the function $\psi_i(t) \equiv f_i(t)\cdot t$ is convex and strictly increasing on $[0,+\infty)$ for each $1 \le i \le n$. Then one has
(a) [figure omitted; refer to PDF] for all $a < s < t < b$;
(b) [figure omitted; refer to PDF] for a.e. $t \in (a,b)$.
(iv) If $f_i(t) = t$ for $1 \le i \le n$, then (ii) can be expressed as [figure omitted; refer to PDF] which is the system of explicit linear gradient flows of $\Phi$ with respect to the structure $((f_1, \dots, f_n), (g_1, \dots, g_n), X_1 \times \dots \times X_n)$.
Proof.
According to the definitions of $F_i$ and $G_i$, $(F_i, G_i)$ is a pair of Young's functions. Recall that $(u_1, \dots, u_n)$ is an $(F_1, \dots, F_n)$-curve of maximal slope for $\Phi$ with respect to $g$ on $(a,b)$. For a.e. $a < s < t < b$, we have [figure omitted; refer to PDF] The last inequality holds by the strong upper gradient assumption on $g$. Thus we easily obtain the following formula: [figure omitted; refer to PDF] for $\mathcal{L}^1$-a.e. $a < s < t < b$. Using Young's inequality and the vanishing theorem (Lemma 2.9), we obtain [figure omitted; refer to PDF] for a.e. $a < \tau < b$ and for $1 \le i \le n$. By Young's equality, we conclude that the system of explicit gradient flows of $\Phi$ with respect to the structure $((f_1, \dots, f_n), g, X_1 \times \dots \times X_n)$ holds.
We now prove assertion (iii). Using assertions (i) and (ii) and the assumption on $\psi_i$, we see that [figure omitted; refer to PDF] for a.e. $a < s < t < b$. Since the metric derivative $|u_i'|_{d_i}$ is the smallest admissible function $m_i$ satisfying
$$d_i(u_i(s), u_i(t)) \le \int_s^t m_i(r)\,dr \quad \text{for all } a < s \le t < b,$$
and $\psi_i$ is convex and strictly increasing on $[0,+\infty)$, Jensen's inequality gives [figure omitted; refer to PDF] This completes the proof of (a). By passing to the limit $s \to t$ in (a), we deduce that (b) holds.
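The Young-pair construction in Theorem 3.1 can be checked numerically; a sketch assuming $F(t) = \int_0^t f$ and $G(s) = \int_0^s f^{-1}$, with an arbitrary illustrative choice of $f$, a hand-rolled bisection inverse, and trapezoid quadrature (all our own choices):

```python
import numpy as np

# Young-pair construction from Theorem 3.1: for a continuous, strictly
# increasing, surjective f on [0, inf), set F(t) = int_0^t f and
# G(s) = int_0^s f^{-1}; then st <= F(t) + G(s) with equality at s = f(t).
f = lambda t: t + t**3                    # illustrative f, strictly increasing

def f_inv(s, lo=0.0, hi=10.0):            # invert f by bisection
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < s else (lo, mid)
    return 0.5 * (lo + hi)

def integral(h, t, n=2000):               # trapezoid quadrature of h on [0, t]
    r = np.linspace(0.0, t, n)
    v = np.array([h(x) for x in r])
    return float(np.sum((v[1:] + v[:-1]) * np.diff(r)) / 2.0)

ts = list(np.linspace(0.1, 2.0, 15))
ss = [f(t) for t in ts]
Fs = [integral(f, t) for t in ts]
Gs = [integral(f_inv, s) for s in ss]
young_holds = all(s * t <= Ft + Gv + 1e-3
                  for t, Ft in zip(ts, Fs) for s, Gv in zip(ss, Gs))
print(young_holds)  # True: the constructed pair satisfies Young's inequality
```

The `1e-3` slack only absorbs quadrature error near the equality case $s = f(t)$; the inequality itself is exact.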
In our second main result, we establish the abstract structure of Gamma-convergence of gradient flow systems on metric spaces, a nonlinear system edition of Proposition 1.1 that can be applied to problems involving systems of nonlinear gradient flows on metric spaces.
Theorem 3.2 (Gamma-convergence of systems of gradient flows on metric spaces).
Let $(X_{i,\varepsilon}, d_{i,\varepsilon})$ and $(X_i, d_i)$ be complete metric spaces for all $1 \le i \le n$ and $\varepsilon > 0$. Let $\Phi_\varepsilon$ and $\Phi$ be functionals defined on $X_{1,\varepsilon} \times \dots \times X_{n,\varepsilon}$ and $X_1 \times \dots \times X_n$, respectively. Suppose that $\Phi_\varepsilon$ Γ-liminf converges to $\Phi$. Let $g_\varepsilon = (g_{1,\varepsilon}, \dots, g_{n,\varepsilon})$ and $g = (g_1, \dots, g_n)$ be strong upper gradients of $\Phi_\varepsilon$ and $\Phi$, respectively. Let $(F_i^*, G_i^*)$ be a pair of Young's functions whose derivative $(F_i^*)' = f_i^*$ is continuous, strictly increasing, and surjective for $i = 1, \dots, n$. Assume in addition the following relations.
(1) Lower bound on the metric derivatives: if $(u_{1,\varepsilon}, \dots, u_{n,\varepsilon})(t) \rightsquigarrow (u_1, \dots, u_n)(t)$ for $t \in [0,T)$, then for all $s \in [0,T)$ [figure omitted; refer to PDF]
(2) Lower bound on the strong upper gradients: if $(u_{1,\varepsilon}, \dots, u_{n,\varepsilon}) \rightsquigarrow (u_1, \dots, u_n)$, then [figure omitted; refer to PDF]
Let $(u_{1,\varepsilon}, \dots, u_{n,\varepsilon})$ be an $(F_1^*, \dots, F_n^*)$-curve of maximal slope on $(0,T)$ for $\Phi_\varepsilon$ with respect to $g_\varepsilon = (g_{1,\varepsilon}, \dots, g_{n,\varepsilon})$ such that $(u_{1,\varepsilon}, \dots, u_{n,\varepsilon})(t) \rightsquigarrow (u_1, \dots, u_n)(t)$, and which is well prepared in the sense that [figure omitted; refer to PDF] Then
(i) $(u_1, \dots, u_n)$ is an $(F_1^*, \dots, F_n^*)$-curve of maximal slope on $(0,T)$ for $\Phi$ with respect to $g = (g_1, \dots, g_n)$;
(ii) [figure omitted; refer to PDF]
(iii) [figure omitted; refer to PDF]
(iv) [figure omitted; refer to PDF]
where $g_i^* = (f_i^*)^{-1}$.
Proof.
Since $(u_{1,\varepsilon}, \dots, u_{n,\varepsilon})$ is an $(F_1^*, \dots, F_n^*)$-curve of maximal slope for $\Phi_\varepsilon$ with respect to $g_\varepsilon$ on $[0,T)$, Theorem 3.1 (i) and (ii) yield [figure omitted; refer to PDF] for a.e. $0 < s < T$, and [figure omitted; refer to PDF] for a.e. $t \in [0,T)$ and for $1 \le i \le n$.
Passing to the liminf as $\varepsilon \to 0$ in (3.19) and applying Fatou's lemma, we deduce that [figure omitted; refer to PDF] The last inequality follows from assumptions (1) and (2) together with Lemma 2.8.
Using the fact that each $(F_i^*, G_i^*)$ is a pair of Young's functions (hence Young's inequality holds), (3.21), and the strong upper gradient assumption on $g$ for $\Phi$, we can check that [figure omitted; refer to PDF] Since $(u_{1,\varepsilon}, \dots, u_{n,\varepsilon})$ is well prepared, (3.22) yields [figure omitted; refer to PDF]
Using the Γ-liminf convergence of $\Phi_\varepsilon$ to $\Phi$ and (3.23), we obtain [figure omitted; refer to PDF]
Combining (3.24) with (3.22), we conclude that, for each $s \in [0,T)$, [figure omitted; refer to PDF] Using Young's inequality and the vanishing theorem (Lemma 2.9) again, we conclude that, for a.e. $t \in [0,T)$, [figure omitted; refer to PDF] for all $1 \le i \le n$. Moreover, by Young's equality, we have, for a.e. $t \in [0,T)$, [figure omitted; refer to PDF] for each $1 \le i \le n$.
Next, differentiating (3.25) with respect to the variable $s$, we see that $(u_1, \dots, u_n)$ is an $(F_1^*, \dots, F_n^*)$-curve of maximal slope for $\Phi$ with respect to $g$ on $[0,T)$. Using (3.19), (3.24), and (3.25), we can check that [figure omitted; refer to PDF] for all $s \in [0,T)$. Finally, recalling (3.20) and (3.27) and using (3.28), we obtain [figure omitted; refer to PDF] which completes the proof of Theorem 3.2.
4. Examples
In this section, we present two examples to illustrate a special case of our main results.
Example 4.1.
Let $p > 1$. If $f_i^*(t) = t^{p-1}$ for each $1 \le i \le n$, then $(f_i^*)^{-1}(t) = t^{q-1}$ for $i = 1, 2, \dots, n$, where $1/p + 1/q = 1$. Hence
$$F_i^*(t) = \frac{t^p}{p}, \qquad G_i^*(s) = \frac{s^q}{q}.$$
In this case, Theorem 3.1 can be expressed as follows.
(i) One has [figure omitted; refer to PDF]
(ii) The system of explicit nonlinear gradient flows of $\Phi$ with respect to the structure $((f_1^*, \dots, f_n^*), (g_1, \dots, g_n), X_1 \times \dots \times X_n)$ is [figure omitted; refer to PDF]
(iii) $\psi_i(t) = t^p$ for $i = 1, \dots, n$ ($\psi_i$ is convex and strictly increasing on $[0,+\infty)$ for each $1 \le i \le n$). Then
(a) [figure omitted; refer to PDF] for all $a < s < t < b$;
(b) [figure omitted; refer to PDF]
Under the hypotheses of Theorem 3.2, we have
(i) [figure omitted; refer to PDF] that is,
$$\lim_{\varepsilon\to 0} \int_0^s \sum_{i=1}^n |u_{i,\varepsilon}'|_{d_{i,\varepsilon}}^p(t)\,dt = \int_0^s \sum_{i=1}^n |u_i'|_{d_i}^p(t)\,dt;$$
(ii) [figure omitted; refer to PDF] that is,
$$\lim_{\varepsilon\to 0} \int_0^s \sum_{i=1}^n \big(g_{i,\varepsilon}(u_{1,\varepsilon}, \dots, u_{n,\varepsilon})(t)\big)^q\,dt = \int_0^s \sum_{i=1}^n \big(g_i(u_1, \dots, u_n)(t)\big)^q\,dt.$$
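The exponent algebra used throughout Example 4.1, namely that $f^*(t) = t^{p-1}$ has inverse $t^{q-1}$ with $1/p + 1/q = 1$, is easy to verify directly; a minimal check (the value $p = 2.5$ is an arbitrary choice of ours):

```python
# Exponent algebra behind Example 4.1: f*(t) = t^(p-1) has inverse t^(q-1),
# where 1/p + 1/q = 1, since q - 1 = 1/(p - 1).
p = 2.5
q = p / (p - 1.0)
assert abs(1.0 / p + 1.0 / q - 1.0) < 1e-12     # conjugate exponents

f_star = lambda t: t ** (p - 1.0)
f_star_inv = lambda t: t ** (q - 1.0)
for t in (0.3, 1.0, 4.7):
    assert abs(f_star_inv(f_star(t)) - t) < 1e-9  # inverse relation holds
print("conjugate-exponent identities verified")
```

Integrating $f^*$ and its inverse then gives $F^*(t) = t^p/p$ and $G^*(s) = s^q/q$, the pair used in the example.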
Example 4.2.
Considering the case $p = 2$ in Example 4.1, we have
(i) [figure omitted; refer to PDF]
(ii) The system of explicit linear gradient flows of $\Phi$ with respect to the structure is [figure omitted; refer to PDF]
(iii) [figure omitted; refer to PDF] for all $a < s < t < b$. Moreover, we have [figure omitted; refer to PDF]
(iv) [figure omitted; refer to PDF]
(v) [figure omitted; refer to PDF]
Acknowledgment
Research by the authors was supported by the National Science Council of Taiwan under Grant no. NSC 100-2115-M-030-001.
[1] E. De Giorgi, A. Marino, M. Tosques, "Problems of evolution in metric spaces and maximal decreasing curve," Atti della Accademia Nazionale dei Lincei. Rendiconti. Classe di Scienze Fisiche, Matematiche e Naturali. Serie VIII , vol. 68, no. 3, pp. 180-187, 1980.
[2] M. Degiovanni, A. Marino, M. Tosques, "Evolution equations with lack of convexity," Nonlinear Analysis: Theory, Methods & Applications , vol. 9, no. 12, pp. 1401-1443, 1985.
[3] A. Marino, C. Saccon, M. Tosques, "Curves of maximal slope and parabolic variational inequalities on nonconvex constraints," Annali della Scuola Normale Superiore di Pisa. Classe di Scienze. Serie IV , vol. 16, no. 2, pp. 281-330, 1989.
[4] L. Ambrosio, "Minimizing movements," Accademia Nazionale delle Scienze detta dei XL. Rendiconti. V. , vol. 19, pp. 191-246, 1995.
[5] L. Ambrosio, N. Gigli, G. Savaré, Gradient Flows in Metric Spaces and in the Space of Probability Measures, Lectures in Mathematics ETH Zürich, Birkhäuser, Basel, Switzerland, 2nd edition, 2008.
[6] E. Sandier, S. Serfaty, "Gamma-convergence of gradient flows with applications to Ginzburg-Landau," Communications on Pure and Applied Mathematics , vol. 57, no. 12, pp. 1627-1672, 2004.
[7] N. Q. Le, "A Gamma-convergence approach to the Cahn-Hilliard equation," Calculus of Variations and Partial Differential Equations , vol. 32, no. 4, pp. 499-522, 2008.
[8] N. Q. Le, "On the convergence of the Ohta-Kawasaki equation to motion by nonlocal Mullins-Sekerka law," SIAM Journal on Mathematical Analysis , vol. 42, no. 4, pp. 1602-1638, 2010.
[9] S. Serfaty, "Gamma-convergence of gradient flows on Hilbert and metric spaces and applications," Discrete and Continuous Dynamical Systems A , vol. 31, no. 4, pp. 1427-1451, 2011.
[10] R. L. Wheeden, A. Zygmund, Measure and Integral, Marcel Dekker, New York, NY, USA, 1977.
Copyright © 2012 Mao-Sheng Chang and Bo-Cheng Lu.
Abstract
We first establish the explicit structure of nonlinear gradient flow systems on metric spaces and then develop Gamma-convergence of the systems of nonlinear gradient flows, which is a scheme meant to ensure that if a family of energy functionals of several variables depending on a parameter Gamma-converges, then the solutions to the associated systems of gradient flows converge as well. This scheme is a nonlinear system edition of the notion initiated by Sylvia Serfaty in 2011.