1. Introduction
In this paper, we consider the following structured convex optimization problem
F^* := min_{x∈R^n} F(x) := f(x) + h(x),    (1)
where f: dom h → R and h: R^n → (−∞, +∞] are closed proper convex functions, not necessarily differentiable, and dom h := {x : h(x) < ∞} is the effective domain of h. Problems of this type arise frequently in practice, for example in compressed sensing [1], image reconstruction [2], machine learning [3], optimal control [4] and power systems [5,6,7,8]. The following are three representative examples.
Example 1.
(ℓ1 minimization in compressed sensing). Signal recovery problems in compressed sensing [1] usually take the form
min_{x∈R^n} (1/2)‖Ax − b‖² + λ‖x‖_1,    (2)
where A ∈ R^{m×n}, b ∈ R^m, λ > 0 and ‖x‖_1 := Σ_{i=1}^n |x_i|. The aim is to find a sparse solution x of the linear system Ax = b. By defining f(x) = (1/2)‖Ax − b‖² and h(x) = λ‖x‖_1, problem (2) is of the form (1).
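For illustration only (this snippet is not part of the original formulation), the splitting f(x) = (1/2)‖Ax − b‖² and h(x) = λ‖x‖_1 can be coded directly, and h is "simple" in the sense used later in the paper because its proximal operator is componentwise soft-thresholding; all data below are arbitrary.

```python
import numpy as np

def f(x, A, b):
    # smooth part: least-squares data-fidelity term
    r = A @ x - b
    return 0.5 * r @ r

def h(x, lam):
    # nonsmooth part: l1 regularizer
    return lam * np.abs(x).sum()

def prox_h(z, lam, t):
    # proximal operator of t*h at z: componentwise soft-thresholding
    return np.sign(z) * np.maximum(np.abs(z) - t * lam, 0.0)

# toy data (arbitrary sizes, for illustration only)
rng = np.random.default_rng(0)
A, b, lam = rng.standard_normal((20, 50)), rng.standard_normal(20), 0.1
x = rng.standard_normal(50)
print(f(x, A, b) + h(x, lam), prox_h(x, lam, t=1.0)[:5])
```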
Example 2.
(Regularized risk minimization). At the core of many machine learning problems is the minimization of a regularized risk function [9,10]:
min_{x∈R^n} R_emp(x) + λΩ(x),    (3)
where R_emp(x) := (1/m) Σ_{i=1}^m l(u_i, v_i, x) is the empirical risk, {(u_i, v_i), i = 1,…,m} is a training set, and l is a convex loss function measuring the gap between v and the values predicted from x. In general, R_emp(x) is a nondifferentiable and computationally expensive convex function, whereas the regularization term Ω(x) is a simple convex function, say Ω(x) = (1/2)‖x‖_2² or Ω(x) = ‖x‖_1. By defining f(x) = R_emp(x) and h(x) = λΩ(x), problem (3) is also of the form (1).
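As one possible instantiation (hypothetical, not taken from the paper), the sketch below uses the hinge loss l(u, v, x) = max(0, 1 − v⟨u, x⟩) and computes the empirical risk together with one subgradient, which is exactly the kind of first-order information a black-box oracle for f(x) = R_emp(x) must return.

```python
import numpy as np

def remp_and_subgrad(x, U, v):
    # empirical hinge-loss risk R_emp(x) = (1/m) * sum_i max(0, 1 - v_i <u_i, x>)
    m = U.shape[0]
    margins = 1.0 - v * (U @ x)
    active = margins > 0                       # samples with nonzero loss
    risk = margins[active].sum() / m
    # one subgradient of R_emp at x (the hinge loss is piecewise linear)
    g = -(U[active] * v[active, None]).sum(axis=0) / m
    return risk, g

rng = np.random.default_rng(1)
U, v = rng.standard_normal((200, 10)), rng.choice([-1.0, 1.0], 200)
risk, g = remp_and_subgrad(np.zeros(10), U, v)
print(risk, g.shape)   # at x = 0 every margin equals 1, so risk = 1
```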
Example 3.
(Unconstrained transformation of a constrained problem). Consider the constrained problem
min { f(x) : x ∈ X },    (4)
where f is a convex function and X ⊆ R^n is a convex set. By introducing the indicator function δ_X of X, that is, δ_X(x) equals 0 on X and +∞ elsewhere, problem (4) can be written equivalently as
min_{x∈R^n} f(x) + δ_X(x).    (5)
Clearly, by setting h(x) = δ_X(x), (5) takes the form (1). Such a transformation can be very efficient in practice if the set X has some special structure [11,12].
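For instance, if X is a box (an assumed special structure, chosen only for this illustration), the indicator δ_X and its proximal operator, the Euclidean projection onto X, are both one-liners:

```python
import numpy as np

def delta_X(x, lo, hi):
    # indicator of the box X = {x : lo <= x <= hi}: 0 on X, +inf outside
    return 0.0 if np.all((x >= lo) & (x <= hi)) else np.inf

def prox_delta_X(z, lo, hi):
    # the prox of an indicator is the projection onto X (componentwise clipping)
    return np.clip(z, lo, hi)

z = np.array([-2.0, 0.3, 5.0])
print(delta_X(z, -1.0, 1.0), prox_delta_X(z, -1.0, 1.0))  # inf, [-1.  0.3  1.]
```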
The design of methods for solving problems of the form (1) has attracted the attention of many researchers. We mention four classes of such methods: operator splitting methods [13,14,15], alternating direction methods of multipliers [5,16,17,18,19], alternating linearization methods [20,21], and the alternating linearization bundle method [22]. They all fall within the well-known class of first-order black-box methods; that is, it is assumed that an oracle returns the (approximate) function value and one arbitrary (approximate) subgradient at any given point. Regarding the above methods, three points concern us:
- Smoothness of one or both functions in the objective has been assumed for many of these methods.
- Except for Reference [22], they all require exact computation of the function values and (sub)gradients.
- The alternating linearization methods [20,21] essentially assume that both functions f and h are “simple” in the sense that minimizing each function plus a separable convex quadratic function is easy.
However, for some practical problems the functions may be nondifferentiable (nonsmooth), hard to handle, and computationally expensive, in the sense that exact first-order information may be impossible to calculate, or computable only with difficulty. For example, if f has the form
f(x) := sup { φ_u(x) : u ∈ U },
where U is an infinite set and φ_u(x): R^n → R is convex for every u ∈ U, then it is often difficult to calculate the exact function value f(x). But for any tolerance ϵ > 0, we can usually find in finite time a lower approximation f_x^ϵ ≈ f(x) such that f_x^ϵ ∈ [f(x) − ϵ, f(x)] and f_x^ϵ = φ_{u_ϵ}(x) for some u_ϵ ∈ U. We can then take a subgradient of φ_{u_ϵ} at x as an approximate subgradient of f at x. Another example is two-stage stochastic programming (see, e.g., References [23,24]), in which the function value is obtained by solving a series of linear programs (details are given in the section on numerical experiments), so its accuracy is determined by the tolerance of the linear programming solver. Other practical examples, such as Lagrangian relaxation, chance-constrained programs and convex composite functions, can be found in Reference [25].
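As a concrete, simplified illustration of such an approximate oracle (invented for this exposition, not part of the paper), the sketch below assumes U is an interval and that u ↦ φ_u(x) is Lipschitz with a known constant, so that maximizing over a sufficiently fine grid certifies a lower estimate f_x^ε ∈ [f(x) − ε, f(x)] together with a subgradient of the maximizing φ_u.

```python
import numpy as np

def inexact_f_oracle(x, phi, eps, L_u, u_lo, u_hi):
    """Lower estimate of f(x) = sup_{u in [u_lo, u_hi]} phi_u(x).

    Assumes phi(u, x) returns (value, subgradient of phi_u at x) and that
    u -> phi_u(x) is L_u-Lipschitz, so a grid of spacing eps/L_u certifies
    that the returned value lies in [f(x) - eps, f(x)]."""
    n_pts = int(np.ceil((u_hi - u_lo) * L_u / eps)) + 1
    best_val, best_g = -np.inf, None
    for u in np.linspace(u_lo, u_hi, n_pts):
        val, g = phi(u, x)
        if val > best_val:
            best_val, best_g = val, g
    return best_val, best_g      # f_x^eps and an approximate subgradient of f at x

# toy family: phi_u(x) = <a(u), x> with a(u) = (cos u, sin u), so f(x) = ||x||
def phi(u, x):
    a = np.array([np.cos(u), np.sin(u)])
    return a @ x, a

fx, gx = inexact_f_oracle(np.array([1.0, 2.0]), phi, eps=1e-3,
                          L_u=np.sqrt(5.0), u_lo=0.0, u_hi=2 * np.pi)
print(fx, gx)   # close to sqrt(5) from below
```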
Based on the above observations, in this paper we focus on the case where the function f is general, possibly nonsmooth, and its exact function values and subgradients may be difficult to obtain, whereas the function h is assumed to be relatively simple. Our main goal is to provide an efficient method, namely an improved alternating linearization bundle method, for this kind of structured convex optimization problem. The basic tool we use to handle nonsmoothness and inexactness is the bundle technique, since in the nonsmooth optimization community bundle methods [26,27,28,29] are regarded as the most robust and reliable methods, and their variants have been well studied for handling inexact oracles [23,25,30,31,32,33].
Roughly speaking, our method generalizes the alternating linearization bundle method of Kiwiel [22] from exact and inexact oracles to various oracles, including exact, inexact, partially inexact, asymptotically exact and partially asymptotically exact oracles. These oracles are covered by the so-called oracles with on-demand accuracy proposed by de Oliveira and Sagastizábal [23], whose accuracy is controlled by two parameters: a descent target and an error bound. More precisely, the error is bounded by the error bound whenever the function estimate reaches the descent target. The main advantage of oracles with on-demand accuracy is that the function and subgradient estimates can be rough, without accuracy control, at “non-critical” iterates, so computational effort can be saved.
At each iteration, the proposed algorithm alternately solves two interrelated subproblems: one finds the proximal point of the polyhedral model of f plus the linearization of h; the other finds the proximal point of the linearization of f plus h. We establish global convergence of the algorithm under different types of inexactness. Finally, preliminary numerical results on a set of two-stage stochastic linear programming problems show that our method is very encouraging.
This paper is organized as follows. In Section 2, we recall the condition defining the inexact first-order oracles. In Section 3, we present an improved alternating linearization bundle method for structured convex optimization with inexact first-order oracles and show some properties of the algorithm. In Section 4, we establish global convergence of the algorithm under different types of inexactness. In Section 5, we report numerical experiments on two-stage stochastic linear programming problems. Our notation is standard. The Euclidean inner product in R^n is denoted by ⟨x, y⟩ := x^T y, and the associated norm by ‖·‖.
2. Preliminaries
For a given constant ϵ ≥ 0, the ϵ-subdifferential of a function f at x is defined by (see Reference [34])
∂_ϵ f(x) := { g ∈ R^n : f(y) ≥ f(x) + ⟨g, y − x⟩ − ϵ, ∀ y ∈ R^n },
with ∂f(x) := ∂_0 f(x) being the usual subdifferential of convex analysis [35]. Each element g ∈ ∂f(x) is called a subgradient. For simplicity, we use the following notation:
f_x: the approximate value of f at x, that is, f_x ≈ f(x);
g_x: an approximate subgradient of f at x, that is, g_x ≈ g(x) ∈ ∂f(x);
F_x: the approximate value of F at x, that is, F_x := f_x + h(x).
Aiming at the special structure of problem (1), we present a slight variant of the oracles with on-demand accuracy proposed in Reference [23]: for a given x ∈ R^n, a descent target γ_x and an error bound ε_x ≥ 0, the approximate values f_x, g_x and F_x satisfy the condition
f_x = f(x) − η(γ_x) with unknown η(γ_x) ≥ 0,  g_x ∈ ∂_{η(γ_x)} f(x),  and whenever F_x ≤ γ_x (descent target reached), the relation η(γ_x) ≤ ε_x holds.    (6)
From the relations in (6), we see that although the error η(γ_x) is unknown, it must lie within the error bound ε_x whenever the descent target F_x ≤ γ_x is reached. This ensures that the exact and inexact function values satisfy
f_x ∈ [f(x) − ε_x, f(x)]  and  f(x) ∈ [f_x, f_x + ε_x],  whenever F_x ≤ γ_x.    (7)
The advantages of oracle (6) are twofold: (1) if the descent target is not reached, the oracle information can be computed roughly, without accuracy control, which can reduce the computational cost; (2) by properly choosing the parameters γ_x and ε_x, oracle (6) covers various existing oracles (a small sketch of this parameter selection is given after the list below):
- Exact (Ex) [12,21]: set γ_x = +∞ and ε_x = 0;
- Partially Inexact (PI) [24]: set γ_x < +∞ and ε_x = 0;
- Inexact (IE) [11,25,32,36,37]: set γ_x = +∞ and ε_x ≡ ε > 0 (possibly unknown);
- Asymptotically Exact (AE) [38,39]: set γ_x = +∞ and ε_x → 0 along the iterative process;
- Partially Asymptotically Exact (PAE) [23]: set γ_x < +∞ and ε_x → 0.
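As an illustration of how these regimes can be selected in code, the hypothetical helper below (not part of the original method) returns the pair (γ_x, ε_x) for each instance, using the constants θ, κ ∈ (0,1) and κ_ϵ ∈ (0, κ) of Section 4 and Table 1; all numerical values are placeholders.

```python
import math

def oracle_parameters(kind, F_best, v_k, theta=0.5, kappa=0.4, kappa_eps=0.2, eps_fixed=1e-3):
    """Return (descent target gamma_x, error bound eps_x) for the next oracle call.

    `kind` is one of "Ex", "PI", "IE", "AE", "PAE"; F_best is the current F value at the
    stability center and v_k the predicted descent (both defined later in Algorithm 1)."""
    target = F_best - theta * kappa * v_k        # finite descent target, as in Table 1
    if kind == "Ex":
        return math.inf, 0.0                     # exact oracle
    if kind == "PI":
        return target, 0.0                       # partially inexact
    if kind == "IE":
        return math.inf, eps_fixed               # fixed (possibly unknown) error bound
    if kind == "AE":
        return math.inf, kappa_eps * v_k         # error bound driven to zero with v_k
    if kind == "PAE":
        return target, kappa_eps * v_k
    raise ValueError(kind)

print(oracle_parameters("PAE", F_best=10.0, v_k=0.5))
```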
3. The Generalized Alternating Linearization Bundle Method
In this section, we present our generalized alternating linearization bundle method with inexact first-order oracles for solving (1).
Let k be the current iteration index, let x^j, j ∈ J_k ⊆ {1,…,k}, be the points generated at previous iterations, and let the corresponding approximate values f_{x^j}/g_{x^j} be produced by oracle (6). For notational convenience, we write
f_{x_j} := f_{x^j},  g_{x_j} := g_{x^j},  F_{x_j} := F_{x^j},  ε_{x_j} := ε_{x^j},  γ_{x_j} := γ_{x^j}.
The approximate linearizations of f at the points x^j are given by
f_j(·) := f_{x^j} + ⟨g_{x^j}, · − x^j⟩,  j ∈ J_k.
From the second relation in (6), we have
f(·) ≥ f(x^j) + ⟨g_{x^j}, · − x^j⟩ − η(γ_{x^j}) = f_j(·),
which implies that f_j is a lower approximation of f. It is then natural to define the polyhedral inexact cutting-plane model of f by
f̌_k(·) := max_{j∈J_k} f_j(·),    (8)
which is obviously a lower polyhedral model of f, that is, f̌_k(·) ≤ f(·).
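To make the cutting-plane construction (8) concrete, the short sketch below (not from the paper; all data are arbitrary) evaluates f̌_k at a point from a list of bundle points, approximate values and subgradients.

```python
import numpy as np

def cutting_plane_model(x, points, fvals, subgrads):
    """Evaluate f_check_k(x) = max_j [ f_{x_j} + <g_{x_j}, x - x_j> ]."""
    cuts = [fj + gj @ (x - xj) for xj, fj, gj in zip(points, fvals, subgrads)]
    return max(cuts)

# two cuts of f(x) = |x| built at x = -1 and x = 2 (exact oracle, for simplicity)
pts = [np.array([-1.0]), np.array([2.0])]
fv  = [1.0, 2.0]
gs  = [np.array([-1.0]), np.array([1.0])]
print(cutting_plane_model(np.array([0.5]), pts, fv, gs))   # max(-0.5, 0.5) = 0.5
```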
Let x̂^k (called the stability center) be the “best” point obtained so far; it satisfies x̂^k = x^{k(l)} for some k(l) ≤ k. Frequently, it holds that f_{x^{k(l)}} = min_{j=1,…,k} f_{x^j}. Thus, from (7), we have
f(x̂^k) ∈ [f_{x̂^k}, f_{x̂^k} + ε_{x^{k(l)}}],  whenever F_{x̂^k} ≤ γ_{x̂^k}.    (9)
Applying the bundle idea to the “complex” function f while keeping the simple function h unchanged, as in traditional proximal bundle methods (see, e.g., Reference [28]), we could solve the following subproblem to obtain a new iterate x^{k+1}:
x^{k+1} := argmin f̌_k(·) + h(·) + (1/(2t_k)) ‖· − x̂^k‖²,    (10)
where t_k > 0 is a proximal parameter. However, subproblem (10) is generally not easy to solve, so, following the alternating linearization idea of Kiwiel [22], we solve two easier subproblems instead of (10). These two subproblems are interrelated: the first finds the proximal point of the polyhedral model f̌_k plus a linearization of h, and produces an aggregate linear model of f for use in the second subproblem; the second finds the proximal point of this aggregate linear model of f plus h, and produces a new trial point.
Now we are ready to present the details of our algorithm, which generalizes the work of Kiwiel [22]. Note that the model function f̌_k in the algorithm may differ from the form (8), since the subgradient aggregation strategy [40] is used to compress the bundle. The algorithm generates three sequences of iterates: {y^k}, the intermediate points, at which the aggregate linear models of f are generated; {x^k}, the trial points; and {x̂^k}, the stability centers.
We make some comments about Algorithm 1 as follows.
Algorithm 1 Generalized alternating linearization bundle method
Step 0 (Initialization). Select an initial point x^1 ∈ R^n, constants κ ∈ (0,1) and t_min > 0, and an initial stepsize t_1 ≥ t_min. Call oracle (6) at x^1 to compute the approximate values f_{x^1} and g_{x^1}. Choose an initial error bound ε_{x^1} ≥ 0 and a descent target γ_{x^1} = +∞. Set x̂^1 := x^1, f_{x̂^1} := f_{x^1}, F_{x̂^1} := f_{x̂^1} + h(x̂^1), f̄_0 := f_1, and h̄_0(·) := h(x^1) + ⟨p_h^0, · − x^1⟩ with p_h^0 ∈ ∂h(x^1). Let it_1 := 0, l := 1, k(l) := 1 and k := 1.
Step 1 (Model selection). Choose f̌_k: R^n → R closed convex and such that
max{ f̄_{k−1}, f_k } ≤ f̌_k ≤ f.
Step 2 (Solve the f-subproblem). Set
y^{k+1} := argmin φ_f^k(·) := f̌_k(·) + h̄_{k−1}(·) + (1/(2t_k)) ‖· − x̂^k‖²,    (11)
f̄_k(·) := f̌_k(y^{k+1}) + ⟨p_f^k, · − y^{k+1}⟩  with  p_f^k := (1/t_k)(x̂^k − y^{k+1}) − p_h^{k−1}.    (12)
Step 3 (Solve the h-subproblem). Set
x^{k+1} := argmin φ_h^k(·) := f̄_k(·) + h(·) + (1/(2t_k)) ‖· − x̂^k‖²,    (13)
h̄_k(·) := h(x^{k+1}) + ⟨p_h^k, · − x^{k+1}⟩  with  p_h^k := (1/t_k)(x̂^k − x^{k+1}) − p_f^k.    (14)
Step 4 (Stopping criterion). Compute
v_k := F_{x̂^k} − [f̄_k(x^{k+1}) + h(x^{k+1})],  p_k := (1/t_k)(x̂^k − x^{k+1}),  ϵ_k := v_k − t_k ‖p_k‖².    (15)
If max{ ‖p_k‖, ϵ_k } = 0, stop.
Step 5 (Noise attenuation). If v_k < −ϵ_k, set t_k := 10 t_k, it_k := k, and go back to Step 2.
Step 6 (Call the oracle). Select a new error bound ε_{x^{k+1}} ≥ 0 and a new descent target γ_{x^{k+1}} ∈ R ∪ {+∞}. Call oracle (6) to compute f_{x^{k+1}} and g_{x^{k+1}}.
Step 7 (Descent test). If the descent condition
F_{x^{k+1}} ≤ F_{x̂^k} − κ v_k    (16)
holds, set x̂^{k+1} := x^{k+1}, F_{x̂^{k+1}} := F_{x^{k+1}}, it_{k+1} := 0, k(l+1) := k+1, and l := l+1 (descent step); otherwise, set x̂^{k+1} := x̂^k, F_{x̂^{k+1}} := F_{x̂^k}, it_{k+1} := it_k, and k(l+1) := k(l) (null step).
Step 8 (Stepsize updating). For a descent step, select t_{k+1} ≥ t_k. For a null step, either set t_{k+1} := t_k or choose t_{k+1} ∈ [t_min, t_k] if it_{k+1} = 0.
Step 9 (Loop). Let k := k+1 and go to Step 1.
Remark 1.
(i) Theoretically, the model function f̌_k can take the simplest form max{ f̄_{k−1}, f_k }; in practice, to maintain numerical stability, it may additionally include some active linearizations.
(ii) Alternately solving subproblems (11) and (13) can be regarded as the proximal alternating linearization method (e.g., Reference [21]) applied to the function f̌_k + h.
(iii) If f̌_k is a polyhedral function, then subproblem (11) is equivalent to a convex quadratic program and thus can be solved efficiently. In addition, if h is simple, subproblem (13) can also be solved easily, or even admits a closed-form solution (say, for h(x) = (1/2)‖x‖²). A small numerical sketch of this situation is given after this remark.
(iv) The role of Step 5 is to reduce the impact of inexactness. The algorithm loops between Steps 2–5, increasing the stepsize t_k, until v_k ≥ −ϵ_k.
(v) The stability center, descent target and error bound remain unchanged during the loop between Steps 2 and 5.
(vi) To establish global convergence of the algorithm, the descent target and error bound at Step 6 must be updated suitably; detailed rules are presented in the next section.
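To make Remark 1 (ii)–(iii) concrete, the sketch below carries out Steps 2–4 of Algorithm 1 in the simplest configuration: the model f̌_k consists of a single cut, so subproblem (11) has a closed-form minimizer, and h(x) = (1/2)‖x‖², so subproblem (13) does too. All data are arbitrary; this only illustrates the bookkeeping of one iteration, not a full implementation.

```python
import numpy as np

def one_iteration(x_hat, F_xhat, cut_point, cut_val, cut_grad, p_h_prev, t_k):
    """Steps 2-4 of Algorithm 1 when f_check_k is a single cut and h(x) = 0.5*||x||^2."""
    # Step 2 (f-subproblem): affine cut + affine h-linearization + prox term
    y = x_hat - t_k * (cut_grad + p_h_prev)          # closed-form minimizer of (11)
    p_f = (x_hat - y) / t_k - p_h_prev               # aggregate f-subgradient, as in (12)
    f_bar = lambda z: cut_val + cut_grad @ (y - cut_point) + p_f @ (z - y)
    # Step 3 (h-subproblem): affine f_bar + 0.5*||.||^2 + prox term
    x_new = (x_hat - t_k * p_f) / (1.0 + t_k)        # closed-form minimizer of (13)
    p_h = (x_hat - x_new) / t_k - p_f                # h-subgradient, as in (14); equals x_new here
    # Step 4: quantities of (15)
    v_k = F_xhat - (f_bar(x_new) + 0.5 * x_new @ x_new)   # predicted descent
    p_k = (x_hat - x_new) / t_k                           # aggregate subgradient
    eps_k = v_k - t_k * (p_k @ p_k)                       # aggregate linearization error
    return y, x_new, v_k, p_k, eps_k, p_h

# a single exact cut of f(x) = ||x||_1 at x_hat, and p_h^0 in the subdifferential of h at x_hat
x_hat = np.array([1.0, -2.0])
print(one_iteration(x_hat, F_xhat=3.0 + 2.5, cut_point=x_hat, cut_val=3.0,
                    cut_grad=np.array([1.0, -1.0]), p_h_prev=x_hat, t_k=1.0))
```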
The following lemma summarizes some fundamental properties of Algorithm 1, whose proof is a slight modification of that in [22], Lemma 2.2.
Lemma 1.
(i) The vectors p_f^k and p_h^k of (12) and (14) satisfy
p_f^k ∈ ∂f̌_k(y^{k+1})  and  p_h^k ∈ ∂h(x^{k+1}).    (17)
The linearizations f̄_k, h̄_k, F̄_k satisfy the inequalities
f̄_k ≤ f̌_k,  h̄_k ≤ h  and  F̄_k := f̄_k + h̄_k ≤ F.    (18)
(ii) The aggregate subgradient p_k defined in (15) and the above linearization F̄_k can be expressed as
p_k = p_f^k + p_h^k = (1/t_k)(x̂^k − x^{k+1}),    (19)
F̄_k(·) = F̄_k(x^{k+1}) + ⟨p_k, · − x^{k+1}⟩.
(iii) The predicted descent v_k and the aggregate linearization error ϵ_k of (15) satisfy
v_k = t_k ‖p_k‖² + ϵ_k  and  ϵ_k = F_{x̂^k} − F̄_k(x̂^k).    (20)
(iv) The aggregate linearization F̄_k can also be expressed as
F_{x̂^k} − ϵ_k + ⟨p_k, · − x̂^k⟩ = F̄_k(·) ≤ F(·).    (21)
(v) Denote the optimality measure by
V_k := max{ ‖p_k‖, ϵ_k + ⟨p_k, x̂^k⟩ },    (22)
which satisfies
V_k ≤ max{ ‖p_k‖, ϵ_k } (1 + ‖x̂^k‖)    (23)
and
F_{x̂^k} ≤ F(x) + V_k (1 + ‖x‖),  ∀ x.    (24)
(vi) We have the relations
v_k ≥ −ϵ_k  ⇔  t_k ‖p_k‖²/2 ≥ −ϵ_k  ⇔  v_k ≥ t_k ‖p_k‖²/2,  and  v_k ≥ ϵ_k.    (25)
Moreover, if F_{x̂^k} ≤ γ_{x̂^k}, then −ϵ_k ≤ ε_{x^{k(l)}} and
v_k ≥ max{ t_k ‖p_k‖²/2, |ϵ_k| }  if v_k ≥ −ϵ_k,    (26)
V_k ≤ max{ (2 v_k / t_k)^{1/2}, v_k } (1 + ‖x̂^k‖)  if v_k ≥ −ϵ_k,    (27)
V_k < (2 ε_{x^{k(l)}} / t_k)^{1/2} (1 + ‖x̂^k‖)  if v_k < −ϵ_k.    (28)
Proof.
(i) From the optimality condition of subproblem (11), we obtain
0 ∈ ∂φ_f^k(y^{k+1}) = ∂f̌_k(y^{k+1}) + p_h^{k−1} + (1/t_k)(y^{k+1} − x̂^k) = ∂f̌_k(y^{k+1}) − p_f^k,
which implies p_f^k ∈ ∂f̌_k(y^{k+1}). In addition, this inclusion together with the fact that f̄_k(y^{k+1}) = f̌_k(y^{k+1}) yields f̄_k ≤ f̌_k. Similarly, by the optimality condition of (13), we have
0 ∈ ∂φ_h^k(x^{k+1}) = p_f^k + ∂h(x^{k+1}) + (1/t_k)(x^{k+1} − x̂^k) = ∂h(x^{k+1}) − p_h^k,
which shows p_h^k ∈ ∂h(x^{k+1}). Further, from h̄_k(x^{k+1}) = h(x^{k+1}), we obtain h̄_k ≤ h. Finally, it follows that
F̄_k = f̄_k + h̄_k ≤ f̌_k + h ≤ F.
(ii) By (14), we obtain
p_f^k + p_h^k = p_f^k + (1/t_k)(x̂^k − x^{k+1}) − p_f^k = (1/t_k)(x̂^k − x^{k+1}) = p_k.
Utilizing the linearity of F̄_k(·), together with (12) and (19), we derive
F̄_k(·) = f̄_k(·) + h̄_k(·)
 = f̌_k(y^{k+1}) + ⟨p_f^k, · − y^{k+1}⟩ + h(x^{k+1}) + ⟨p_h^k, · − x^{k+1}⟩
 = f̄_k(x^{k+1}) − ⟨p_f^k, x^{k+1} − y^{k+1}⟩ + ⟨p_f^k, · − y^{k+1}⟩ + h(x^{k+1}) + ⟨p_h^k, · − x^{k+1}⟩
 = f̄_k(x^{k+1}) + ⟨p_f^k, · − x^{k+1}⟩ + h(x^{k+1}) + ⟨p_h^k, · − x^{k+1}⟩
 = F̄_k(x^{k+1}) + ⟨p_k, · − x^{k+1}⟩.
(iii) The relation v_k = ϵ_k + t_k ‖p_k‖² follows directly from (15). Combining (15) and (ii), we have
ϵ_k = v_k − t_k ‖p_k‖² = F_{x̂^k} − [f̄_k(x^{k+1}) + h(x^{k+1})] − t_k ‖p_k‖²
 = F_{x̂^k} − F̄_k(x̂^k) + ⟨p_k, x̂^k − x^{k+1}⟩ − t_k ‖p_k‖² = F_{x̂^k} − F̄_k(x̂^k),
where the last equality uses ⟨p_k, x̂^k − x^{k+1}⟩ = t_k ‖p_k‖².
(iv) Since ϵ_k = F_{x̂^k} − F̄_k(x̂^k) by (iii), the aggregate linearization F̄_k(·) satisfies
F_{x̂^k} − ϵ_k + ⟨p_k, · − x̂^k⟩ = F̄_k(x̂^k) + ⟨p_k, · − x̂^k⟩ = F̄_k(·) ≤ F(·),
where the inequality is (18).
(v) Using the Cauchy–Schwarz inequality in definition (22) gives
V_k = max{ ‖p_k‖, ϵ_k + ⟨p_k, x̂^k⟩ } ≤ max{ ‖p_k‖, ϵ_k + ‖p_k‖ ‖x̂^k‖ } ≤ max{ ‖p_k‖, ϵ_k } + ‖p_k‖ ‖x̂^k‖ ≤ max{ ‖p_k‖, ϵ_k } (1 + ‖x̂^k‖).
From (21), we have
F_{x̂^k} ≤ F(x) + ϵ_k − ⟨p_k, x − x̂^k⟩ = F(x) + ϵ_k − ⟨p_k, x⟩ + ⟨p_k, x̂^k⟩ ≤ F(x) + ‖p_k‖ ‖x‖ + ϵ_k + ⟨p_k, x̂^k⟩ ≤ F(x) + max{ ‖p_k‖, ϵ_k + ⟨p_k, x̂^k⟩ } (1 + ‖x‖) = F(x) + V_k (1 + ‖x‖),  ∀ x.
(vi) Relation (25) follows easily from (iii). Next, by (18), (20) and (9), we conclude that, if F_{x̂^k} ≤ γ_{x̂^k},
−ϵ_k = F̄_k(x̂^k) − F_{x̂^k} ≤ F(x̂^k) − F_{x̂^k} = f(x̂^k) − f_{x̂^k} ≤ ε_{x^{k(l)}}.
Relation (26) then follows from v_k ≥ ϵ_k, v_k ≥ t_k ‖p_k‖²/2 and the assumption v_k ≥ −ϵ_k. Relation (27) follows from (23), ‖p_k‖ ≤ (2 v_k / t_k)^{1/2} and ϵ_k ≤ v_k. Finally, if v_k < −ϵ_k, we obtain ‖p_k‖² < −2 ϵ_k / t_k, which together with −ϵ_k ≤ ε_{x^{k(l)}} shows that ‖p_k‖ < (2 ε_{x^{k(l)}} / t_k)^{1/2}, and therefore (28) holds. □
Remark 2.
Relation (17) shows that p_f^k is a subgradient of the model function f̌_k at y^{k+1} and that p_h^k is a subgradient of h at x^{k+1}. The quantity V_k defined by (22) can be viewed as an optimality measure of the iterates, and it will be proved to converge to zero in the next section. Relation (24) also serves as a test for optimality: x̂^k is an approximately optimal solution of problem (1) whenever V_k is sufficiently small.
4. Global Convergence
This section establishes the global convergence of Algorithm 1 for various oracles. These oracles are controlled by two parameters: the error bound ε_x and the descent target γ_x. Table 1 lists the choices of these two parameters for the different types of instances described in Section 2, namely Exact (Ex), Partially Inexact (PI), Inexact (IE), Asymptotically Exact (AE) and Partially Asymptotically Exact (PAE) oracles, where the constants are selected as θ, κ ∈ (0,1) and κ_ϵ ∈ (0, κ).
The following lemma is crucial to guarantee the global convergence of Algorithm 1.
Lemma 2.
The descent target is always reached at the stability centers, that is, F_{x̂^k} ≤ γ_{x̂^k} for all k ≥ 1.
Proof.
For instances Ex, IE and AE, since γ_{x^k} = +∞, the claim holds immediately.
For instances PI and PAE, we have γ_{x^{k+1}} = F_{x̂^k} − θκ v_k. For k = 1, it follows from Step 0 that x̂^1 = x^1, f_{x̂^1} = f_{x^1} and γ_{x̂^1} = γ_{x^1} = +∞, which implies F_{x̂^1} = f_{x̂^1} + h(x̂^1) ≤ γ_{x̂^1}. For k ≥ 2, since θ ∈ (0,1), once the descent test (16) is satisfied at iteration k−1, we have
F_{x̂^k} ≤ F_{x̂^{k−1}} − κ v_{k−1} ≤ F_{x̂^{k−1}} − θκ v_{k−1} = γ_{x^k} = γ_{x̂^k}.
□
The following lemma shows that an (approximate) optimal solution can be obtained whenever the algorithm terminates finitely or loops infinitely between Steps 2 and 5.
Lemma 3. If either Algorithm 1 terminates at Step 4 at iteration k, or loops infinitely between Steps 2 and 5, then
(i) x̂^k is an optimal solution of problem (1) for instances Ex and PI;
(ii) x̂^k is ε-optimal, that is, F(x̂^k) ≤ F^* + ε, for instance IE;
(iii) x̂^k is ε_{x^{k(l)}}-optimal, that is, F(x̂^k) ≤ F^* + ε_{x^{k(l)}}, for instances AE and PAE.
Proof.
First, suppose that Algorithm 1 terminates at Step 4 at iteration k. Then the stopping criterion and (23) give V_k = 0, which together with (24) shows that
F_{x̂^k} ≤ inf{ F(x) : x ∈ R^n } = F^*.    (29)
Thus, from (7), we conclude that F(x̂^k) = F_{x̂^k} ≤ F^* for instances Ex and PI; F(x̂^k) ≤ F_{x̂^k} + ε ≤ F^* + ε for instance IE; and F(x̂^k) ≤ F_{x̂^k} + ε_{x^{k(l)}} ≤ F^* + ε_{x^{k(l)}} for instances AE and PAE.
Second, suppose that Algorithm 1 loops infinitely between Steps 2 and 5. Then, from Lemma 2 and the condition at Step 5, relation (28) holds and t_k ↑ ∞, so V_k → 0. This, together with (24), again implies (29), and the claims follow as in the first case. □
In view of the above lemma, in what follows we assume that Algorithm 1 neither terminates finitely nor loops infinitely between Steps 2 and 5. In addition, as in Reference [22], we assume that the model subgradients p_f^k ∈ ∂f̌_k(y^{k+1}) in (17) are such that {p_f^k} is bounded whenever {y^k} is bounded.
Algorithm 1 then generates exactly one of the following:
(i) finitely many descent steps;
(ii) infinitely many descent steps.
We first consider case (i), in which two subcases may occur: t_∞ := lim_k t_k = ∞ and t_∞ < ∞. The first subcase, t_∞ = ∞, is analyzed in the following lemma.
Lemma 4.
Suppose that Algorithm 1 generates finitely many descent steps, that is, there exists an index k̄ such that only null steps occur for all k ≥ k̄, and that t_∞ = ∞. Denote K := { k ≥ k̄ : t_{k+1} > t_k }. Then V_k → 0 as k ∈ K, k → ∞.
Proof.
For k ∈ K, consider the last time t_k is increased at Step 5; at that moment, (28) gives
V_k < (2 ε_{x̂^{k̄}} / t_k)^{1/2} (1 + ‖x̂^{k̄}‖),
which together with t_k → ∞ proves the lemma. □
The following lemma analyzes the second subcase oft∞<∞.
Lemma 5.
Suppose that there exists k̄ such that x̂^k = x̂^{k̄} and t_min ≤ t_{k+1} ≤ t_k for all k ≥ k̄. If the descent criterion (16) fails for all k ≥ k̄, then V_k → 0.
Proof.
In view of the facts that t_min ≤ t_{k+1} ≤ t_k and x̂^k = x̂^{k̄} for all k ≥ k̄, only null steps occur and t_k is not increased at Step 5. By Taylor's expansion, the Cauchy–Schwarz inequality, and the properties of subproblems (11) and (13), one can conclude that v_k → 0, and the conclusion then follows from (27). For more details, see [11], Lemma 3.2. □
By combining Lemmas 4 and 5, we have the following lemma.
Lemma 6.
Suppose that there exists k̄ such that only null steps occur for all k ≥ k̄. Let K := { k ≥ k̄ : t_{k+1} > t_k } if t_k → ∞, and K := { k : k ≥ k̄ } otherwise. Then V_k → 0 as k ∈ K, k → ∞.
Now, we can present the main convergence result for the case where the algorithm generates finitely many descent steps.
Theorem 1.
Suppose that Algorithm 1 generates finitely many descent steps, and let x̂^{k̄} be the last stability center. Then x̂^{k̄} is an optimal solution of problem (1) for instances Ex and PI; an ε-optimal solution for instance IE; and an ε_{x^{k(l)}}-optimal solution for instances AE and PAE, where k(l) is the index of the last descent step.
Proof.
Under the stated assumptions, we have x̂^k ≡ x̂^{k̄} and f_{x̂^k} ≡ f_{x̂^{k̄}} for all k ≥ k̄. This, together with (24) and Lemma 6, shows that
F_{x̂^{k̄}} ≤ inf{ F(x) : x ∈ R^n } = F^*.
Hence, similar to the proof of Lemma 3, we obtain the results of the theorem. □
Next, we consider the second case where the algorithm generates infinitely many descent steps.
Lemma 7.
Suppose that Algorithm 1 generates infinitely many descent steps and that F_{x̂^∞} := lim_k F_{x̂^k} > −∞. Let K := { k : F_{x̂^{k+1}} < F_{x̂^k} }. Then v_k →_K 0 and lim inf_{k∈K} V_k = 0. Moreover, if {x̂^k} is bounded, then V_k →_K 0.
Proof.
From the descent test condition (16), we may first prove that v_k →_K 0; then ϵ_k, t_k ‖p_k‖² →_K 0 by (26), and ‖p_k‖ →_K 0 since t_k ≥ t_min. It then follows from the definition of V_k that lim inf_{k∈K} V_k = 0. Moreover, if {x̂^k} is bounded, then V_k →_K 0 by Lemma 1 (v). For more details, see [11], Lemma 3.4. □
Finally, we present the convergence results for the second case.
Theorem 2.
Suppose that Algorithm 1 generates infinitely many descent steps, that F_{x̂^∞} > −∞, and that the index set K is as defined in Lemma 7. Then
(i) F^* ≤ lim inf_{k∈K} F(x̂^{k+1}) ≤ lim sup_{k∈K} F(x̂^{k+1}) ≤ F_{x̂^∞} + ε for instance IE in Table 1;
(ii) F^* ≤ lim inf_{k∈K} F(x̂^{k+1}) ≤ lim sup_{k∈K} F(x̂^{k+1}) ≤ F_{x̂^∞} for the remaining instances in Table 1;
(iii) lim inf_{k∈K} V_k = 0 and F_{x̂^k} ↓ F_{x̂^∞} ≤ F^*.
Proof.
It is obvious that
F^* ≤ lim inf_{k∈K} F(x̂^{k+1}) ≤ lim sup_{k∈K} F(x̂^{k+1}).    (30)
(i) For instance IE, we have ε_{x̂^{k+1}} = ε and F_{x̂^{k+1}} ≤ γ_{x̂^{k+1}} = +∞ for all k ∈ K. Then, from (7), F(x̂^{k+1}) ≤ F_{x̂^{k+1}} + ε for all k ∈ K, which implies
lim sup_{k∈K} F(x̂^{k+1}) ≤ lim_{k∈K} F_{x̂^{k+1}} + ε = F_{x̂^∞} + ε.
This, together with (30), shows part (i).
(ii) Next, the other four instances in Table 1 are considered separately.
For instance Ex, we have ε_{x̂^{k+1}} = 0, F_{x̂^{k+1}} ≤ γ_{x̂^{k+1}} = +∞ and F(x̂^{k+1}) = F_{x̂^{k+1}}. This implies
lim sup_{k∈K} F(x̂^{k+1}) = lim_{k∈K} F_{x̂^{k+1}} = F_{x̂^∞}.
For instance PI, we have ε_{x̂^{k+1}} = 0 and γ_{x̂^{k+1}} = F_{x̂^k} − θκ v_k for all k ∈ K. Thus, we obtain
F_{x̂^{k+1}} ≤ F_{x̂^k} − κ v_k ≤ F_{x̂^k} − θκ v_k = γ_{x̂^{k+1}},
which implies F(x̂^{k+1}) = F_{x̂^{k+1}}, and therefore
lim sup_{k∈K} F(x̂^{k+1}) = lim_{k∈K} F_{x̂^{k+1}} = F_{x̂^∞}.
For instance AE, we have ε_{x̂^{k+1}} = κ_ϵ v_k and F_{x̂^{k+1}} ≤ γ_{x̂^{k+1}} = +∞ for all k ∈ K, which implies
F(x̂^{k+1}) ≤ F_{x̂^{k+1}} + ε_{x̂^{k+1}} ≤ F_{x̂^{k+1}} + κ_ϵ v_k.
This, together with Lemma 7 (v_k →_K 0), shows that
lim sup_{k∈K} F(x̂^{k+1}) ≤ lim_{k∈K} F_{x̂^{k+1}} = F_{x̂^∞}.
For instance PAE, we have ε_{x̂^{k+1}} = κ_ϵ v_k and γ_{x̂^{k+1}} = F_{x̂^k} − θκ v_k for all k ∈ K. Then it follows that
F_{x̂^{k+1}} ≤ F_{x̂^k} − κ v_k ≤ F_{x̂^k} − θκ v_k = γ_{x̂^{k+1}},
which implies
F(x̂^{k+1}) ≤ F_{x̂^{k+1}} + κ_ϵ v_k.
Again from Lemma 7, we obtain
lim sup_{k∈K} F(x̂^{k+1}) ≤ lim_{k∈K} F_{x̂^{k+1}} = F_{x̂^∞}.
Summarizing the above analysis and noticing (30), we complete the proof of part (ii).
(iii) From Lemma 7, we know that lim inf_{k∈K} V_k = 0. This, together with (24), shows part (iii). □
5. Numerical Experiments
In this section, we test the numerical efficiency of the proposed algorithm. In fields such as production and transportation, finance and insurance, the power industry, and telecommunications, decision makers often need to solve problems with uncertain data. As an effective tool for such problems, stochastic programming (SP) has attracted growing attention, both in theory and in practical applications; see, for example, References [41,42]. We consider a class of two-stage SP problems with fixed recourse whose discretization of the uncertainty into N scenarios has the form (see, e.g., References [23,43])
min f(x) := ⟨c, x⟩ + Σ_{i=1}^N p_i V_i(x)  s.t.  x ∈ X := { x ∈ R_+^{n_1} : Ax = b },    (31)
where x is the first-stage decision variable, c ∈ R^{n_1}, A ∈ R^{m_1×n_1}, and b ∈ R^{m_1}. The recourse function is
V_i(x) := min_{π∈R_+^{n_2}} { ⟨q, π⟩ : Wπ = h_i − T_i x },
corresponding to the i-th scenario (h_i, T_i), which occurs with probability p_i > 0, where h_i ∈ R^{m_2} and T_i ∈ R^{m_2×n_1}. Here π is the second-stage decision variable.
Clearly, by introducing the indicator function δ_X, problem (31) can be written in the form (5), and then takes the form (1) by setting h(x) = δ_X(x).
The recourse function can be written in its dual form:
V_i(x) = max_{y∈R^{m_2}} ⟨h_i − T_i x, y⟩  s.t.  W^T y ≤ q,
where q ∈ R^{n_2} and W ∈ R^{m_2×n_2}. By solving these linear programs to a prescribed tolerance, one obtains an inexact oracle of the form (6); see Reference [23] for a more detailed description.
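To indicate how such an oracle can be realized in practice, the sketch below (a simplified stand-in, not the authors' MATLAB/MOSEK code) evaluates the recourse values by solving the second-stage LPs with scipy.optimize.linprog; the accuracy of the returned objective is governed by the LP solver's own tolerances, which is exactly the source of inexactness in (6). The tiny instance at the end is synthetic, and subgradients of f (obtainable from the dual multipliers of the equality constraints) are omitted here because their sign convention depends on the solver.

```python
import numpy as np
from scipy.optimize import linprog

def recourse_value(x, q, W, h_i, T_i):
    """Second-stage value V_i(x) = min { <q, pi> : W pi = h_i - T_i x, pi >= 0 }."""
    res = linprog(c=q, A_eq=W, b_eq=h_i - T_i @ x, bounds=(0, None), method="highs")
    if not res.success:
        raise RuntimeError(res.message)
    return res.fun

def first_stage_objective(x, c, scenarios):
    # f(x) = <c, x> + sum_i p_i V_i(x), with scenarios given as tuples (p_i, q, W, h_i, T_i)
    return c @ x + sum(p * recourse_value(x, q, W, h_i, T_i)
                       for p, q, W, h_i, T_i in scenarios)

# tiny synthetic instance with complete recourse (W = [I, -I]), for illustration only
rng = np.random.default_rng(2)
n1, m2 = 3, 2
W = np.hstack([np.eye(m2), -np.eye(m2)])
scenarios = [(0.5, np.ones(2 * m2), W, rng.standard_normal(m2), rng.standard_normal((m2, n1)))
             for _ in range(2)]
print(first_stage_objective(np.zeros(n1), np.ones(n1), scenarios))
```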
The instances of SP problems are downloaded from the link: http://pwp.gatech.edu/guanghui-lan/computer-codes/.
Four instances are tested, namely SSN(50), SSN(100), 20-term(50) and 20-term(100), where the integer in brackets is the number of scenarios N. The SSN instances come from telecommunications and were studied by Sen, Doverspike, and Cosares [44]; the 20-term instances arise from a motor freight carrier's problem and were studied by Mak, Morton, and Wood [45]. The dimensions of these instances are listed in Table 2.
The parameters are selected as κ = 0.04, t_min = 0.1 and t_1 = 1.1. The maximum bundle size is set to 35. All tests were performed in MATLAB (R2014a) on a PC with an Intel(R) Core(TM) i7-4790 CPU at 3.60 GHz and 4 GB RAM. The quadratic programming and linear programming subproblems were solved by the MOSEK 8 toolbox for MATLAB; see http://www.mosek.com.
We first compare our algorithm (denoted by GALBM) with the accelerated prox-level method (APL) of Reference [43], where the tolerances of the MOSEK linear programming solver are set to their default values. The results are listed in Table 3, which compares the number of iterations (NI), the CPU time in seconds (Time), and the returned minimum values (f^*). The CPU time is measured with the MATLAB commands tic and toc; for each instance, we run 10 times and report the average. From Table 3, we see that, when a similar solution quality is achieved, GALBM significantly outperforms APL in terms of both the number of iterations and the CPU time.
Next, we evaluate the impact of inexact oracles on GALBM. We carry out two groups of tests. The first group adopts fixed tolerances, that is, ε_{x^{k+1}} ≡ ε, with results reported in Tables 4–7. The second group adopts dynamic tolerances with a safeguard parameter μ > 0, that is, ε_{x^{k+1}} = min{ μ, κ_ϵ v_k } with κ_ϵ = 0.7, with results reported in Tables 8–11. The symbol “-” in these tables means that the number of iterations for the corresponding instance exceeds 500.
6. Conclusions
In this paper, we have proposed a generalized alternating linearization bundle method for solving structured convex optimization problems with inexact first-order oracles. Our method can handle various kinds of inexact data by making use of the so-called on-demand accuracy oracles. At each iteration, two interrelated subproblems are solved alternately, aiming to reduce the computational cost. Global convergence of the algorithm is established under different types of inexactness. Numerical results show that the proposed algorithm is promising.
Table 1. Choices of the error bound ε_{x^{k+1}} and descent target γ_{x^{k+1}} for the different oracle instances.
Instances | ε_{x^{k+1}} | γ_{x^{k+1}}
---|---|---
Ex | 0 | +∞
PI | 0 | F_{x̂^k} − θκ v_k
IE | ε > 0 | +∞
AE | κ_ϵ v_k | +∞
PAE | κ_ϵ v_k | F_{x̂^k} − θκ v_k
Table 2. Dimensions of the test instances.
Name | n_1 | m_1 | n_2 | m_2
---|---|---|---|---
SSN | 89 | 1 | 706 | 175
20-term | 63 | 3 | 764 | 124
Table 3. Comparison of GALBM and APL.
Name | Algorithm | NI | Time | f^*
---|---|---|---|---
SSN(50) | GALBM | 105 | 28.12 | 4.838238
 | APL | 147 | 72.40 | 4.838278
SSN(100) | GALBM | 95 | 49.58 | 7.352609
 | APL | 155 | 156.53 | 7.352618
20-term(50) | GALBM | 132 | 47.62 | 2.549453×10^5
 | APL | 156 | 106.51 | 2.549453×10^5
20-term(100) | GALBM | 173 | 128.23 | 2.532875×10^5
 | APL | 261 | 364.93 | 2.532876×10^5
Table 4. GALBM on SSN(50) with fixed tolerances ε_x.
No. | ε_x | NI | Time | f^*
---|---|---|---|---
1 | 10^−4 | 206 | 55.21 | 4.838157
2 | 10^−5 | 202 | 50.37 | 4.838163
3 | 10^−6 | 190 | 49.64 | 4.838247
4 | 10^−7 | 132 | 34.35 | 4.838156
5 | 10^−8 | 105 | 27.81 | 4.838238
6 | 10^−9 | 97 | 24.49 | 4.838188
7 | 10^−10 | 99 | 25.62 | 4.838136
8 | 10^−11 | 84 | 22.98 | 4.838191
9 | 10^−12 | 97 | 27.47 | 4.838128
10 | 10^−13 | 84 | 25.53 | 4.838231
Table 5. GALBM on SSN(100) with fixed tolerances ε_x.
No. | ε_x | NI | Time | f^*
---|---|---|---|---
1 | 10^−4 | - | - | -
2 | 10^−5 | 224 | 109.21 | 7.352932
3 | 10^−6 | 194 | 94.80 | 7.352854
4 | 10^−7 | 127 | 62.17 | 7.352937
5 | 10^−8 | 95 | 46.93 | 7.352758
6 | 10^−9 | 87 | 43.03 | 7.352750
7 | 10^−10 | 79 | 41.55 | 7.353074
8 | 10^−11 | 83 | 43.76 | 7.352939
9 | 10^−12 | 81 | 44.08 | 7.352748
10 | 10^−13 | 81 | 46.77 | 7.352734
Table 6. GALBM on 20-term(50) with fixed tolerances ε_x.
No. | ε_x | NI | Time | f^*
---|---|---|---|---
1 | 10^−2 | 250 | 80.86 | 2.549490×10^5
2 | 10^−3 | 144 | 49.12 | 2.549466×10^5
3 | 10^−4 | 165 | 57.49 | 2.549463×10^5
4 | 10^−5 | 164 | 54.12 | 2.549460×10^5
5 | 10^−6 | 211 | 72.42 | 2.549461×10^5
6 | 10^−7 | 178 | 62.19 | 2.549459×10^5
7 | 10^−8 | 132 | 44.56 | 2.549457×10^5
8 | 10^−9 | 175 | 61.53 | 2.549461×10^5
9 | 10^−10 | 132 | 49.53 | 2.549457×10^5
10 | 10^−11 | 258 | 99.69 | 2.549457×10^5
11 | 10^−12 | 175 | 67.70 | 2.549461×10^5
12 | 10^−13 | 183 | 71.52 | 2.549461×10^5
Table 7. GALBM on 20-term(100) with fixed tolerances ε_x.
No. | ε_x | NI | Time | f^*
---|---|---|---|---
1 | 10^−2 | 227 | 141.74 | 2.532914×10^5
2 | 10^−3 | - | - | -
3 | 10^−4 | 140 | 96.02 | 2.532879×10^5
4 | 10^−5 | 179 | 117.71 | 2.532877×10^5
5 | 10^−6 | 152 | 99.29 | 2.532879×10^5
6 | 10^−7 | 139 | 95.07 | 2.532876×10^5
7 | 10^−8 | 173 | 128.44 | 2.532876×10^5
8 | 10^−9 | 143 | 103.51 | 2.532878×10^5
9 | 10^−10 | - | - | -
10 | 10^−11 | 159 | 110.21 | 2.532877×10^5
11 | 10^−12 | 150 | 112.37 | 2.532878×10^5
12 | 10^−13 | 132 | 99.46 | 2.532878×10^5
Table 8. GALBM on SSN(50) with dynamic tolerances and safeguard μ.
No. | μ | NI | Time | f^*
---|---|---|---|---
1 | 10^−3 | - | - | -
2 | 10^−4 | 201 | 50.54 | 4.838163
3 | 10^−5 | 202 | 49.07 | 4.838163
4 | 10^−6 | 190 | 49.21 | 4.838247
5 | 10^−7 | 132 | 32.26 | 4.838156
6 | 10^−8 | 105 | 27.10 | 4.838238
Table 9. GALBM on SSN(100) with dynamic tolerances and safeguard μ.
No. | μ | NI | Time | f^*
---|---|---|---|---
1 | 10^−3 | - | - | -
2 | 10^−4 | - | - | -
3 | 10^−5 | 284 | 141.85 | 7.353010
4 | 10^−6 | 194 | 97.01 | 7.352854
5 | 10^−7 | 127 | 63.83 | 7.352937
6 | 10^−8 | 95 | 47.97 | 7.352758
Table 10. GALBM on 20-term(50) with dynamic tolerances and safeguard μ.
No. | μ | NI | Time | f^*
---|---|---|---|---
1 | 10^−2 | 199 | 65.09 | 2.549485×10^5
2 | 10^−3 | 140 | 44.73 | 2.549462×10^5
3 | 10^−4 | 165 | 54.08 | 2.549463×10^5
4 | 10^−5 | 164 | 54.18 | 2.549460×10^5
5 | 10^−6 | 211 | 70.67 | 2.549461×10^5
6 | 10^−7 | 178 | 61.90 | 2.549459×10^5
7 | 10^−8 | 132 | 46.94 | 2.549457×10^5
Table 11. GALBM on 20-term(100) with dynamic tolerances and safeguard μ.
No. | μ | NI | Time | f^*
---|---|---|---|---
1 | 10^−2 | 191 | 119.40 | 2.532901×10^5
2 | 10^−3 | 143 | 92.32 | 2.532881×10^5
3 | 10^−4 | 140 | 91.68 | 2.532878×10^5
4 | 10^−5 | 179 | 121.12 | 2.532877×10^5
5 | 10^−6 | 146 | 104.27 | 2.532878×10^5
6 | 10^−7 | 170 | 120.29 | 2.532879×10^5
7 | 10^−8 | 173 | 126.57 | 2.532876×10^5
Author Contributions
C.T. mainly contributed to the algorithm design and convergence analysis; Y.L. and X.D. mainly contributed to the convergence analysis and numerical results; and B.H. mainly contributed to the numerical results. All authors have read and agreed to the published version of the manuscript.
Funding
This work was supported by the National Natural Science Foundation (11761013) and Guangxi Natural Science Foundation (2018GXNSFFA281007) of China.
Acknowledgments
The authors would like to thank the above funding agencies for their support.
Conflicts of Interest
The authors declare no conflict of interest.
1. Tsaig, Y.; Donoho, D.L. Compressed sensing. IEEE Trans. Inform. Theory 2006, 52, 1289-1306.
2. Jung, M.; Kang, M. Efficient nonsmooth nonconvex optimization for image restoration and segmentation. J. Sci. Comput. 2015, 62, 336-370.
3. Sra, S.; Nowozin, S.; Wright, S.J. Optimization for Machine Learning; MIT Press: Cambridge, MA, USA, 2012.
4. Clarke, F.H.; Ledyaev, Y.S.; Stern, R.J.; Wolenski, P.R. Nonsmooth Analysis and Control Theory; Springer: New York, NY, USA, 1998.
5. Yang, L.F.; Luo, J.Y.; Jian, J.B.; Zhang, Z.R.; Dong, Z.Y. A distributed dual consensus ADMM based on partition for DC-DOPF with carbon emission trading. IEEE Trans. Ind. Inform. 2020, 16, 1858-1872.
6. Yang, L.F.; Zhang, C.; Jian, J.B.; Meng, K.; Xu, Y.; Dong, Z.Y. A novel projected two-binary-variable formulation for unit commitment in power systems. Appl. Energ. 2017, 187, 732-745.
7. Yang, L.F.; Jian, J.B.; Zhu, Y.N.; Dong, Z.Y. Tight relaxation method for unit commitment problem using reformulation and lift-and-project. IEEE Trans. Power. Syst. 2015, 30, 13-23.
8. Yang, L.F.; Jian, J.B.; Xu, Y.; Dong, Z.Y.; Ma, G.D. Multiple perspective-cuts outer approximation method for risk-averse operational planning of regional energy service providers. IEEE Trans. Ind. Inform. 2017, 13, 2606-2619.
9. Teo, C.H.; Vishwanathan, S.V.N.; Smola, A.J.; Le, Q.V. Bundle methods for regularized risk minimization. Mach. Learn. Res. 2010, 11, 311-365.
10. Bottou, L.; Curtis, F.E.; Nocedal, J. Optimization Methods for Machine Learning. SIAM Rev. 2018, 60, 223-311.
11. Kiwiel, K.C. A proximal-projection bundle method for Lagrangian relaxation, including semidefinite programming. SIAM J. Optim. 2006, 17, 1015-1034.
12. Tang, C.M.; Jian, J.B.; Li, G.Y. A proximal-projection partial bundle method for convex constrained minimax problems. J. Ind. Manag. Optim. 2019, 15, 757-774.
13. Tseng, P. Applications of a splitting algorithm to decomposition in convex programming and variational inequalities. SIAM J. Control Optim. 1991, 29, 119-138.
14. Mahey, P.; Tao, P.D. Partial regularization of the sum of two maximal monotone operators. Math. Model. Numer. Anal. 1993, 27, 375-392.
15. Eckstein, J. Some saddle-function splitting methods for convex programming. Optim. Meth. Soft. 1994, 4, 75-83.
16. Fukushima, M. Application of the alternating direction method of multipliers to separable convex programming problems. Comput. Optim. Appl. 1992, 1, 93-111.
17. He, B.S.; Tao, M.; Yuan, X.M. Alternating direction method with gaussian back substitution for separable convex programming. SIAM J. Optim. 2012, 22, 313-340.
18. He, B.S.; Tao, M.; Yuan, X.M. Convergence rate analysis for the alternating direction method of multipliers with a substitution procedure for separable convex. Math. Oper. Res. 2017, 42, 662-691.
19. Chao, M.; Cheng, C.; Zhang, H. A linearized alternating direction method of multipliers with substitution procedure. Asia-Pac. Opera. Res. 2015, 32, 1550011.
20. Goldfarb, D.; Ma, S.; Scheinberg, K. Fast alternating linearization methods for minimizing the sum of two convex functions. Math. Program 2013, 141, 349-382.
21. Kiwiel, K.C.; Rosa, C.H.; Ruszczyński, A. Proximal decomposition via alternating linearization. SIAM J. Optim. 1999, 9, 668-689.
22. Kiwiel, K.C. An alternating linearization bundle method for convex optimization and nonlinear multicommodity flow problems. Math. Program 2011, 130, 59-84.
23. De Oliveira, W.; Sagastizábal, C. Level bundle methods for oracles with on-demand accuracy. Optim. Method Softw. 2014, 29, 1180-1209.
24. Kiwiel, K.C. Bundle Methods for Convex Minimization with Partially Inexact Oracles; Technical Report; Systems Research Institute, Polish Academy of Sciences: Warsaw, Poland, 2009.
25. De Oliveira, W.; Sagastizábal, C.; Lemaréchal, C. Convex proximal bundle methods in depth: A unified analysis for inexact oracles. Math. Program. 2014, 148, 241-277.
26. Wolfe, P. A method of conjugate subgradients for minimizing nondifferentiable functions. Math. Program. 1975, 3, 145-173.
27. Mäkelä, M. Survey of bundle methods for nonsmooth optimization. Optim. Meth. Soft. 2002, 17, 1-29.
28. Bonnans, J.F.; Gilbert, J.C.; Lemaréchal, C.; Sagastizábal, C. Numerical Optimization: Theoretical and Practical Aspects, 2nd ed.; Springer: Berlin/Heidelberg, Germany, 2006.
29. Lemaréchal, C. An extension of davidon methods to nondifferentiable problems. Math. Program. 1975, 3, 95-109.
30. Kiwiel, K.C. Approximations in proximal bundle methods and decomposition of convex programs. J. Optim. Theory. Appl. 1995, 84, 529-548.
31. Hintermuller, M. A proximal bundle method based on approximate subgradients. Comput. Optim. Appl. 2001, 20, 245-266.
32. Kiwiel, K.C. A proximal bundle method with approximate subgradient linearizations. SIAM J. Optim. 2006, 16, 1007-1023.
33. Kiwiel, K.C. An algorithm for nonsmooth convex minimization with errors. Math. Comput. 1985, 45, 173-180.
34. Hiriart-Urruty, J.B.; Lemaréchal, C. Convex Analysis and Minimization Algorithms. Number 305-306 in Grundlehren der mathematischen Wissenschaften; Springer: Berlin, Germany, 1993.
35. Rockafellar, R.T. Convex Analysis; Princeton University Press: Princeton, NJ, USA, 2015.
36. Malick, J.; De Oliveira, W.; Zaourar-Michel, S. Uncontrolled inexact information within bundle methods. Eur. J. Oper. Res. 2017, 5, 5-29.
37. De Oliveira, W.; Sagastizábal, C.; Scheimberg, S. Inexact bundle methods for two-stage stochastic programming. SIAM J. Optim. 2011, 21, 517-544.
38. Zakeri, G.; Philpott, A.B.; Ryan, D.M. Inexact cuts in benders decomposition. SIAM J. Optim. 2000, 10, 643-657.
39. Fábián, C.I. Bundle-type methods for inexact data. Cent. Eur. J. Oper. Res. 2000, 8, 35-55.
40. Kiwiel, K.C. Methods of Descent for Nondifferentiable Optimization; Lecture Notes in Mathematics; Springer: Berlin, Germany, 1985.
41. Wallace, S.W.; Ziemba, W.T. Applications of Stochastic Programming; MPS-SIAM Ser. Optim.; SIAM: Philadelphia, PA, USA, 2005.
42. Shapiro, A.; Dentcheva, D.; Ruszczyński, A. Lectures on Stochastic Programming: Modeling and Theory; SIAM: Philadelphia, PA, USA, 2014.
43. Lan, G. Bundle-level type methods uniformly optimal for smooth and nonsmooth convex optimization. Math. Program 2015, 149, 1-45.
44. Sen, S.; Doverspike, R.D.; Cosares, S. Network planning with random demand. Telecommun. Syst. 1994, 3, 11-30.
45. Mak, W.; Morton, D.P.; Wood, R.K. Monte Carlo bounding techniques for determining solution quality in stochastic programs. Oper. Res. Lett. 1999, 24, 47-56.
Chunming Tang*, Yanni Li, Xiaoxia Dong and Bo He
College of Mathematics and Information Science, Guangxi University, Nanning 540004, China
*Author to whom correspondence should be addressed.
Abstract
In this paper, we consider a class of structured optimization problems whose objective function is the summation of two convex functions: f and h, which are not necessarily differentiable. We focus particularly on the case where the function f is general and its exact first-order information (function value and subgradient) may be difficult to obtain, while the function h is relatively simple. We propose a generalized alternating linearization bundle method for solving this class of problems, which can handle inexact first-order information of on-demand accuracy. The inexact information can be very general, which covers various oracles, such as inexact, partially inexact and asymptotically exact oracles, and so forth. At each iteration, the algorithm solves two interrelated subproblems: one aims to find the proximal point of the polyhedron model of f plus the linearization of h; the other aims to find the proximal point of the linearization of f plus h. We establish global convergence of the algorithm under different types of inexactness. Finally, some preliminary numerical results on a set of two-stage stochastic linear programming problems show that our method is very encouraging.