The elastic generalized assignment problem (eGAP) is a natural extension of the generalized assignment problem (GAP) in which the capacities are not fixed but can be adjusted; this adjustment is expressed by continuous variables. These variables may be unbounded or restricted by a lower or upper bound, respectively. This paper is concerned with techniques for reducing several variants of eGAP to GAP, which enables us to employ standard approaches for the GAP. This results in a heuristic which can be customized to provide solutions with an objective value arbitrarily close to the optimum.
Introduction
The elastic generalized assignment problem, eGAP for short, is a natural extension of the generalized assignment problem (GAP), a well-known combinatorial optimization problem with applications in production, logistics and distribution planning, see Nauss (2004). The GAP is to assign jobs to agents in order to minimize the assignment cost without exceeding a given capacity of each agent. While the capacity is fixed in the GAP, the eGAP offers the opportunity to extend or reduce the available capacity. Extending the capacity incurs cost, while reducing capacity provides a profit. The GAP is well known to be NP-hard; NP-hardness of the eGAP follows straightforwardly, see Nauss (2004).
Several efficient algorithms exist for the GAP, see, for example, Yagiura et al (2006), which is based on their previous work, see Yagiura et al (2004), enriched by path relinking. Path relinking is a modern approach that has spread widely during the last decade; it is a generalization of scatter search, see Alfandari et al (2001), Laguna and Martí (2003) or Yagiura et al (2002). The eGAP, on the other hand, has not been studied much so far, although it is of great practical relevance. To the best of our knowledge, there is only one recent paper, by Nauss (2004), treating a special case of the eGAP. It presents a branch-and-bound algorithm to solve small but difficult instances. This algorithm is based on previous work developed for the GAP, see Nauss (2003).
Due to the possibility of adjusting the capacity, the well-known cover inequalities, see Nemhauser and Wolsey (1999), Schrijver (2003) as well as Fügenschuh and Martin (2005), cannot be applied directly. Therefore, the powerful formulations developed by Wolsey (2003) would have to be adapted, which seems to be expensive here due to the number of possible cuts, see Ceria et al (1998). Thus, we propose to reduce the problem to the well-known GAP, for which several efficient algorithms exist.
The remainder of the paper is organized as follows: in the next section, we present the mixed integer formulation and derive some properties of solutions. We then discuss the approaches with one adjustment variable per agent, where this variable is unbounded or limited by one or two bounds, respectively. In the capacity adjustment with two variables section, we give a generalization by introducing an additional adjustment variable for each agent. In the last two sections, we provide some computational results and conclude the paper.
Model formulation and first insights
Given a set M ={1,...,m} of agents and a set N ={1,...,n} of jobs, the objective of the eGAP is to serve all jobs at minimum cost, regarding the adjusted capacity of each agent. To this end, the following binary decision variables are used: xij = 1 if job j is assigned to agent i, and xij = 0 otherwise, for all i∈M, j∈N.
The binary variables ensure that each job is served entirely by a single agent; splitting a job among several agents is not allowed.
The following parameters are assumed to be given: each agent i provides a fixed amount of capacity bi (bi > 0) of an arbitrary resource. In contrast to the GAP, the original capacity bi can be adjusted. The continuous variable si represents the amount of capacity that is added (si > 0) or removed (si < 0), respectively. The cost for the capacity adjustment is ci per unit (ci > 0), which assumes that the cost per unit for increasing capacity equals the revenue per unit for reducing capacity at a given agent.
Job j asks for a specific amount aij (aij > 0) of the capacity of a particular agent i if xij = 1. For serving job j's demand by agent i, the total cost cij (cij ≥ 0) occurs. These costs need not depend on the value aij.
Now, the problem can be stated mathematically as follows:

min ∑i∈M ∑j∈N cij·xij + ∑i∈M ci·si  (1)

s.t. ∑j∈N aij·xij ≤ bi + si  ∀i∈M  (2)

∑i∈M xij = 1  ∀j∈N  (3)

li ≤ si ≤ ui  ∀i∈M  (4)

xij ∈ {0,1}  ∀i∈M, ∀j∈N  (5)

The objective function (1) is twofold. The first part represents the sum of the assigned jobs' costs. The second part accounts for the cost of buying additional capacity and the revenue from selling superfluous capacity, respectively. Constraints (2) ensure that the (modified) capacity limit of the agents is not exceeded. Constraints (3) in combination with (5) ensure that each job is assigned to exactly one agent. Finally, constraints (4) give the bounds for the capacity adjustment. Obviously, if li > 0 or ui < 0, one can adjust the capacity in (2) beforehand, obtaining a modified objective function value.
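As an illustration, model (1)-(5) can be stated with a generic MIP modelling library. The following minimal sketch uses the open-source library PuLP with the CBC solver; all identifiers and the data layout are our own illustrative choices, and the computational study in this paper uses CPLEX instead.

```python
import pulp

def solve_egap(M, N, c, a, b, c_adj, l, u):
    """Minimal sketch of model (1)-(5); data are dicts keyed by agents/jobs."""
    prob = pulp.LpProblem("eGAP", pulp.LpMinimize)
    x = pulp.LpVariable.dicts("x", [(i, j) for i in M for j in N], cat=pulp.LpBinary)
    s = {i: pulp.LpVariable(f"s_{i}", lowBound=l[i], upBound=u[i]) for i in M}
    # objective (1): assignment cost plus cost/revenue of the capacity adjustment
    prob += (pulp.lpSum(c[i, j] * x[i, j] for i in M for j in N)
             + pulp.lpSum(c_adj[i] * s[i] for i in M))
    # constraints (2): adjusted capacity of every agent
    for i in M:
        prob += pulp.lpSum(a[i, j] * x[i, j] for j in N) <= b[i] + s[i]
    # constraints (3): every job is assigned to exactly one agent
    for j in N:
        prob += pulp.lpSum(x[i, j] for i in M) == 1
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return {(i, j): x[i, j].value() for i in M for j in N}, {i: s[i].value() for i in M}
```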
Before presenting reductions of eGAP to GAP for most cases, we first introduce some definitions and basic properties.
For short, a solution of the eGAP can be expressed by the tuple (x, s), where x is the binary vector containing all values xij, i∈M, j∈N, and s is the vector of the corresponding capacity adjustments si, i∈M. Additionally, we define the capacity consumptions

Wi = ∑j∈N aij  ∀i∈M,

where Wi gives the capacity consumption at agent i if all jobs are assigned to this agent.
Note that the number of feasible solutions of the eGAP is not smaller than that of a corresponding GAP with capacities bi + li, i∈M. This might be a burden if the eGAP is solved by a heuristic based on local search, see, for example, Glover et al (2000).
Capacity adjustment with one variable
Capacity adjustment with no restrictive bounds
Before dealing with the one-sided and the two-sided limitation of the capacity adjustment in the following subsections, we first highlight the special case where si is not limited by any effective bound, for example li ≤ −bi and ui ≥ Wi − bi ∀i∈M. Obviously, in each optimal solution constraint (2) is then tight for every agent, see Lemma 1 below.
Lemma 1
Given an optimal solution (x, s) with ∑j∈N aij·xij ≥ bi + li for any i∈M, the corresponding constraint (2) is fulfilled with equality for this agent.
Proof
Suppose that for some i∈M we have bi + li ≤ ∑j∈N aij·xij < bi + si. Then, (x, s') with si' = ∑j∈N aij·xij − bi < si (and sk' = sk for all k∈M\{i}) is feasible and has objective value ∑i∈M ∑j∈N cij·xij + ∑i∈M ci·si' < ∑i∈M ∑j∈N cij·xij + ∑i∈M ci·si. Therefore, (x, s) cannot be optimal. □
Thus, si can be replaced in the objective function by ∑j∈N aij·xij − bi while (4) is dropped. After regrouping the objective function we obtain:

min ∑i∈M ∑j∈N (cij + ci·aij)·xij − ∑i∈M ci·bi  (6)

s.t. ∑i∈M xij = 1  ∀j∈N  (7)

xij ∈ {0,1}  ∀i∈M, ∀j∈N  (8)

Problem (6)-(8) is solvable in linear time by setting xij = 1 for each tuple (i, j) with j∈N and i = argmin_{i'∈M} (ci'j + ci'·ai'j), that is, the agent with the smallest modified cost coefficient in (6), using an arbitrary tie breaker if needed.
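As an illustration of this linear-time rule, the following sketch (with our own hypothetical data layout: dense cost and consumption matrices indexed by agent and job) assigns each job to the agent with the smallest modified cost coefficient from (6).

```python
def solve_unbounded_egap(c, a, c_adj, b):
    """Optimal assignment for the case without restrictive bounds on s_i.

    Each job j goes to the agent with the smallest modified cost c_ij + c_i*a_ij
    from (6); ties are broken arbitrarily (here: by the smallest agent index).
    Returns the assignment and the objective value of (6), including -sum_i c_i*b_i.
    """
    m, n = len(c), len(c[0])
    assign = []
    total = -sum(c_adj[i] * b[i] for i in range(m))
    for j in range(n):
        best = min(range(m), key=lambda i: c[i][j] + c_adj[i] * a[i][j])
        assign.append(best)
        total += c[best][j] + c_adj[best] * a[best][j]
        # runs in O(m*n); s_i is afterwards fixed to sum_j a_ij*x_ij - b_i (Lemma 1)
    return assign, total
```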
Capacity adjustment with upper bounds
Assume li ≤ −bi for each agent and that there is an i'∈M with ui' ≤ Wi' − bi'. Here again, for each optimal solution constraint (2) is tight, see Lemma 1. Thus we can replace si by ∑j∈N aij·xij − bi in (1) and (4) ∀i∈M, where the lower bounds li are already dropped. Now the problem can be stated as follows:

min ∑i∈M ∑j∈N (cij + ci·aij)·xij − ∑i∈M ci·bi  (9)

s.t. ∑i∈M xij = 1  ∀j∈N  (10)

∑j∈N aij·xij ≤ bi + ui  ∀i∈M  (11)

xij ∈ {0,1}  ∀i∈M, ∀j∈N  (12)

This is a standard GAP with modified costs and adjusted capacities. Obviously, for all i''∈M with ui'' > Wi'' − bi'', constraint (11) can be dropped, too.
Capacity adjustment with lower bounds
Now, we have an i'∈M with li' ≥ −bi', and ui ≥ Wi − bi for each i∈M. Here, (4) is simply replaced by li ≤ si ∀i∈M, which does not change the underlying problem.
As no further direct transformation of the problem is possible here, in contrast to the previous section, we propose to transfer the concepts developed for the knapsack problem with a continuous variable, KPC for short, see Büther and Briskorn (2007): we implement the binary coding of the adjustments for the eGAP.
First, we replace the capacity adjustment si by si = ui − s̄i ∀i∈M, see Nauss (2004). The new variables s̄i represent the unused capacity adjustments and are thus bounded by 0 ≤ s̄i ≤ ui − li ∀i∈M. For illustration, one can imagine that the capacities are extended in advance at cost ci·ui and then, during the optimization, the superfluous capacities can be sold, which is expressed by s̄i > 0. Now, the continuous variables s̄i are substituted by the binary code s̃i = ∑ki∈Di eiki·yiki, with Di = {1,...,di} ∀i∈M, where di is the maximum number of binary variables yiki used at agent i. Here, yiki equals one if and only if the corresponding amount of capacity eiki is sold.
Now, we detail our requirements regarding di and eiki , see Büther and Briskorn (2007):
(i) Values of s̃i must neither undershoot 0 nor exceed ui − li, that is, 0 ≤ ∑ki∈Di eiki·yiki ≤ ui − li for each binary vector yi = (yi1,...,yidi).
(ii) Given a precision p∈ℕ, for each value vi∈[0, ui − li] there must be a binary vector yi such that |vi − ∑ki∈Di eiki·yiki| < 10^(−p).
(iii) Regarding (i) and (ii), the number of additional binary variables di should be chosen as small as possible.
It is well known that any integer value can be stated in a binary representation. Thus, by choosing the values eiki appropriately, we can represent any integer between 0 and ui − li. If ui − li = 2^r − 1 with r∈ℕ, then di = log2(ui − li + 1) = r and eiki = 2^(ki−1) for each ki∈Di is the obvious choice. Otherwise, if ui − li ≠ 2^r − 1 for all r∈ℕ, we choose di = ⌈log2(ui − li + 1)⌉, eiki = 2^(ki−1) for each ki∈{1,...,di − 1} and eidi = ui − li − 2^(di−1) + 1.
Additionally, the concept can even be used approximately for real-valued adjustments si. Given the required precision p∈ℕ, we apply the procedure described above to the value ⌊10^p·(ui − li)⌋ ∀i∈M. After obtaining the values eiki, we divide them by 10^p. Obviously, these transformed values satisfy condition (ii), see Müller-Bungart (2007). The number of additional variables is now di = ⌈log2(⌊10^p·(ui − li)⌋ + 1)⌉. Therefore, the approximation and thus the quality of the upper bound can be improved arbitrarily by increasing p.
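A minimal sketch of the binary coding described above is given below; the function name and data layout are our own, and p = 0 recovers the pure integer case.

```python
import math

def binary_coding(u_i, l_i, p=0):
    """Compute the number of pseudo-jobs d_i and their sizes e_ik for one agent.

    Covers every multiple of 10**-p in [0, u_i - l_i]; p = 0 recovers the
    integer case described above. Illustrative sketch only.
    """
    scaled = math.floor(10**p * (u_i - l_i))   # value to be represented in binary
    if scaled <= 0:
        return 0, []
    d_i = math.ceil(math.log2(scaled + 1))     # number of binary variables y_ik
    e = [2**(k - 1) for k in range(1, d_i)]    # e_ik = 2^(k-1) for k = 1..d_i-1
    e.append(scaled - 2**(d_i - 1) + 1)        # last coefficient closes the range
    return d_i, [v / 10**p for v in e]         # rescale back to original units

# example: u_i - l_i = 10.3, precision p = 1  ->  d_i = 7, sizes summing to 10.3
print(binary_coding(10.3, 0.0, p=1))
```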
The reformulation, namely eGAPbin, can now be stated as follows:

min ∑i∈M ∑j∈N cij·xij + ∑i∈M ci·ui − ∑i∈M ∑ki∈Di ci·eiki·yiki  (13)

s.t. ∑j∈N aij·xij + ∑ki∈Di eiki·yiki ≤ bi + ui  ∀i∈M  (14)

∑i∈M xij = 1  ∀j∈N  (15)

xij ∈ {0,1}  ∀i∈M, ∀j∈N  (16)

yiki ∈ {0,1}  ∀i∈M, ∀ki∈Di  (17)
Lemma 2
The model formulation eGAPbin can be transferred to an equivalent standard GAP.
The basic idea of our proof is to extend the eGAPbin by adding artificial variables, which are then used to build the required constraints for a standard GAP.
Proof
First, the artificial binary variables zi'iki, ∀i'∈M, ∀i∈M, ∀ki∈Di, with i' ≠ i, are introduced. These variables correspond to the ki-th extension at agent i and they enable us to 'sell' this capacity extension at agent i'. Obviously, for these variables we require ei'iki = 0 ∀i' ≠ i, so that they influence neither the constraints nor the objective function value, as such a selling is not admissible. Then, we can formulate, analogously to (15):

∑i'∈M\{i} zi'iki + yiki = 1  ∀i∈M, ∀ki∈Di  (18)
Apparently, combining (13)-(17) with the artificial variables and (18), we get a standard GAP.
For each solution of the GAP, one can find a corresponding solution for the eGAPbin, having the same assignment of jobs and the same objective function value, and vice versa.
First, for the GAP, if yiki = 0 for some i∈M, then (18) reduces to ∑i'∈M\{i} zi'iki = 1. As all artificial variables have coefficients of zero, an arbitrary one of them takes the value 1 and the remaining ones are 0. Therefore, an equivalent solution for the eGAPbin can be found by simply setting yiki = 0 and ignoring the artificial variables. Otherwise, if yiki = 1 for some i∈M in the GAP, then (18) reduces to ∑i'∈M\{i} zi'iki = 0 and all artificial variables zi'iki, i'∈M\{i}, are zero. In this case, we obtain a corresponding solution for the eGAPbin by setting yiki = 1, too.
Now, given a solution for the eGAPbin, if yiki = 0, i∈M, one gets a corresponding solution for the GAP by setting an arbitrary artificial variable zi'iki = 1, i'∈M\{i}, and the remaining variables in (18) to 0. Therefore, one can find m − 1 solutions for the GAP having the same objective function value. Otherwise, if yiki = 1, i∈M, all artificial variables zi'iki, i'∈M\{i}, of the GAP have to be set to 0 and yiki = 1, to generate the corresponding solution. □
Combining the two cases, for the 'real' adjustment variables in the GAP either yiki or exactly one corresponding artificial variable is set to one, so that yiki ∈ {0,1} as required in the eGAPbin, too. In summary, fulfilling (18) in the GAP is equivalent to ensuring (17) in the eGAPbin.
Due to the binary coding, the number of variables increases. The original model, eGAP, has m·n + m variables, whereas the eGAPbin has m·n + ∑i∈M di variables, with di = ⌈log2(ui − li + 1)⌉ ≥ 1. If we extend the range of the adjustment si to real numbers, the value of di increases to di = ⌈log2(⌊10^p·(ui − li)⌋ + 1)⌉.
Thus, the equivalent GAP for the eGAPbin contains n' jobs, where n' = n + ∑i∈M di, corresponding to the number of real jobs plus the number of pseudo-jobs for the capacity adjustment.
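For illustration, the following sketch assembles the data of the equivalent GAP from the eGAPbin data along the lines of Lemma 2; the job ordering (real jobs first, then one pseudo-job per pair (i, ki)) and all names are our own choices.

```python
def build_equivalent_gap(m, n, c, a, b, c_adj, u, e):
    """Build cost/weight matrices and capacities of the GAP equivalent to eGAPbin.

    c[i][j], a[i][j] : cost and capacity consumption of real job j at agent i
    b[i], u[i]       : original capacity and upper adjustment bound of agent i
    c_adj[i]         : cost/revenue per unit of capacity adjustment at agent i
    e[i]             : binary-coding sizes e_ik of agent i (see previous sketch)

    Assigning pseudo-job (i, k) to its own agent i means 'sell e_ik units there'
    (cost -c_adj[i]*e_ik, weight e_ik); assigning it to any other agent has zero
    cost and weight, mirroring the artificial variables z of Lemma 2.
    """
    n_prime = n + sum(len(e[i]) for i in range(m))
    cost = [[0.0] * n_prime for _ in range(m)]
    weight = [[0.0] * n_prime for _ in range(m)]
    for i in range(m):
        for j in range(n):
            cost[i][j] = c[i][j]
            weight[i][j] = a[i][j]
    col = n
    for i in range(m):
        for e_ik in e[i]:
            cost[i][col] = -c_adj[i] * e_ik     # revenue for unused capacity at agent i
            weight[i][col] = e_ik               # occupies the extended capacity b_i + u_i
            col += 1                             # other agents keep 0 cost / 0 weight
    capacity = [b[i] + u[i] for i in range(m)]  # capacities extended in advance
    constant = sum(c_adj[i] * u[i] for i in range(m))  # add back to the GAP optimum
    return cost, weight, capacity, constant
```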
Since we reduce the solution space to discrete values of s̃i within [0, ui − li], where the domain of s̃i is a subset of the domain of s̄i, a solution obtained via eGAPbin is feasible for the original model eGAP, too. Thus, the solution value of eGAPbin is an upper bound for the optimal solution value of eGAP. Given an optimal solution (x, s̃) to eGAPbin, the upper bound can be strengthened if bi + li < ∑j∈N aij·xij < bi + ui − s̃i. This occurs if the necessary adjustment is not representable by the binary coding with precision p. Then, s̃i can be increased to ⌈s̃i⌉ = bi + ui − ∑j∈N aij·xij, according to Lemma 1. In the following, we analyze the gap between the upper bound given by eGAPbin and the optimal solution value of eGAP.
Theorem 1
The gap Δ between the upper bound obtained by eGAPbin and the optimal objective value of eGAP cannot be larger than ∑i∈M ci·10^(−p).
Proof
Let ⌊s̄i⌋ denote the binary-coded value which is closest to, but not larger than, s̄i. Then, given the optimal solution (x, s̄) to eGAP and the optimal solution (x', s̃) to eGAPbin, the following holds:

∑i∈M ∑j∈N cij·xij − ∑i∈M ci·s̄i ≤ ∑i∈M ∑j∈N cij·x'ij − ∑i∈M ci·⌈s̃i⌉  (19)

≤ ∑i∈M ∑j∈N cij·x'ij − ∑i∈M ci·s̃i  (20)

≤ ∑i∈M ∑j∈N cij·xij − ∑i∈M ci·⌊s̄i⌋  (21)

where the constant ∑i∈M ci·ui has been left out for simplification.
Relation (19) expresses that the improved optimal solution to eGAPbin is an upper bound on the optimal solution of eGAP. Obviously, this improved solution has no higher value than the optimal solution to eGAPbin itself, giving (20). The representation of (x, s̄) in terms of eGAPbin cannot have a smaller value than (x', s̃), as otherwise this solution would have been found by eGAPbin as well; therefore, (21) is valid.
Apparently, regarding requirement (ii), we get:

Δ ≤ ∑i∈M ci·(s̄i − ⌊s̄i⌋)  (22)

< ∑i∈M ci·10^(−p)  (23) □
As we can see from Theorem 1, we can always obtain an upper bound (and a corresponding heuristic solution) that is arbitrarily close to the optimal objective value by increasing p. Obviously, if all coefficients of the jobs in the capacity constraints are integers, the adjustment will only take on integer values, too, according to Lemma 1. Thus, with p = 0 our reformulation will always deliver an optimal solution. This leads to the open question of whether, in the general case, we can choose p such that solving eGAPbin always yields an optimal solution.
As the eGAPbin can be transferred to an equivalent standard GAP, all known efficient algorithms for the GAP can be applied directly. Every feasible solution will be feasible for the underlying eGAP, too. Additionally, one can now use standard cover cuts to tighten the corresponding LP (linear programming) relaxation, see Wolsey (1998). In Figure 1, the extended capacity of an agent i∈M is depicted. Apparently, all original jobs xij could now be assigned to this specific agent. But considering the original jobs and the pseudo-jobs yiki together, not all of them can be assigned to the agent at the same time, and we obtain a typical knapsack constraint, which enables cover cuts.
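To illustrate the last point, the sketch below separates a cover inequality for one agent's knapsack constraint (14) with a simple greedy heuristic; this is our own illustrative routine, not the separation procedure used by CPLEX or proposed in the paper.

```python
def greedy_cover_cut(weights, capacity, lp_values):
    """Try to find a cover C with total weight > capacity such that the cover
    inequality  sum_{k in C} z_k <= |C| - 1  is violated by the fractional LP
    solution lp_values.  Items are the real jobs and pseudo-jobs of one agent
    in constraint (14); returns the cover's index set or None.
    """
    # take items that are most 'used' by the LP solution first
    order = sorted(range(len(weights)), key=lambda k: 1.0 - lp_values[k])
    cover, total = [], 0.0
    for k in order:
        cover.append(k)
        total += weights[k]
        if total > capacity:                     # C is a cover
            lhs = sum(lp_values[k2] for k2 in cover)
            if lhs > len(cover) - 1 + 1e-6:      # inequality is violated
                return cover
            return None                           # cover found but not violated
    return None                                   # no cover exists
```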
Note that, as only a lower bound is restrictive here, the capacities can be extended arbitrarily, and every assignment of jobs to agents is feasible. Thus, there exist m^n feasible assignments, which might result in high computation times if heuristics based on local search are used to explore the solution space (see the Introduction section).
Capacity adjustment with lower and upper bounds
In this case, a reduction of the model formulation analogous to the preceding subsections is not possible, and we propose to use the binary coding of the adjustment right away. Thus, we again obtain a heuristic solution whose objective function value is an upper bound for the optimal one. As we now have two restrictive bounds for the adjustment, we have to focus on the appropriate choice of the eiki to ensure that ∑ki∈Di eiki ≤ ui − li, with Di = {1,...,di} ∀i∈M, see the previous section and Büther and Briskorn (2007) for further details. The obtained model is a standard GAP, too, and the results of the previous section carry over analogously.
Capacity adjustment with two variables
This is a generalization of the eGAP, where we have two continuous variables si⁻ and si⁺ for each capacity constraint. The first variable, si⁻, i∈M, 0 ≤ si⁻ ≤ ui⁻, is used for reducing the capacity, the second, si⁺, i∈M, 0 ≤ si⁺ ≤ ui⁺, for extending it. Assume that unused capacity can be employed for other projects; thus, each unit of unused capacity reduces the total cost by ci⁻ (ci⁻ > 0), while each unit of extension still increases the cost by ci⁺ (ci⁺ > 0). The variables si⁺ are identical to si in the previous sections. Now, the double GAP, deGAP for short, can be expressed as follows:

min ∑i∈M ∑j∈N cij·xij + ∑i∈M ci⁺·si⁺ − ∑i∈M ci⁻·si⁻  (24)

s.t. ∑j∈N aij·xij ≤ bi + si⁺ − si⁻  ∀i∈M  (25)

∑i∈M xij = 1  ∀j∈N  (26)

0 ≤ si⁻ ≤ ui⁻  ∀i∈M  (27)

0 ≤ si⁺ ≤ ui⁺  ∀i∈M  (28)

xij ∈ {0,1}  ∀i∈M, ∀j∈N  (29)
We do not claim equality in (25): if the upper bound ui'⁻ is restrictive for some i'∈M, we might still use less capacity than bi' − ui'⁻ at this agent i' and 'waste' unused capacity. A real-life application for such a model might arise when solving problems on capital markets, where we can borrow or invest money at different interest rates.
Obviously, we have to ensure that ci⁻ < ci⁺ ∀i∈M. Otherwise, we would be able to gain a profit by extending the capacity, si⁺ > 0, and 'selling' this capacity at once, si⁻ > 0. One unit of this nonsensical adjustment would contribute ci⁺ − ci⁻ < 0 to the objective. From this it follows that, in an optimal solution, at most one of the two adjustment variables of each agent will be larger than zero. Therefore, an idea might be to split the problem into two subproblems for each agent, the first requiring si⁻ ≥ 0 ∧ si⁺ = 0 and the second requiring si⁻ = 0 ∧ si⁺ ≥ 0. Unfortunately, this results in 2^m problems, each of them being an eGAP, which is known to be NP-hard.
Note, if ci⁻ = ci⁺ ∀i∈M, we get the pure eGAP with si bounded by −ui⁻ ≤ si ≤ ui⁺ ∀i∈M, see the previous section. There we showed that the binary coding of the adjustment is an appropriate technique. If ci⁻ ≠ ci⁺, we propose to use the binary coding, too. First, we replace si⁺ by si⁺ = ui⁺ − s̄i⁺ ∀i∈M, see Nauss (2004), where s̄i⁺ is limited by 0 ≤ s̄i⁺ ≤ ui⁺ ∀i∈M. Now, all continuous variables s̄i⁺ as well as si⁻ are substituted by binary codes: s̃i⁺ = ∑ki∈Di⁺ eiki⁺·yiki⁺, with Di⁺ = {1,...,di⁺}, and s̃i⁻ = ∑ki∈Di⁻ eiki⁻·yiki⁻, with Di⁻ = {1,...,di⁻}, i∈M. Here again, the determination of the required values has to ensure requirements (i)-(iii) as before. Now, the problem can be stated as:

min ∑i∈M ∑j∈N cij·xij + ∑i∈M ci⁺·ui⁺ − ∑i∈M ∑ki∈Di⁺ ci⁺·eiki⁺·yiki⁺ − ∑i∈M ∑ki∈Di⁻ ci⁻·eiki⁻·yiki⁻

s.t. ∑j∈N aij·xij + ∑ki∈Di⁺ eiki⁺·yiki⁺ + ∑ki∈Di⁻ eiki⁻·yiki⁻ ≤ bi + ui⁺  ∀i∈M

∑i∈M xij = 1  ∀j∈N

xij ∈ {0,1}  ∀i∈M, ∀j∈N

yiki⁺ ∈ {0,1}  ∀i∈M, ∀ki∈Di⁺

yiki⁻ ∈ {0,1}  ∀i∈M, ∀ki∈Di⁻
This again can be transferred to an equivalent standard GAP, which enables us to use standard approaches.
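Analogously to the earlier construction for eGAPbin, one could assemble the data of this equivalent GAP by generating two families of pseudo-jobs per agent, as in the following sketch (again with our own, purely illustrative names and layout).

```python
def build_degap_gap(m, n, c, a, b, c_plus, c_minus, u_plus, e_plus, e_minus):
    """Equivalent GAP data for the binary-coded deGAP.

    e_plus[i]  : binary-coding sizes for the unused extension (pseudo-jobs 'not bought')
    e_minus[i] : binary-coding sizes for the capacity reduction (pseudo-jobs 'sold')
    Pseudo-jobs of either family consume capacity only at their own agent and have
    zero cost/weight elsewhere; capacities are b_i + u_plus_i, and the constant
    sum_i c_plus_i*u_plus_i has to be added back to the GAP optimum.
    """
    n_prime = n + sum(len(e_plus[i]) + len(e_minus[i]) for i in range(m))
    cost = [[0.0] * n_prime for _ in range(m)]
    weight = [[0.0] * n_prime for _ in range(m)]
    for i in range(m):
        for j in range(n):
            cost[i][j], weight[i][j] = c[i][j], a[i][j]
    col = n
    for i in range(m):
        for e_ik in e_plus[i]:                   # e_ik units of extension not bought
            cost[i][col], weight[i][col] = -c_plus[i] * e_ik, e_ik
            col += 1
        for e_ik in e_minus[i]:                  # e_ik units of capacity sold
            cost[i][col], weight[i][col] = -c_minus[i] * e_ik, e_ik
            col += 1
    capacity = [b[i] + u_plus[i] for i in range(m)]
    constant = sum(c_plus[i] * u_plus[i] for i in range(m))
    return cost, weight, capacity, constant
```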
By adding several additional variables for extending or reducing the capacity with different costs and profits, we can model a cascaded cost function for the adjustment. Of course, the cost values have to be chosen such that generating a profit by buying and selling capacity at the same time is prevented. For such a problem formulation, our binary coding can be used, too. A real-life example can be found in capital markets, where the interest rate increases if the budget is extended beyond a fixed value, as the risk increases.
If we aim for a balanced capacity consumption over all agents, we obtain a modification of the deGAP. Now, each unit of unused capacity si⁻ causes costs, for each i∈M. Thus, we strictly need equality in constraint (25), as otherwise si⁻ would always be at its lower bound 0, ∀i∈M. A related work can be found in Nauss (2004). Certainly, for this problem, the binary coding of the adjustments can be used, too. But, due to the equality in constraint (25), we do not get a standard GAP. To our knowledge, the best solution method so far is the improved branch-and-bound algorithm developed by Nauss (2004). Note that, even if standard Lagrangean relaxations of (26) are used to obtain lower bounds on the objective, the resulting problem cannot be decomposed into standard knapsack problems as can be done for the GAP, see, for example, Yagiura et al (2006). Due to the equality in (25), the algorithms developed by Volgenant and Marsman (1998), Kaufman et al (1985) or Ram and Sarin (1988) have to be used, which are computationally more expensive than, for example, the combo algorithm for the standard knapsack problem by Martello et al (1999).
Computational results
After giving the theoretical background in the previous sections, we now present some computational results to show the efficiency of the developed reformulations with the binary coding of the adjustments. The algorithms are coded in C++ and executed on an Intel Pentium D processor with a clock speed of 2.80 GHz and 1 GB RAM, using the operating system Windows XP. In order to compare the different formulations, continuous variables versus binary representation, we used the commercial solver ILOG CPLEX 10.1 with standard settings.
There are five well-known types of benchmark instances for the GAP, called types A, B, C, D and E, with up to 80 agents and 1600 jobs. In our work, altogether 57 instances have been tested, split into 6-6-15-15-15 according to types A-B-C-D-E. The aij are random integers from [1,100] and the cij are random integers from [1,1000], where these parameters are correlated differently depending on the instance type, see Yagiura et al (2004) for more details. These instances have then been modified to eGAP instances according to Nauss (2004) and solved once with continuous variables (eGAP) and once with the binary coding (eGAPbin). The adjustments are only limited by lower bounds li = 0 for each i∈M. We set three different time limits, 60 s, 600 s and 3600 s, to compare the obtained bounds for the objectives of the two formulations. Summarizing the used parameters, we obtain a total of 342 problems (57 instances × 2 formulations × 3 time limits).
The detailed results for each problem can be found in Tables 1-12.
Finally, we summarize these results in Tables 13-15. Note that all coefficients for the capacity consumption are integers for these instances; thus, solving the binary reformulation can always deliver an optimal solution.
In the detailed computational results, the first column specifies the tested instance; the identifier consists of the instance type, followed by two digits giving the number of agents, whereas the remaining digits give the number of jobs. In the next columns, eGAP and eGAPbin identify which formulation has been used. The status is taken directly from CPLEX and expresses the solution status when CPLEX stops its calculations: 101 for optimality; 102 for optimality with respect to the CPLEX-defined minimum deviation between the lower and upper bound; 107 for exceeding the time limit; and 109 for running out of memory. The next four columns give the best obtained lower and upper bound as well as the gap between the two, where the % gap refers to the upper bound. The last two columns display the required time and the number of nodes explored by the branch-and-bound algorithm of CPLEX.
In the summarizing Table 13, the first column gives the instance type (A-E). The next columns, eGAP and eGAPbin, identify which formulation has been used. The values in the table give the percentage of instances that were solved to optimality by CPLEX. Clearly, instances of type A are too easy and instances of type D too difficult to reveal any differences. For all other types, however, the reformulation with binary coding of the adjustment is solved to optimality more often.
Tables 14 and 15 compare the best obtained lower and upper bounds, respectively, on the same instances as before. The first column again gives the instance type. Then, for each time limit, there are three columns: the first gives the percentage of instances with identical bounds for both formulations, the second the percentage of instances where the binary coding delivers the better bound, and the third the percentage where the original formulation provides the better bound. Regarding the results, we again see that instances of type A are too easy. Note that, for type A, although CPLEX always proved optimality, the lower bounds obtained by eGAPbin are sometimes higher. This is due to the stopping criteria of the branch-and-bound algorithm implemented in CPLEX; the corresponding upper bounds are always identical. For the other instance types, the binary coding clearly outperforms the equivalent formulation with continuous variables in generating lower bounds. The same mostly holds for the upper bounds, too; only for the instances of type D do both formulations seem to be equivalent.
Note that, when analyzing the computational results of CPLEX in detail, we recognized that CPLEX ran out of memory for 51% of the instances when the time limit of 3600 s and the continuous variables were used. For the binary representation, the same occurred for only 23% of the instances.
Knowing well that the chosen instances are just a small sample, we note the following:
For the given instances, the commercial solver CPLEX handles the binary coding better for producing lower bounds, probably due to the well-implemented cut generation that strengthens the LP relaxation.
Regarding the absolute values of the obtained lower bounds, they only increase slightly for the binary coding if the time limit is increased. But for most of the difficult instances, the lower bounds obtained by eGAPbin within 60 s are still better than or equal to those obtained by eGAP within 3600 s.
All used instances of type A and B as well as some of type C and E are quite easy to solve within a time limit of 3600 s. Thus, in future work, we will have to test different costs for the adjustment of the capacity and, in general, adjustments limited by lower as well as upper bounds.
Summarizing, although the binary representation requires more variables in general, the corresponding instances can be handled better by CPLEX regarding the computation times and the obtained lower as well as upper bounds.
Conclusions and outlook
In our previous work, see Büther and Briskorn (2007), we proposed some useful reductions of the KPC to a standard knapsack problem, among others by the binary representation of the continuous variable. In this work, we transferred this concept to the eGAP. Unfortunately, in general, it was not possible to determine an optimal solution straightforwardly, as could be done for the KPC in some special cases, see Büther and Briskorn (2007). But in most cases we were able to prove that the resulting problem can be transferred to an equivalent standard GAP. This reduction is straightforward, at the cost of obtaining only a heuristic solution, whose objective function value can, however, be brought arbitrarily close to the optimal one. Note that the concept of replacing the continuous variables could be applied to all classes of the eGAP occurring in this work. As this is a general approach, we propose to transfer the concept to further mixed integer programs.
[1] Christian-Albrechts-Universität zu Kiel, Kiel, Germany
Correspondence : M Büther, Institut für Betriebswirtschaftslehre, Christian-Albrechts-Universität zu Kiel, Olshausenstraße 40, 24098 Kiel, Germany
Alfandari L, Plateau A and Tolla P (2001). A two-phase algorithm for the generalized assignment problem. Technical Report. Proceedings of the 4th Metaheuristics International Conference , Porto, July, pp 175-179.
Büther M and Briskorn D (2007). Reducing the 0-1 knapsack problem with a single continuous variable to the standard 0-1 knapsack problem. Technical Report. Christian-Albrechts-Universität zu Kiel, September.
Ceria S, Cordier C, Marchand H and Wolsey LA (1998). Cutting planes for integer programs with general integer variables. Math Program 81 : 201-214.
Fügenschuh A and Martin A (2005). Computational integer programming and cutting planes. In: Aardal K, Nemhauser GL and Weismantel R (eds). Handbooks in Operations Research and Management Science Vol. 2 . Elsevier: New York, pp. 69-121.
Glover F, Laguna M and Martí R (2000). Fundamentals of scatter search and path relinking. Control Cybern 29 : 653-684.
Kaufman L, Plastria F and Tubeeckx S (1985). Zero-one knapsack problem with equality constraint. Eur J Opl Res 19 : 384-389.
Laguna M and Martí R (2003). Scatter Search: Methodology and Implementations in C. Kluwer Academic Publishers.
Martello S, Pisinger D and Toth P (1999). Dynamic programming and strong bounds for the 0-1 knapsack problem. Mngt Sci 45 : 414-424.
Müller-Bungart M (2007). Revenue Management with Flexible Products . Lect Notes Econ Math , 596, Springer: Berlin.
Nauss RM (2003). Solving the generalized assignment problem: An optimizing and heuristic approach. INFORMS J Comput 15 : 249-266.
Nauss RM (2004). The elastic generalized assignment problem. J Opl Res Soc 55 : 1333-1341.
Nemhauser GL and Wolsey L (1999). Integer and Combinatorial Optimization . Wiley: New York.
Ram B and Sarin S (1988). An algorithm for the 0-1 equality knapsack problem. J Opl Res Soc 39: 1045-1049.
Schrijver A (2003). Combinatorial Optimization--Polyhedra and Efficiency . Springer: Berlin.
Volgenant A and Marsman S (1998). A core approach to the 0-1 equality knapsack problem. J Opl Res Soc 49 : 86-92.
Wolsey LA (2003). Strong formulations for mixed integer programs: Valid inequalities and extended formulations. Math Program 97 : 423-447.
Wolsey LA (1998). Integer Programming . Wiley: New York.
Yagiura M, Ibaraki T and Glover F (2002). A path relinking approach for the generalized assignment problem. Technical Report. Proceedings of the International Symposium on Scheduling , Japan, pp 105-108.
Yagiura M, Ibaraki T and Glover F (2004). An ejection chain approach for the generalized assignment problem. INFORMS J Comput 16 : 133-151.
Yagiura M, Ibaraki T and Glover F (2006). A path relinking approach for the generalized assignment problem. Eur J Opl Res 169 : 548-569.
© Operational Research Society 2010