1. Introduction
An assignment problem can be interpreted as a problem of resource allocation in which a certain number of “tasks” are performed by some “agents” so that the total cost of pairing tasks with agents is minimized. There are many ways to constrain this problem, such as imposing a one-to-one mapping (the balanced problem), searching for a specific number of assignments (the k-cardinality assignment problem), imposing constraints on the resources (the generalized assignment problem), allowing agents to perform multiple tasks, or, conversely, allowing some tasks to be performed by more than one agent. For reviews on the variety of assignment problems and the algorithms that have been proposed to solve them, we refer the reader to Refs. [1,2,3] and references therein. Assignment problems are fundamental combinatorial optimization problems. As such, solving them is a significant concern in operational research, economics, and data sciences, among other disciplines. Assignment problems have been and continue to be a focal point of investigation for mathematicians, statisticians, computer scientists, and even physicists. In this paper, we propose a general framework that encompasses all the assignment problems mentioned above. We develop an approximate solution to this problem based on statistical physics and prove that it converges efficiently to an exact solution for non-degenerate as well as degenerate assignment problems.
Let us first define the assignment problem. Let T be the set of tasks and A the set of agents, with possibly different cardinalities |T| and |A|. If we define C(i,j) as the cost of assigning task i to agent j, the balanced assignment problem can be stated as finding a transportation matrix G between T and A whose elements are non-negative integer numbers and that minimizes
U(G) = \sum_{i \in T} \sum_{j \in A} C(i,j)\, G(i,j) \qquad (1)
In the special case of |T| = |A|, with a one-to-one assignment of tasks to agents (the balanced assignment problem), G corresponds to a permutation matrix. In the more general case, |T| and |A| can be different (unbalanced assignment), the number of pairings k may be imposed with k ≤ min(|T|, |A|) (k-cardinality assignment problem), and/or tasks and agents may be assigned to more than one agent and task, respectively. In those general cases, G remains binary but may not be a permutation matrix (in the specific case of the k-cardinality problem, for example, G is a partial permutation matrix of rank k).
As mentioned above, assignment problems are basically combinatorial optimization problems that can be solved numerically using a linear programming model or, more specifically, a linear integer programming method. Integer linear programming is, however, theoretically NP-complete. There are fast heuristics that can solve such problems, such as Dantzig’s simplex method [4]. The assignment problem, however, is a specialized case of integer linear programming that can be recast and solved by an exact polynomial algorithm. For example, the most common algorithm for solving the balanced assignment problem has its origin in the work of Jacobi [5]. It was “rediscovered” 60 years later by Kuhn [6] and is now dubbed the Hungarian algorithm. It is a global algorithm that iteratively identifies assignments between agents and tasks. It is polynomial in running time (O(n^4) or O(n^3), depending on the implementation). While it is serial by nature, there are many recent efforts to parallelize this algorithm (see, for example, [7,8,9] and references therein).
The Hungarian algorithm can be adapted to many unbalanced assignment problems. If there are more tasks than agents, for example, it is possible to add “dummy” agents, solve the resulting balanced assignment problem, and then only retain the assignments to the actual agents. If each agent can perform up to M tasks, the textbook solution is then to create M copies of each agent prior to using the Hungarian algorithm. This approach, however, may not lead to the expected solution. Consider a problem with five tasks and three agents, with each agent allowed to perform up to three tasks. Creating three copies of each agent and then solving the corresponding unbalanced assignment problem may lead to one agent performing two tasks, a second agent performing three tasks, and the last agent not performing any task, possibly violating a constraint that each agent needs to perform at least one task. Several methods have been proposed to circumvent this problem, either by dividing the problem into multiple subproblems [10,11] or by using random algorithms such as ant colony algorithms [12,13]. It is unclear whether these algorithms can identify the optimal solution (they have been shown to fail in some cases; see, for example, [14]) or how they scale for large problems.
In this paper, we propose a radically different approach to solving a general class of assignment problems using continuous systems. Our approach is motivated by statistical physics. It is a full generalization of the method developed in a previous paper to solve balanced assignment problems [15]. In the following, we will use the generic term “multi-assignment” to represent an assignment problem with agents possibly performing multiple tasks, or with tasks assigned to multiple agents. In this paper, we aim to
- Introduce and substantiate a continuous framework for addressing potentially multifaceted assignment problems involving multiple tasks and/or multiple agents, leveraging principles from statistical physics;
- Demonstrate that, under generic circumstances where the multi-assignment problem possesses a unique solution, the aforementioned framework ensures convergence to that solution with arbitrarily high accuracy, in terms of both cost (referred to as energy) and assignment matrix;
- Present a modified approach capable of identifying at least one solution for degenerate multi-assignment problems that feature multiple solutions;
- Demonstrate that the implementation of this framework can be efficiently parallelized.
We emphasize that this formalism is not a mere adaptation but a full generalization of the framework we developed for solving assignment problems [16]. In particular, the new features of this paper are as follows:
A method to account for the fact that each task can be assigned to a variable number of agents and, on the other hand, that each agent can be assigned a variable number of tasks. To that end, we introduce indicator functions over the sets of tasks and agents that are optimized along with the transportation plan.
The establishment of proofs of validity and convergence of our algorithm for those general multi-assignment problems. The main results are provided in the text, while the proofs themselves are relegated to the Appendices for more clarity. For the balanced assignment problem, these proofs rely heavily on the fact that the corresponding transportation matrices are permutation matrices that are the extreme points of the well-characterized convex set of doubly stochastic matrices (according to the Birkhoff–von Neumann theorem [17,18]). For more general multi-assignment problems, the transportation matrices also belong to a convex set, with properties akin to the Birkhoff–von Neumann theorem (these properties are discussed in Appendix A). The use of these properties to derive the convergence of our algorithm for solving multi-assignment problems is new.
The paper is organized as follows. In Section 2, Section 3, and Section 4, we describe in detail the general framework we propose for solving multi-assignment problems. Proofs of all important properties of this framework are provided in the appendices. Section 5 briefly describes the implementation of the method in a C++ program, Matching V1.0. In Section 6, we present some applications, along with a comparison to current algorithms.
2. A General Formulation of the Assignment Problem
We consider two sets of points, and , with cardinalities and , respectively. We represent the cost of transportation between and as a matrix C whose elements are positive, for all . We set the number of assignments between points in and to be a strictly positive integer number k, a constant given as input to the problem. A point i is assigned to points in , with belonging to , where and are non-negative integers satisfying . Similarly, a point j is assigned to points in , with belonging to , where and are non-negative integers satisfying . Using the traditional “task–agent” terminology for the formulation of an assignment problem, represents the tasks, and the agents. is the number of agents needed to perform task i, while is the number of tasks that can be performed by agent j. and represent the minimum and maximum number of agents that are assigned to a task i, respectively, while and represent the minimum and maximum number of tasks that can be assigned to an agent j, respectively. We use a general definition of these boundaries, in that we allow each task and each agent to have their own limits. The multi-assignment problem is then framed as the search for an assignment matrix G that defines the correspondence between points in and points in . This matrix is found by minimizing the assignment cost U, defined as
U(G) = \sum_{i} \sum_{j} C(i,j)\, G(i,j) \qquad (2)
The summations encompass all i in and j in . The objective is to locate the minimum of U given the values of G(i,j) that adhere to the following constraints:
(3)
where k is the actual number of task–agent pairs. The numbers , , , , and k are problem-specific and given. They do satisfy some constraints, such as and , as described above, as well as and . We store those numbers in four vectors, , , , . Equation (3) recaps most of the standard linear assignment problems:
(i). If and for all i and j, we recover the balanced assignment problem,
(ii). If and for all i and j, the equations correspond to the k-cardinality assignment problem [19,20,21].
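For concreteness, one possible explicit form of the constraint set of Equation (3) is sketched below. The symbols are introduced here for illustration and may differ from the paper's own notation: T and A denote the sets of tasks and agents, m_i the number of agents assigned to task i, and n_j the number of tasks assigned to agent j.

\[
\begin{aligned}
&\sum_{j \in A} G(i,j) = m_i, \qquad m_i^{\min} \le m_i \le m_i^{\max}, \qquad i \in T,\\
&\sum_{i \in T} G(i,j) = n_j, \qquad n_j^{\min} \le n_j \le n_j^{\max}, \qquad j \in A,\\
&\sum_{i \in T} \sum_{j \in A} G(i,j) = k, \qquad G(i,j) \in \{0,1\}.
\end{aligned}
\]

In this form, case (i) corresponds to forcing every m_i and n_j to equal 1, while case (ii) allows every m_i and n_j to be 0 or 1 with their total fixed to k.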
In Equation (3), , and are unknown, defining a total of variables. The solution to the general assignment problem is given by the functions and on and , respectively, which identify which tasks and which agents are involved in the optimal assignment, the transportation matrix that defines these correspondences, and the minimum assignment cost .
Optimizing Equation (2) subject to the constraints Equation (3) constitutes a discrete optimization challenge, specifically an integer linear programming problem. To address this challenge, we employ a statistical physics methodology, transforming it into a temperature-dependent problem involving real variables. The integer optimal solution is then obtained as the temperature approaches zero.
2.1. Effective Free Energy for the General Assignment Problem
Solving the multi-assignment problem entails determining the minimum of the function defined in Equation (2) across the potential mappings between the two discrete sets of points under consideration. Identifying this minimum is tantamount to identifying the most probable state of the system it characterizes. This “system” encompasses the various binary transportation plans, also referred to as assignment matrices, between and that adhere to the constraints in Equation (3). Each state within this system corresponds to an assignment matrix G with its associated energy as defined in Equation (2). The probability linked with an assignment matrix G is given by
P(G) = \frac{e^{-\beta U(G)}}{Z_\beta} \qquad (4)
In this equation, β is the inverse temperature, namely β = 1/(k_B T), where k_B is the Boltzmann constant and T is the temperature, and Z_β is the partition function computed over all states of the system. As this system is defined by the assignment matrices G and by the marginals of G, , and , the partition function is equal to
Z_\beta = \sum_{\text{states}} e^{-\beta U(G)} = e^{-\beta F(\beta)} \qquad (5)
where F(β) is the free energy of the system. The practicality of this free energy is limited due to the inability to compute it explicitly. We develop a method for approximating it through the saddle point approximation technique. Taking into account the constraints in Equation (3), the partition function can be written as
(6)
The three sums impose that , , and take integer values within some ranges defined by the constraints in Equation (3). The additional constraints are imposed through the delta functions. We employ the Fourier representation of those delta functions, thereby introducing additional auxiliary variables x, , and , with and . The partition function is then given, after reorganization (and up to a multiplicative constant), by
(7)
where is the imaginary unit (). Note that we have rescaled the variables x, , and by a factor for consistency with the energy term. Conducting the summations over the variables , , and , we get
where is a functional, or effective free energy, that depends on the variables , , and x and is defined by
(8)
In contrast to the expression of the internal energy U defined in Equation (2) that depends on the constrained binary variables , , and , the expression for the effective free energy depends only on unconstrained variables, namely , , and x. Below, we demonstrate how identifying the extremum of this function enables us to address the multi-assignment problem.
2.2. Optimizing the Effective Free Energy
Let , , and represent the expected values of , , and , respectively, in accordance with the Gibbs distribution specified in Equation (4). Computing these expected values directly is unfortunately not feasible as the partition function defined in Equation (8) lacks an analytical expression. Instead, we derive a saddle point approximation (SPA) by seeking extrema of the effective free energy with respect to the variables , , and x:
(9)
After some rearrangements, these two equations can be written as
(10a)
(10b)
(10c)
where
(11)
and
Note that
- (i). If and , we recover Equation (11) from [16], corresponding to the balanced assignment problem.
- (ii). In this case, Equation (11) refers to the k-cardinality problem.
As frequently encountered, the saddle point may be purely imaginary. In this instance, it is evident from Equations (10) and (11) that the variables , , and must be real. Consequently, we will replace by below.
In order to analyze the saddle point approximation, SPA, it is necessary to verify the existence and evaluate the uniqueness of the extrema of the effective free energy. The following theorem proves that is strictly concave.
The Hessian of the effective free energy is negative definite if for all i and , for all j, and negative semidefinite otherwise. The free energy function is then strictly concave in the first case and concave in the second case.
See Appendix B. □
The following property establishes a relationship between the solutions of the SPA system of equations and the expected values for the assignment matrix and for the indicator functions:
Let be the expected state of the system at the temperature β with respect to the Gibbs distribution given in Equation (4). is associated with an expected transportation plan and expected indicator functions and . Let , , and be the solutions of the system of Equation (10). Then the following identities hold:
(12)
To indicate that the solutions are mean field solutions, we use the superscript MF.
The proof of this proposition is virtually identical to the proof of the same results for the assignment problem, found in Appendix B of [15]. □
For a given value of the parameter , and are the numbers of elements of and the number of elements in that are in correspondence with i and j, respectively, and forms a transportation plan between and that is optimal with respect to the free energy defined in Equation (8). Note that these values are mean values and fractional. The matrix belongs to a polytope, which we define as (see Appendix A for a full characterization of this polytope). We can link the mean field assignment matrix to an optimal free energy and to an optimal internal energy . These values serve as mean field approximations of the exact free energy and internal energy of the system, respectively. Here are the key properties of and :
FMF(β) and UMF(β) are monotonically increasing and monotonically decreasing functions, respectively, of the parameter β.
See Appendix C for FMF(β) and Appendix D for UMF(β). □
Theorem 1 and Proposition 2 outlined above underscore several advantages of the proposed framework, which reframes the multi-assignment problem as a temperature-dependent process. Firstly, at each temperature, the multi-assignment problem with a generic cost matrix is transformed into a concave problem with a unique solution. This problem exhibits linear complexity in the number of variables, contrasting with the quadratic complexity of the original problem. The concavity facilitates the utilization of straightforward algorithms for identifying a minimum of the effective free energy function (Equation (8)). Additionally, Equation (12) offers robust numerical stability for computing the transportation plan and the functions and , owing to the characteristics of the functions and . Lastly, the convergence with respect to temperature is monotonic.
3. Solving Generic Assignment Problems
In the previous section, we proposed an effective free energy, , that depends on unconstrained variables and on the inverse of the temperature, β, whose extremum identifies the solution of a possibly multi-job, multi-agent assignment problem. We have shown that the trajectory of this extremum is monotonic, increasing with respect to β. We now relate these values to the optimal assignment energy U*:
Let FMF(β) and UMF(β) be the mean field approximations of the exact free energy and internal energy of the system at the inverse temperature β. The optimal assignment energy U* is then given by
U^{*} = \lim_{\beta \to \infty} F^{MF}(\beta) = \lim_{\beta \to \infty} U^{MF}(\beta) \qquad (13)
namely, the optimal assignment energy, U*, is equal to the infinite limit with respect to the inverse temperature of both the mean field free energy and the internal energy. See Appendix E. □
This theorem illustrates that as the inverse temperature approaches infinity (or equivalently, as the temperature tends towards zero), the internal energy and free energy of the system converge to the optimal assignment energy. This confirms the validity of our statistical physics approach, particularly emphasizing the effectiveness of the saddle-point approximation. Note, however, that it does not define the behavior of the coupling matrix . As and , the coupling matrix at any finite temperature is fractional. We need to show that as β → ∞, the corresponding matrix does converge to the solution matrix and not to a fractional matrix that would lead to the same low energy U*.
We first establish bounds on the internal energy and free energy at the SPA. Let us define
(14)
FMF(β) and UMF(β), the mean field approximations of the exact free energy and internal energy of the system at the inverse temperature β, satisfy the following inequalities:
(15)
(16)
where U* is the optimal assignment energy and is defined in Equation (14).
See Appendix F. □
Now, let us assume that the multi-assignment problem linked with the cost matrix C possesses a unique optimal assignment matrix, denoted as . We have the following theorem:
Let be the coupling matrix solution of the SPA equations at the inverse temperature β. We assume that the multi-assignment problem has a unique solution, with being the associated coupling matrix. If Δ is the difference in total cost between the optimal solution and the second-best solution, then
(17)
See Appendix G. □
This theorem validates that, in the generic case in which the solution to the assignment problem is unique, the converged solution matrix is this unique solution.
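In compact form, writing G* for the unique optimal assignment matrix and GMF(β) for the SPA solution (the notation is ours), Theorem 4 implies

\[
\lim_{\beta \to \infty} G^{MF}(\beta) = G^{*},
\]

which is what justifies the rounding of the converged matrix to the nearest integers in the implementation described in Section 5.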
4. Solving Degenerate Assignment Problems
The method we propose is basically a relaxation approach to the general assignment problem. Indeed, we build a collection of transportation matrices that belong to (see Appendix A). Entries of these matrices are non-integer, strictly in the interval (0, 1). If the general assignment problem is known to have a unique integer solution, we have shown that these matrices converge to an element of when β → ∞. The question remains as to what happens when the problem is degenerate, i.e., when it may have multiple integer solutions.
The general assignment problem is a linear programming problem. Checking if such a problem is degenerate is unfortunately often NP-complete ([22,23]). The degeneracies occur due to the presence of cycles in the linear constraints, i.e., in the cost matrix for the assignment problem. If this is the case, we propose randomly perturbing that matrix to bring it back to the generic problem. Megiddo and colleagues [24] have shown that an ε-perturbation of a degenerate linear programming problem reduces this problem to a non-degenerate one.
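As a minimal sketch of this ε-perturbation step (our own illustration; the noise amplitude, the random number generator, and the function name are assumptions rather than details taken from the paper):

```cpp
// Add small uniform noise in (-eps, +eps) to every cost entry so that
// otherwise equal-cost assignments are no longer exactly tied.
#include <random>
#include <vector>

void perturbCosts(std::vector<std::vector<double>>& C, double eps, unsigned seed = 0)
{
    std::mt19937 gen(seed);
    std::uniform_real_distribution<double> noise(-eps, eps);
    for (auto& row : C)
        for (double& c : row)
            c += noise(gen);   // breaks exact ties between degenerate optimal assignments
}
```

The amplitude eps should be kept small enough that the perturbation breaks ties without reordering solutions whose costs genuinely differ.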
5. Matching: A Program for Solving Assignment Problems
We have implemented the multi-assignment framework described here in a C++ program, Matching, that is succinctly described in Algorithm 1.
Matching relies on an iterative process in which the parameter β is gradually increased. At each value of β, the nonlinear system of equations defined by Equation (10) is solved. We write this system as
where is a vector of predicates defined as
This system has equations, with the same number of variables. It is solved using an iterative Newton–Raphson method (for details, see, for example, [16,25]). Once the SPA system of equations is solved, the assignment matrix , the functions , , and the corresponding transportation energy are computed. The iterations over β are stopped if the mean field energy does not change anymore (within a tolerance TOL usually set to ). Otherwise, β is increased, and the current values of , , and x are used as input for the following iteration. At convergence, the values of the assignment matrix are rounded to the nearest integer (indicated as in the output of Algorithm 1). The minimal energy is then computed using the corresponding integer matrix.
The primary computational expense of this algorithm arises from solving the nonlinear set of equations corresponding to the SPA at every value. We use for this purpose an iterative Newton–Raphson method (see [15]).
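For reference, each Newton–Raphson step on the SPA system updates all unknowns simultaneously; writing the system of Equation (10) generically as E(y) = 0, with y collecting all the variables (the vector notation is ours),

\[
y^{(t+1)} = y^{(t)} - J\!\left(y^{(t)}\right)^{-1} E\!\left(y^{(t)}\right),
\]

where J is the Jacobian of E. In practice, the linear system J δ = E(y^{(t)}) is solved for the update δ rather than forming the inverse explicitly.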
Algorithm 1 Matching: a temperature-dependent framework for solving the multi-assignment problem
As for any annealing scheme, the initial temperature, or, in our case, the initial value β0, is a parameter that significantly impacts the efficiency of the algorithm. Setting β0 too small (i.e., a high initial temperature) will lead to inefficiency, as the system will spend a significant amount of time at high temperatures, while setting β0 too high will require many steps to converge at the corresponding low temperature, thereby decreasing the efficiency brought about by annealing. The value of β0 scales the cost matrix C and as such is related to the range of this matrix, more specifically to its largest value, Cmax. We found that setting β0 inversely proportional to Cmax provides satisfactory annealing efficiency, with the proportionality constant chosen empirically.
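To make the overall annealing scheme concrete, the self-contained sketch below treats only the balanced special case and replaces the Newton–Raphson SPA solver of Matching with simple Sinkhorn-style row and column normalizations, in the spirit of the invisible-hand algorithm [35]. The cost matrix, the choice of β0 as 1/Cmax, the annealing rate, the iteration counts, and the tolerance are all illustrative assumptions rather than values taken from the paper.

```cpp
// Minimal sketch of the outer annealing loop, for the *balanced* special case
// only, using Sinkhorn-style normalisation instead of the Newton-Raphson SPA
// solver of Matching. All numerical choices below are illustrative.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

int main()
{
    const std::vector<std::vector<double>> C = {   // hypothetical 3x3 cost matrix
        {4.0, 1.0, 3.0},
        {2.0, 1.0, 5.0},
        {3.0, 2.0, 2.0}};
    const int N = static_cast<int>(C.size());

    double cmax = 0.0;
    for (const auto& row : C)
        for (double c : row) cmax = std::max(cmax, c);

    double beta = 1.0 / cmax;                      // start at a high temperature
    const double betaMax = 200.0 / cmax;           // stop at a low temperature
    const double rate = 1.5, TOL = 1e-9;

    std::vector<std::vector<double>> G(N, std::vector<double>(N, 1.0 / N));
    double Uprev = 1e300;

    while (beta < betaMax) {
        // At this temperature, iterate row/column normalisations of exp(-beta*C)
        // so that G approaches a doubly stochastic matrix.
        for (int i = 0; i < N; ++i)
            for (int j = 0; j < N; ++j) G[i][j] = std::exp(-beta * C[i][j]);
        for (int it = 0; it < 500; ++it) {
            for (int i = 0; i < N; ++i) {                       // rows sum to 1
                double s = 0.0;
                for (int j = 0; j < N; ++j) s += G[i][j];
                for (int j = 0; j < N; ++j) G[i][j] /= s;
            }
            for (int j = 0; j < N; ++j) {                       // columns sum to 1
                double s = 0.0;
                for (int i = 0; i < N; ++i) s += G[i][j];
                for (int i = 0; i < N; ++i) G[i][j] /= s;
            }
        }
        double U = 0.0;                                         // mean-field energy
        for (int i = 0; i < N; ++i)
            for (int j = 0; j < N; ++j) U += C[i][j] * G[i][j];
        if (std::fabs(U - Uprev) < TOL) break;                  // energy has converged
        Uprev = U;
        beta *= rate;                                           // anneal: increase beta
    }

    // Round the nearly binary matrix to recover the integer assignment.
    double total = 0.0;
    for (int i = 0; i < N; ++i)
        for (int j = 0; j < N; ++j)
            if (G[i][j] > 0.5) {
                total += C[i][j];
                std::printf("task %d -> agent %d\n", i, j);
            }
    std::printf("total cost = %g\n", total);
    return 0;
}
```

In Matching itself, the inner solver is the Newton–Raphson iteration on Equation (10), warm-started from the solution obtained at the previous value of β, and the normalization step above is replaced by the general constraints of Equation (3).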
6. Examples
The framework we propose is general: it allows us to solve balanced and unbalanced assignment problems, including those that allow for multi-agent or multi-task assignments. We illustrate our method on the latter types of problems, as, to our knowledge, there are currently no solutions to those that are guaranteed to reach the optimal assignment.
As a first illustration of our framework, we ran Matching on the cost matrix given in Table 1. This matrix has been used in previous studies of multi-agent assignment problems [10,11,13,14,26,27]. Matching was run with the following parameters: the minimum number of tasks per agent set to 0 for all agents (i.e., some agents may stay idle), the maximum number of tasks per agent set to 4 for all agents (i.e., an agent can perform up to 4 tasks), exactly one agent per task (i.e., a task is only performed by one agent), and k = 8 (all tasks are performed). In Figure 1, we show the corresponding trajectories of the internal energy UMF(β) and free energy FMF(β).
As expected, the internal energy is monotonically decreasing while the free energy is monotonically increasing, and both converge to the same value, 1440. Theorem 3 provides bounds on those energy values; note that they can be rewritten as follows.
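(In the form below, S(β) denotes the entropy at the SPA and Smax the bound defined in Equation (14); these symbols are assumed here, so the exact expression may differ from the paper's Equations (15) and (16).)

\[
F^{MF}(\beta) \;\le\; U^{*} \;\le\; U^{MF}(\beta),
\qquad
U^{MF}(\beta) - F^{MF}(\beta) \;=\; \frac{S(\beta)}{\beta} \;\le\; \frac{S_{\max}}{\beta},
\]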
i.e., at each value of β, we have bounds based on internal energy and free energy for the actual optimal cost of the assignment problem. These bounds are shown as shaded areas in Figure 1. Note that the widths of the intervals defined by the bounds are inversely proportional to β and therefore decrease monotonically as β increases. In Table 2, we compare the assignments found by Matching on C1 with the maximum number of tasks per agent set to 4, and the minimum set to 0 (agents can be idle) or to 1 (each agent must execute at least one task). The removal of the latter constraint leads to a better optimal assignment in which agent 3 remains idle. When we maintain this constraint, our results are the same as those found by Wang et al. [13].
As a second, more general test of our method, we compared the results of the framework we propose with the results of our own version of the Hungarian algorithm, as well as with those found in previous studies. We used both the matrix C1 (Table 1) and the matrix C2 (Table 3) in those computing experiments.
We ran Matching as follows. For all tasks j, we set , i.e., a task is only performed by one agent, and all tasks are expected to be executed. The latter is further enforced by setting k to be the total number of tasks (8 for C1 and 10 for C2). For an agent i, we either set all to be 1 (in which case all agents must perform at least one task) or 0 (in which case some agents may not be assigned). We considered all to be equal to a given value p and ran Matching with p varying from 1 to k. Note that both matrices C1 and C2 contain cycles. As such, optimal assignments based on those matrices are not unique. To remove the degeneracy, we added small random uniform noise to all values of the cost matrices (see Section 4 for details).
The Hungarian algorithm remains a standard for solving balanced as well as unbalanced assignment problems. It needs to be adapted, however, when solving multi-task assignment problems. The corresponding textbook solution that was used, for example, in Ref. [14] is to make copies or clones of the agents, solve the augmented unbalanced assignment problem using the Hungarian algorithm, and then regroup all clones of an agent to define the tasks that it is assigned to. We consequently created multiple versions of the cost matrices C1 and C2, with all agents copied p times, where p defines the maximum number of tasks that an agent can perform. We ran our own version of the Hungarian algorithm on those matrices. This version is adapted from the serial code written by C. Ma, which is publicly available.
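As an illustration of this cloning procedure (our own sketch, not the code used in the paper), the function below builds the augmented cost matrix in which each agent is copied p times and, if needed, dummy tasks of zero cost pad the matrix to a square shape; a standard Hungarian solver is then run on the result, and the clones of each agent are regrouped afterwards.

```cpp
// Sketch of the "textbook" reduction described above (not the paper's code):
// each agent is cloned maxTasksPerAgent times (p in the text) and, if needed,
// dummy tasks of zero cost pad the matrix to a square shape. A standard
// Hungarian solver can then be run on the result, and the clones of each
// agent regrouped afterwards.
#include <algorithm>
#include <vector>

std::vector<std::vector<double>> buildClonedCostMatrix(
    const std::vector<std::vector<double>>& C,  // C[i][j]: cost of task i performed by agent j
    int maxTasksPerAgent)
{
    const int nTasks  = static_cast<int>(C.size());
    const int nAgents = static_cast<int>(C[0].size());
    const int nClones = nAgents * maxTasksPerAgent;     // each agent appears this many times
    const int n = std::max(nTasks, nClones);            // final square size
    std::vector<std::vector<double>> A(n, std::vector<double>(n, 0.0));
    for (int i = 0; i < nTasks; ++i)
        for (int j = 0; j < nClones; ++j)
            A[i][j] = C[i][j / maxTasksPerAgent];       // consecutive columns are clones of the same agent
    // Rows i >= nTasks (dummy tasks) and columns j >= nClones keep a cost of 0.
    return A;
}
```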
Comparisons of the results of Matching, of the Hungarian algorithm, and of previous studies are provided in Table 4 for the cost matrix C1 and in Table 5 for the cost matrix C2.
The results of Matching have been proven to be optimal (see Section 3). As expected, those results are strongly dependent on the constraints imposed on the problem: we can find assignments with a lower total cost if we allow agents to remain idle by setting the minimum number of tasks per agent to 0. For example, for the cost matrix C1, in the extreme case with this minimum set to 0 and the maximum number of tasks per agent set to 8, we find that all eight tasks have been assigned to agent 5, while the other agents have no task to perform, with a total cost of 1400. If instead we set the minimum to 1 and keep the maximum at 8, we find that agent 5 is assigned tasks 1, 2, 5, and 6; agent 1 is assigned task 3; agent 2 is assigned task 8; agent 3 is assigned task 4; and agent 4 is assigned task 7, for a total cost of 1450. If we instead run the Hungarian algorithm with agents that have been cloned to allow for multi-task assignments, we find optimal assignments that match those found by Matching with the minimum set to 0 for both cost matrices. For matrix C1, for example, if each agent is represented with 8 clones, we find that agent 5 is assigned tasks 1, 2, 4, 5, 6, 7, and 8 while agent 1 is assigned task 3, for a total cost of 1400. This illustrates two points. First, as expected, the optimal assignment is not unique, as we find two different assignments with the same cost, 1400. Second, and more importantly, this shows the difficulty of applying the Hungarian algorithm to such problems. It works fine if we want to find the optimal assignment without consideration of constraints, such as the constraint that each agent needs to perform at least one task, but does not fit if such a constraint is necessary.
The optimal solutions found by Matching when the minimum number of tasks per agent is set to 1 match those found by Wang et al. [13] for both matrices and are better than those found by Majumdar et al. [26] for matrix C2. Note that the latter are obtained using either an ant colony algorithm or a genetic algorithm. Both are probabilistic algorithms that do not guarantee that the true minimum is found.
7. Conclusions
We have developed a general framework for solving balanced and unbalanced and constrained and unconstrained assignment problems. Given two sets of points and with the possibly different cardinalities and , constraints on the number of possible assignments for each element of and of , and a cost matrix defining the penalties associated with each assignment, we have constructed a concave free energy parameterized by temperature that captures those constraints. The definition of this free energy is general enough that it includes balanced and unbalanced cases. Its extremum establishes an optimal assignment between the two sets of points. We have demonstrated that this free energy converges monotonically, as a function of β (the inverse of the temperature), towards the optimal assignment cost. Moreover, we have established that for sufficiently large values of β, the precise solution to the generic multi-assignment problem can be directly derived through straightforward rounding to the nearest integer of the elements of the assignment matrix linked to this extremum. Additionally, we have developed a method that guarantees convergence for addressing degenerate assignment problems.
The formalism introduced in this paper was designed to generalize the Hungarian algorithm to a large range of assignment problems. We expect it to be useful for a much larger set of problems, especially those associated with graphs. Graphs capture data relationships and, as such, are essential to numerous applications. They are particularly useful in domains such as Web search [28], neural [29] and social network analysis [30], gene networks [31], etc. More generally, they are designed to represent complex knowledge. The scale of modern graphs leads to a need to develop more efficient methods to process very large graphs. The formalism we propose is well adapted to tackle this problem for applications such as bipartite or simple graph matching. We will also consider extensions to higher-order matching problems, such as k-matching problems [32] (for example, the three-index assignment problem [33]) that are known to be NP-complete [34]. These problems have their own applications for graph analyses.
Conceptualization, P.K. and H.O.; methodology, P.K. and H.O.; software, P.K.; validation, P.K.; formal analysis, P.K. and H.O.; investigation, P.K. and H.O.; writing—original draft preparation, P.K. and H.O.; writing—review and editing, P.K. and H.O. All authors have read and agreed to the published version of the manuscript.
No new data were created for this study.
The work discussed here originated from visits by P.K. at the Institut de Physique Théorique, CEA Saclay, France, during the falls of 2022 and 2023. He thanks them for their hospitality and financial support.
The authors declare no conflict of interest.
Footnotes
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Figure 1. Convergence of the internal energy UMF(β) (A) and free energy FMF(β) (B) as a function of β when solving the multi-task assignment problem with the cost matrix C1, such that each agent can perform up to 4 tasks, may be idle, and all 8 tasks are performed. On both panels, we show the bounds on the expected value for the optimal cost U* as shaded areas (see text for details).
Table 1. The cost matrix C1.

| Agents | Task 1 | Task 2 | Task 3 | Task 4 | Task 5 | Task 6 | Task 7 | Task 8 |
|---|---|---|---|---|---|---|---|---|
| Agent 1 | 300 | 250 | 180 | 320 | 270 | 190 | 220 | 260 |
| Agent 2 | 290 | 310 | 190 | 180 | 210 | 200 | 300 | 190 |
| Agent 3 | 280 | 290 | 300 | 190 | 190 | 220 | 230 | 260 |
| Agent 4 | 290 | 300 | 190 | 240 | 250 | 190 | 180 | 210 |
| Agent 5 | 210 | 200 | 180 | 170 | 160 | 140 | 160 | 180 |
Table 2. Assignments for the cost matrix C1 with up to 4 tasks per agent, found by Matching with agents allowed to be idle, by Matching with at least one task per agent, and by the ant colony algorithm of Wang et al.

| | Matching (idle agents allowed) | Matching (at least one task per agent) | Wang a |
|---|---|---|---|
| Agent 1 | | | |
| Agent 2 | | | |
| Agent 3 | | | |
| Agent 4 | | | |
| Agent 5 | | | |
| Total cost | 1440 | 1450 | 1450 |

a Assignment based on an ant colony algorithm [13].
Table 3. The cost matrix C2.

| Agents | Task 1 | Task 2 | Task 3 | Task 4 | Task 5 | Task 6 | Task 7 | Task 8 | Task 9 | Task 10 |
|---|---|---|---|---|---|---|---|---|---|---|
| Agent 1 | 10 | 2 | 14 | 9 | 6 | 7 | 21 | 32 | 18 | 11 |
| Agent 2 | 7 | 12 | 9 | 3 | 5 | 6 | 9 | 16 | 54 | 12 |
| Agent 3 | 4 | 8 | 6 | 12 | 21 | 9 | 21 | 14 | 45 | 13 |
| Agent 4 | 21 | 9 | 12 | 9 | 32 | 10 | 19 | 25 | 16 | 10 |
| Agent 5 | 10 | 12 | 30 | 15 | 12 | 17 | 30 | 12 | 12 | 9 |
| Agent 6 | 15 | 7 | 34 | 17 | 7 | 16 | 14 | 17 | 9 | 5 |
Table 4. Solving the multi-task assignment problems defined by the cost matrix C1 for increasing values of the maximum number of tasks per agent, p a. Each entry is the total assignment cost.

| Method | p = 2 | p = 3 | p = 4 | p = 5 | p = 6 | p = 7 | p = 8 |
|---|---|---|---|---|---|---|---|
| Matching, at least one task per agent | 1520 | 1470 | 1450 | 1450 | 1450 | 1450 | 1450 |
| Matching, idle agents allowed | 1520 | 1470 | 1440 | 1420 | 1410 | 1400 | 1400 |
| Hungarian | 1520 | 1470 | 1440 | 1420 | 1410 | 1400 | 1400 |
| Majumdar GA b | 1520 | 1470 | 1450 | NA c | NA | NA | NA |
| Wang AC d | 1520 | 1470 | 1450 | NA | NA | NA | NA |

a Each agent is allowed to execute up to p tasks. Note that as there are 8 tasks and 5 agents only, the minimum value of p is 2 if all the tasks must be executed. b Genetic algorithm [26]. c NA: not available. d Ant colony algorithm [13].
Table 5. Solving the multi-task assignment problems defined by the cost matrix C2 for increasing values of the maximum number of tasks per agent, p a. Each entry is the total assignment cost.

| Method | p = 2 | p = 3 | p = 4 | p = 5 | p = 6 | p = 7 | p = 8 |
|---|---|---|---|---|---|---|---|
| Matching, at least one task per agent | 66 | 65 | 65 | 65 | 65 | 65 | 65 |
| Matching, idle agents allowed | 66 | 62 | 61 | 61 | 61 | 61 | 61 |
| Hungarian | 66 | 62 | 61 | 61 | 61 | 61 | 61 |
| Majumdar GA b | 66 | 67 | 66 | 74 | NA c | NA | NA |
| Wang AC d | 66 | 65 | 65 | NA | NA | NA | NA |

a Each agent is allowed to execute up to p tasks. Note that as there are 10 tasks and 6 agents only, the minimum value of p is 2 if all the tasks must be executed. b Genetic algorithm [26]. c NA: not available. d Ant colony algorithm [13].
Appendix A. A Generalized Birkhoff–von Neumann Theorem
An
An
where
Doubly stochastic matrices and their properties associated with the Birkhoff–von Neumann theorem above proved useful to establish convergence properties of statistical physics frameworks for solving the balanced assignment problem [
For the more general assignment problem considered here, however, we need to consider a different but related set of matrices, defined as follows. Let N and M be two natural numbers. Let
Let A be a non-negative matrix of size
The transportation polytope
Denote
Relaxed solutions to the general assignment problem considered in this paper belong to
Theorem A2 can be stated as any matrix in
Remark A1.
-
If
, (where and are vectors of one of size N and M, respectively, , is the set of doubly stochastic matrices, is the set of permutation matrices, and Theorem A2 is then equivalent to the Birkhoff–von Neumann theorem for doubly stochastic matrices [ 17 ,18 ]. -
If
, , , , is the set of doubly sub-stochastic matrices with sum k and is the set of sub-permutation matrices of rank k; a specific version of Theorem A2 was established by Mendelssohn and Dulmage for square matrices ([ 37 ]), and later by Brualdi and Lee for rectangular matrices ([38 ]).
Appendix B. Proof of Theorem 1: Concavity of the Effective Free Energy
The free energy associated with the multi-assignment problem can be written as
We prove that this free energy is weakly concave by showing that its Hessian
In [
In
We define the matrix
From Equation (10), we obtain
Let
As
To check for definiteness, let us note first that as
For the two other terms, we consider two cases:
and . Then and . This leads to : Equations (
A3 ) and (A4 ) are satisfied when all, , and when are zero, namely that . Therefore, in this case, H is negative definite, and the free energy is strictly concave. For all i
or for all j . Then, either or , or both. There are then non-zero solutions to the equation . The Hessian is then only semidefinite.
Appendix C. Monotonicity of
The effective free energy
The effective free energy is then
As written above,
Let
Appendix D. Monotonicity of
Let
Before computing
The proof of this proposition is virtually identical to the proof of the same results for the assignment problem, found in Appendix C of [
Based on the chain rule,
Using Proposition A1, we find that the partial derivatives of
Using again Proposition A1, we obtain
Using Equation (
Note that
Appendix E. Proof of Theorem 2: Convergence of the Mean Field Free Energy and the Internal Energy to the Optimal Assignment Cost
For simplicity in notation, we define
Appendix E.1. Defining Entropy
Recall that we have defined
The free energy is given by
By adding and subsequently subtracting the internal energy in the equation defining the free energy, and then replacing
Then,
The form of the free energy given in Equation (
Let us find the bound for the entropy term.
Then, the entropy satisfies the following constraints:
Appendix E.2. FMF(∞) = UMF(∞)
Using Equation (
Taking the limits when
□
Appendix E.3. U* ≤ FMF(∞)
Let
As
Therefore
Appendix E.4. U* ≥ FMF(∞)
Let us first recall the definition of the free energy,
Note these two properties of limits:
Let us consider now a matrix
As
We multiply this equation separately for positive and negative
Therefore,
Similarly, we can show that
Combining Equations (
Equation (
As this equation is true for all
□
We have shown that
Appendix F. Proof of Theorem 3: Bounds on the Internal Energy and Free Energy
Appendix F.1. Bounds on the Free Energy
Proposition A1 from
Using this equation and the relationship between free energy, energy, and entropy at SPA (see Equation (
From the bounds on the entropy (Equation (
By integrating over
Finally, as
Appendix F.2. Bounds on the Energy
As
In addition,
Appendix G. Proof of Theorem 4: Bounds on the Transportation Matrix
This proof is inspired by the proof of Theorem 6 in Appendix 2 of [
We first recall that
We assume that the general assignment problem considered has a unique solution. We want to prove that
Let us denote
In the first case,
Since
In the second case,
Again, as
In conclusion, in both cases, we have
Now, let us look at the energy associated with
In Theorem 3, we have shown that
Therefore,
We have shown that
Appendix H. Properties of the General Function G(x,a,b) and Its Derivatives
Let
where a and b are two positive integers with
-
. -
. -
.
Note that in Theorem A3, we consider a, b non-negative, with
Appendix H.1. The Function G(x,a,b)
Note first that
Appendix H.2. g(x,a,b) and Its Derivative g ′ (x,a,b)
Note that
We have
Figure A1. Examples of g(x,a,b) (A) and its derivative g′(x,a,b) (B) for three different values of (a,b) (black, red, and blue curves).
We now prove property 1 of Theorem A3.
Let us define
From this formulation, it is clear that
As
Appendix H.3. The Function l(x,a,b) = G(x,a,b)-xg(x,a,b)
Recall that
We now prove property 2 of Theorem A3.
We prove first that
Therefore,
Let now x be a positive real number. Then,
We know that
Appendix H.4. The Function s(x,a,b) Is Negative
We now prove property 3 of Theorem A3.
Recall that
Therefore,
We know that
References
1. Dell’Amico, M.; Toth, P. Algorithms and codes for dense assignment problems: The state of the art. Discret. Appl. Math.; 2000; 100, pp. 17-48. [DOI: https://dx.doi.org/10.1016/S0166-218X(99)00172-9]
2. Pentico, D.W. Assignment problems: A golden anniversary survey. Eur. J. Oper. Res.; 2007; 176, pp. 774-793. [DOI: https://dx.doi.org/10.1016/j.ejor.2005.09.014]
3. Burkard, R.; Dell’Amico, M.; Martello, S. Assignment Problems; Society for Industrial and Applied Mathematics (SIAM): Philadelphia, PA, USA, 2009.
4. Dantzig, G. Origins of the simplex method. A History of Scientific Computing; Association for Computing Machinery: New York, NY, USA, 1990; pp. 141-151.
5. Jacobi, C. De investigando ordine systematis aequationum differentialum vulgarium cujuscunque. J. Reine Angew. Math.; 1890; 94, pp. 292-320.
6. Kuhn, H. The Hungarian method for the assignment problem. Nav. Res. Logist.; 1955; 2, pp. 83-97. [DOI: https://dx.doi.org/10.1002/nav.3800020109]
7. Date, K.; Nagi, R. GPU-accelerated Hungarian algorithms for the Linear Assignment Problem. Parallel Comput.; 2016; 57, pp. 52-72. [DOI: https://dx.doi.org/10.1016/j.parco.2016.05.012]
8. Lopes, P.A.; Yadav, S.S.; Ilic, A.; Patra, S.K. Fast block distributed CUDA implementation of the Hungarian algorithm. J. Parallel Distrib. Comput.; 2019; 130, pp. 50-62. [DOI: https://dx.doi.org/10.1016/j.jpdc.2019.03.014]
9. Yadav, S.S.; Lopes, P.A.C.; Ilic, A.; Patra, S.K. Hungarian algorithm for subcarrier assignment problem using GPU and CUDA. Int. J. Commun. Syst.; 2019; 32, e3884. [DOI: https://dx.doi.org/10.1002/dac.3884]
10. Kumar, A. A modified method for solving the unbalanced assignment problems. Appl. Math. Comput.; 2006; 176, pp. 76-82. [DOI: https://dx.doi.org/10.1016/j.amc.2005.09.056]
11. Yadaiah, V.; Haragopal, V. A New Approach of Solving Single Objective Unbalanced Assignment Problem. Am. J. Oper. Res.; 2016; 6, pp. 81-89. [DOI: https://dx.doi.org/10.4236/ajor.2016.61011]
12. Costa, D.; Hertz, A. Ants can color graphs. J. Oper. Res. Soc.; 1997; 48, pp. 295-305. [DOI: https://dx.doi.org/10.1057/palgrave.jors.2600357]
13. Wang, L.; He, Z.; Liu, C.; Chen, Q. Graph based twin cost matrices for unbalanced assignment problem with improved ant colony algorithm. Results Appl. Math.; 2021; 12, 100207. [DOI: https://dx.doi.org/10.1016/j.rinam.2021.100207]
14. Betts, N.; Vasko, F.J. Solving the unbalanced assignment problem: Simpler is better. Am. J. Oper. Res.; 2016; 6–9, 296. [DOI: https://dx.doi.org/10.4236/ajor.2016.64028]
15. Koehl, P.; Orland, H. Fast computation of exact solutions of generic and degenerate assignment problems. Phys. Rev. E; 2021; 103, 042101. [DOI: https://dx.doi.org/10.1103/PhysRevE.103.042101] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/34005932]
16. Koehl, P.; Delarue, M.; Orland, H. Physics approach to the variable-mass optimal-transport problem. Phys. Rev. E; 2021; 103, 012113. [DOI: https://dx.doi.org/10.1103/PhysRevE.103.012113]
17. Birkhoff, G. Tres observaciones sobre el algebra lineal. Univ. Nac. Tucuman. Ser. A; 1946; 5, pp. 147-154.
18. Von Neumann, J. A certain zero-sum two-person game equivalent to the optimal assignment problem. Contrib. Theory Games; 1953; 2, pp. 5-12.
19. Dell’Amico, M.; Martello, S. The k-cardinality assignment problem. Discret. Appl. Math.; 1997; 76, pp. 103-121. [DOI: https://dx.doi.org/10.1016/S0166-218X(97)00120-0]
20. Dell’Amico, M.; Lodi, A.; Martello, S. Efficient algorithms and codes for k-cardinality assignment problems. Discret. Appl. Math.; 2001; 110, pp. 25-40. [DOI: https://dx.doi.org/10.1016/S0166-218X(00)00301-2]
21. Volgenant, A. Solving the k-cardinality assignment problem by transformation. Eur. J. Oper. Res.; 2004; 157, pp. 322-331. [DOI: https://dx.doi.org/10.1016/S0377-2217(03)00205-4]
22. Chandrasekaran, R.; Kabadi, S.; Murty, K. Some NP-complete problems in linear programming. Oper. Res. Lett.; 1982; 1, pp. 101-104. [DOI: https://dx.doi.org/10.1016/0167-6377(82)90006-2]
23. Greenberg, H.J. An analysis of degeneracy. Nav. Res. Logist. Q.; 1986; 33, pp. 635-655. [DOI: https://dx.doi.org/10.1002/nav.3800330409]
24. Megiddo, N.; Chandrasekaran, R. On the ε-perturbation method for avoiding degeneracy. Oper. Res. Lett.; 1989; 8, pp. 305-308. [DOI: https://dx.doi.org/10.1016/0167-6377(89)90014-X]
25. Koehl, P.; Delarue, M.; Orland, H. Finite temperature optimal transport. Phys. Rev. E; 2019; 100, 013310. [DOI: https://dx.doi.org/10.1103/PhysRevE.100.013310] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/31499816]
26. Majumdar, J.; Bhunia, A.K. An alternative approach for unbalanced assignment problem via genetic algorithm. Appl. Math. Comput.; 2012; 218, pp. 6934-6941. [DOI: https://dx.doi.org/10.1016/j.amc.2011.12.070]
27. Rabbani, Q.; Khan, A.; Quddoos, A. Modified Hungarian method for unbalanced assignment problem with multiple jobs. Appl. Math. Comput.; 2019; 361, pp. 493-498. [DOI: https://dx.doi.org/10.1016/j.amc.2019.05.041]
28. Heist, N.; Hertling, S.; Ringler, D.; Paulheim, H. Knowledge Graphs on the Web—An Overview. arXiv; 2020; arXiv: 2003.00719
29. Veličković, P. Everything is connected: Graph neural networks. Curr. Opin. Struct. Biol.; 2023; 79, 102538. [DOI: https://dx.doi.org/10.1016/j.sbi.2023.102538] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/36764042]
30. Camacho, D.; Panizo-LLedot, A.; Bello-Orgaz, G.; Gonzalez-Pardo, A.; Cambria, E. The four dimensions of social network analysis: An overview of research methods, applications, and software tools. Inf. Fusion; 2020; 63, pp. 88-120. [DOI: https://dx.doi.org/10.1016/j.inffus.2020.05.009]
31. Ideker, T. Network genomics. Ernst Scher. Res. Found. Workshop; 2007; 61, pp. 89-115.
32. Pierskalla, W.P. The multidimensional assignment problem. Oper. Res.; 1968; 16, pp. 422-431. [DOI: https://dx.doi.org/10.1287/opre.16.2.422]
33. Balas, E.; Saltzman, M.J. Facets of the three-index assignment polytope. Discret. Appl. Math.; 1989; 23, pp. 201-229. [DOI: https://dx.doi.org/10.1016/0166-218X(89)90014-0]
34. Spieksma, F.C. Multi index assignment problems: Complexity, approximation, applications. Nonlinear Assignment Problems: Algorithms and Applications; Springer: Berlin/Heidelberg, Germany, 2000; pp. 1-12.
35. Kosowsky, J.; Yuille, A. The invisible hand algorithm: Solving the assignment problem with statistical physics. Neural Netw.; 1994; 7, pp. 477-490. [DOI: https://dx.doi.org/10.1016/0893-6080(94)90081-7]
36. Koehl, P. Extreme points of general transportation polytopes. arXiv; 2024; arXiv: 2404.16791
37. Mendelsohn, N.S.; Dulmage, A.L. The convex hull of sub-permutation matrices. Proc. Am. Math. Soc.; 1958; 9, pp. 253-254. [DOI: https://dx.doi.org/10.1090/S0002-9939-1958-0095128-8]
38. Brualdi, R.A.; Lee, G.M. On the truncated assignment polytope. Linear Algebra Its Appl.; 1978; 19, pp. 33-62. [DOI: https://dx.doi.org/10.1016/0024-3795(78)90004-6]
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Abstract
Linear assignment problems hold a pivotal role in combinatorial optimization, offering a broad spectrum of applications within the field of data sciences. They consist of assigning “agents” to “tasks” in a way that leads to a minimum total cost associated with the assignment. The assignment is balanced when the number of agents equals the number of tasks, with a one-to-one correspondence between agents and tasks, and it is unbalanced otherwise. Additional options and constraints may be imposed, such as allowing agents to perform multiple tasks or allowing tasks to be performed by multiple agents. In this paper, we propose a novel framework that can solve all these assignment problems employing methodologies derived from the field of statistical physics. We describe this formalism in detail and validate all its assertions. A major part of this framework is the definition of a concave effective free energy function that encapsulates the constraints of the assignment problem within a finite temperature context. We demonstrate that this free energy monotonically decreases as a function of a parameter
1 Department of Computer Science, University of California, Davis, CA 95616, USA
2 CNRS, CEA, Institut de Physique Théorique, Université Paris-Saclay, 91191 Gif-sur-Yvette, France;