Published for SISSA by Springer
Received: November 20, 2014 Revised: February 17, 2015 Accepted: March 22, 2015
Published: April 20, 2015
JHEP04(2015)108
Reducing differential equations for multiloop master integrals
Roman N. Lee
Budker Institute of Nuclear Physics, Novosibirsk, 630090 Russia
E-mail: [email protected]
Abstract: We present an algorithm for the reduction of the differential equations for master integrals to the Fuchsian form with the right-hand side matrix linearly depending on the dimensional regularization parameter ε. We consider linear transformations of the column of functions which are rational in the variable and in ε. Apart from some degenerate cases described below, the algorithm allows one to obtain the required transformation or to ascertain irreducibility to the required form. The degenerate cases are quite anticipated and likely to correspond to irreducible systems.
Keywords: NLO Computations
ArXiv ePrint: 1411.0911
Open Access, © The Authors. Article funded by SCOAP³.
doi:10.1007/JHEP04(2015)108
Contents
1 Introduction 1
2 Preliminaries 2
3 Reduction at one point 5
4 Global reduction 11
5 Reduction process 14
6 Factoring out ε 15
7 Using block-triangular form 16
8 Example 19
9 Conclusion 23
A The form of matrices S1 and S2 24
1 Introduction
Over the last few decades, the demand for multiloop calculations has been constantly growing, and the methods of such calculations have evolved accordingly. For multiscale integrals, probably the most powerful technique is the differential equations method [1–5]. Within this method, the master integrals are found as solutions of the differential equations obtained with the help of the IBP reduction [6–8].
Recently, a remarkable observation has been made by Henn in ref. [9] concerning the differential equations method. Namely, it appeared that in many cases the dependence on the dimensional regularization parameter ε of the right-hand side of the differential equations for the master integrals can be reduced to a single factor ε by a judicious choice of the master integrals. For brevity, in what follows we will refer to such a form of the differential system as ε-form. With this form (and also the initial conditions) at hand, finding the solution up to any fixed order in ε becomes a trivial task. Moreover, the solution manifestly possesses the remarkable property of homogeneous transcendental weight. Since then a number of papers have successfully applied this approach to the calculation of various classes of integrals [10–20].
In general, finding an appropriate basis is not easy. In ref. [9] two guiding principles have been suggested. The first method is based on the examination of generalized unitarity cuts, and the second one is based on finding an integral d log form. Both methods may be
used (with some amount of heuristic work) for determining whether a specific integral is homogeneous or not; however, in general, they do not give an algorithm for finding an appropriate basis (though they have proved their validity in a number of applications). In refs. [14, 21] algorithms of the reduction have been presented assuming a very special form of the differential system. Despite these advances, finding an appropriate basis has so far been rather an art than a skill. Therefore, devising a practical algorithm for finding the described form of the differential system is of essential interest.
In the present paper we describe a method of finding an appropriate basis which is based on the differential system alone. The system can be written in the matrix form
∂_x J = M(ε, x) J ,  (1.1)

where ε is the dimensional regularization parameter (d = 4 − 2ε), x is some parameter, J is the column of the master integrals, and M is an n × n matrix, rational in both ε and x.
Our main algorithm can be divided into three stages. At the first stage the differential system is reduced to the Fuchsian form, i.e., to a form where the elements of M have only simple poles with respect to x. After this stage, the matrix can be written as
M(ε, x) = Σ_k M_k(ε)/(x − x_k) .  (1.2)
Note that this step is always doable for systems with regular singularities. The possibility to reduce the system to Fuchsian form has been known since the works [22, 23] of Röhrl, and a specific algorithm for this reduction can be easily deduced from that of Barkatou & Pflügel [24, 25], see below. Algorithm 2 of the present paper is advantageous only in that it tries to minimize the number of apparent singularities generated during the reduction process. At the second stage the eigenvalues of M_k are normalized, i.e., their real parts are reduced to the interval [−1/2, 1/2). For systems reducible to ε-form this means that all eigenvalues are made proportional to ε. It is easy to see that, when this step is successful, the resulting system has no apparent singularities, see eq. (3.33) and the discussion after it. Finally, a constant transformation is searched for in order to factor out ε, i.e., to reduce the system to ε-form. We give one nontrivial example of the application of our algorithm.
Except for the last stage, our algorithm is not specific to systems depending on a parameter. In particular, it can be used to eliminate apparent singularities and to find the matrices of monodromy around singular points (up to similarity).
2 Preliminaries
We consider the system of differential equations for the master integrals as given in eq. (1.1). Under the change of functions
J = T(ε, x) J̃ ,  (2.1)

the system modifies to an equivalent system

∂_x J̃ = M̃(ε, x) J̃ ,  (2.2)

where

M̃ = T⁻¹MT − T⁻¹∂_x T .  (2.3)

The observation of ref. [9] states that it is often possible to find a transformation T so that the new column J̃ satisfies a simple equation

∂_x J̃ = εS(x) J̃ .  (2.4)

Though it is not stated explicitly in ref. [9], we will require that the matrix S has a Fuchsian form, i.e.,

S(x) = Σ_k S_k/(x − x_k) ,  (2.5)

where k runs over a finite set. This condition is very important on its own because the form (2.5) allows one to express the result in terms of generalized harmonic polylogarithms. In what follows we will often omit ε in the arguments of functions unless it may lead to confusion.
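For illustration, eq. (2.3) is directly computable. In the sketch below (the 2×2 system and the transformation are invented, not taken from the paper) the singular transformation T = diag(x, 1) lowers the residue eigenvalue 1 + ε at x = 0 to ε:

```python
import sympy as sp

x, ep = sp.symbols('x epsilon')

def transform(M, T):
    """Eq. (2.3): the matrix of the system satisfied by J-tilde, where J = T J-tilde."""
    return sp.simplify(T.inv() * M * T - T.inv() * sp.diff(T, x))

# Invented system: the residue at x = 0 has eigenvalues 1 + ep and -ep
M = sp.Matrix([[(1 + ep)/x, 0],
               [1/x,       -ep/x]])
T = sp.diag(x, 1)

Mt = transform(M, T)   # [[ep/x, 0], [1, -ep/x]]
```

After the transformation both residue eigenvalues at x = 0 are proportional to ε, which is what the normalization stage aims for.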
Definition 1. The differential system (1.1) is said to have a regular singularity at x = x0 ≠ ∞ (at x = x0 = ∞) if x = x0 ≠ ∞ is a singular point of M(x) (y = 0 is a singular point of −M(1/y)/y²) and all solutions of the system grow at most like a finite power of x − x0 (of x) in the sectorial vicinity of x0.
The power-like growth of the master integrals (which are the solutions of the system) in the vicinity of any point follows from their parametric representation. Therefore, it is natural to expect that all singular points of the di erential system for the master integrals are regular singularities.
An apparent singularity is a regular singularity which is a finite-order pole or a regular point of any solution of the system. Therefore, the monodromy around an apparent singularity is an identity matrix. As we shall see, this means that, locally, we can always remove an apparent singularity with a rational transformation.
Definition 2. The differential system (1.1) is said to have Poincaré rank p ≥ 0 at the singular point x = x0 ≠ ∞ if M(x) can be represented as M(x) = A(x − x0)/(x − x0)^{1+p}, where the matrix A(x) is regular at x = 0 and A(0) ≠ 0. The system is said to have Poincaré rank p ≥ 0 at the point x = ∞ if M(x) can be represented as M(x) = A(1/x) x^{p−1}, where the matrix A(y) is regular at y = 0 and A(0) ≠ 0.

If p = 0, we say that the system is Fuchsian at x = x0 and call A(0) a matrix residue. Respectively, we call x0 a Fuchsian point of the system.
It is easy to show that when the Poincaré rank of a system is zero at some point, this point is a regular singularity of the system. But the converse is not always true. However, if some point is a regular singularity, it is possible to transform the system into an equivalent one with zero Poincaré rank at that point. More generally, Moser [26] has given a necessary and sufficient condition for the possibility to reduce the (generalized) Poincaré rank of the system and also presented an algorithm for finding the appropriate transformation matrix. Barkatou and Pflügel have given an improved version of the algorithm in refs. [24, 25]. Their
algorithm consists of a sequence of rational transformations, each lowering the generalized Poincaré rank p + r/n − 1, where r = rank A(0) and n is the size of A(0). Applying these transformations several times for each singularity, one can minimize the Poincaré rank of all singularities, except maybe one (usually chosen to be x = ∞). In particular, if all singularities are regular, after the application of the algorithm the Poincaré ranks of all but one singularity can be nullified, and thus the system is reduced to a Fuchsian form everywhere, except, maybe, one point. In fact, their algorithm also allows one to transform a regular system to Fuchsian form globally, with the penalty of introducing some apparent singularities.
The possibility to transform a regular system to Fuchsian form at all points and to eliminate all apparent singularities would mean a positive solution of the 21st Hilbert problem, which consists in proving the existence of linear differential equations having a prescribed monodromy group. However, Bolibrukh in ref. [27] has proved, by presenting an explicit counterexample, that this is not always possible and thus the 21st Hilbert problem has a negative solution. Nevertheless, the problem of reducing, when it is possible, a rational differential system to Fuchsian form without apparent singularities is very important. An ultimate solution of this problem in the most general case, and, in particular, a way of deciding whether such a reduction is possible, is not known so far to the best of our knowledge.
Definition 3. The transformation (2.3) generated by the matrix T(x) is regular at x = x0 ≠ ∞ (at x = ∞) if T(x) = T0 + O(x − x0) (T(x) = T0 + O(1/x)) and det T0 ≠ 0.

In this definition the condition det T0 ≠ 0 simply states that T⁻¹(x) is also a power series near the point x = x0 (x = ∞). Naturally, regular transformations cannot change the pole order of M, so we have to consider singular transformations. While there are transformations singular at only one point of the extended complex plane, their form appears to be too restrictive for our purposes.¹ The key tool of our approach is the transformation singular at two points.
Definition 4. A balance is a transformation generated by a matrix T of the form

T(x) = B(P, x1, x2|x) ≝ P̄ + c (x − x2)/(x − x1) P ,  (2.6)

where c is some constant and P, P̄ are two complementary projectors, i.e. P² = P and P̄ = I − P. More specifically, we call the transformation generated by (2.6) the P-balance between x1 and x2.

Note that this transformation appears in the consideration of the Riemann problem in complex analysis, see, e.g., ref. [28]. We will always put c = 1 when both x1 and x2 are finite. When x1 = ∞ (when x2 = ∞), we put c = −x1 (c = −1/x2) and understand c(x − x2)/(x − x1) as a limit for x1 → ∞ (for x2 → ∞).

The inverse of a balance is also a balance, since

B(P, x1, x2|x) B(P, x2, x1|x) = I .  (2.7)
¹ See, however, section 7.
Therefore, the transformation (2.6) is regular everywhere except the points x = x1 and x = x2, where, respectively, T(x) and T⁻¹(x) have simple poles.
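The definition (2.6) and the inversion property (2.7) are easy to check symbolically; the rank-one projector below is invented, and both points are finite, so c = 1:

```python
import sympy as sp

x = sp.symbols('x')

def balance(P, x1, x2):
    """Eq. (2.6) with c = 1: B(P, x1, x2|x) = (I - P) + (x - x2)/(x - x1) * P."""
    n = P.shape[0]
    return (sp.eye(n) - P) + (x - x2)/(x - x1) * P

P = sp.Matrix([[1, 1],
               [0, 0]])        # invented projector: P*P == P
B12 = balance(P, 0, 1)
B21 = balance(P, 1, 0)

# Eq. (2.7): the inverse of a balance is the reverse balance
assert sp.simplify(B12 * B21) == sp.eye(2)
```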
3 Reduction at one point
The basic idea of reducing the Poincaré rank is to find a projector P such that the transformation generated by (2.6) lowers the rank of A0. For a regular singularity, the idea is to use (2.6) to normalize the eigenvalues of the matrix residue.
Let us concentrate on the reduction of the differential system at one point. Without loss of generality, we assume that x = 0 is a singular point of the system (1.1) and the Laurent series expansion of M(x) near x = 0 has the form

M(x) = A0 x^{−p−1} + A1 x^{−p} + O(x^{−p+1}) .  (3.1)
Lowering the Poincaré rank. First, let us consider the problem of lowering the Poincaré rank, so p > 0 in this subsection. We assume that A0 is a nilpotent matrix since this is a necessary condition for the existence of a transformation which lowers the Poincaré rank [26]. Therefore, A0 can be reduced to Jordan form with zero diagonal. Let r = rank A0; then a necessary and sufficient condition for the existence of a transformation lowering the generalized Poincaré rank p + r/n − 1 introduced in ref. [26] is that

x^r det(A0/x + A1 − λI)|_{x=0} = 0  (3.2)

identically as a function of λ.
It is convenient to use an equivalent form of this condition, which was introduced in ref. [25]. Let {u^(σ)_k | k = 1, …, N; σ = 0, …, n_k} be a basis constructed of the generalized eigenvectors of A0 with the properties

A0 u^(0)_k = 0 , A0 u^(σ+1)_k = u^(σ)_k .  (3.3)

Here N is the number of Jordan cells (including the trivial ones), and n_k is the rank of the k-th Jordan cell, which is its dimension minus one. In what follows we assume that the Jordan cells are ordered by their sizes, so that n1 ≥ n2 ≥ … ≥ nN. Let
U = (u^(0)_1, …, u^(n1)_1, u^(0)_2, …, u^(n2)_2, …)  (3.4)

be the matrix with columns u^(σ)_k. This matrix generates the similarity transformation A0 → Ã0 = U⁻¹A0U reducing A0 to Jordan form. Then

U⁻¹ = (v^(n1)_1, …, v^(0)_1, v^(n2)_2, …, v^(0)_2, …) ,  (3.5)

where the rows v^(σ)_k are the generalized eigenvectors of A0 satisfying

v^(0)_k A0 = 0 , v^(σ+1)_k A0 = v^(σ)_k .  (3.6)

We will call v^(σ)_k the left generalized eigenvectors of A0, in contrast to u^(σ)_k, which we will call the right generalized eigenvectors of A0.
From U⁻¹U = I we have

v^(σ)_k u^(σ′)_l = δ_{kl} δ_{σ+σ′, n_k} ,  (3.7)

so that {u^(σ)_k | k = 1, …, N; σ = 0, …, n_k} and {v^(σ)_k | k = 1, …, N; σ = n_k, …, 0} are the dual bases.

One observes that relations (3.3), (3.6), (3.7) are invariant under the following basis transformation:

u^(σ)_k → u^(σ)_k + c u^(σ)_l , v^(n_l−σ)_l → v^(n_l−σ)_l − c v^(n_k−σ)_k , (σ = 0, 1, …, n_k) ,  (3.8)

where c is an arbitrary number, and k and l are some fixed Jordan cell numbers, k > l (we remind that n1 ≥ n2 ≥ … ≥ nN in our convention).

The above transformation corresponds to the transformation of the matrix U:

U → U(I + cE^(l,k)) ,  (3.9)

where (E^(l,k))_{c^(σ)_i c^(σ)_j} = δ_{il} δ_{jk}. Here we denoted by c^(σ)_k the number of the column in which u^(σ)_k stands in U. The condition (3.2) can be written as [24, 25]

det L(λ) = det(L0 + λL1) = 0 ,  (3.10)

where

L(λ) = L0 + λL1 = [v^(0)_k (A1 + λI) u^(0)_l]  (k, l = 1, …, N) .  (3.11)

The transformation (3.8) induces the following transformation of the matrix L0:

L0 → (I − c δ_{n_k n_l} Δ^(l,k)) L0 (I + c Δ^(l,k)) ,  (3.12)

where Δ^(l,k) is the matrix with unity at the intersection of the l-th row and the k-th column and zeros elsewhere, i.e. (Δ^(l,k))_{ij} = δ_{il} δ_{jk}. It is easy to check that L1 is invariant under these transformations. A general composition of the transformations of the form (3.9) can be written as

U → U(I + E) ,  (3.13)

L0 → (I − Δ̃) L0 (I + Δ) ,  (3.14)

E = Σ_{l,k; l<k} c_{l,k} E^(l,k) , Δ = Σ_{l,k; l<k} c_{l,k} Δ^(l,k) .  (3.15)

The expression for Δ̃ can be derived from the representation I + Δ = Π_i (I + c_i Δ^(l_i,k_i)), but its explicit form is irrelevant for the further discussion. What is relevant is that, given an arbitrary upper-triangular matrix Δ with zero diagonal, we can easily reconstruct E.

Our idea now is to use the transformations (3.12) for the reduction of the matrix L to some suitable form, allowing for a simple determination of the appropriate projector P for the rank-reducing transformation (2.6). Namely, we have the following
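The chain relations (3.3), (3.6) and the duality (3.7) can be checked mechanically. In the sympy sketch below the nilpotent A0 is invented (one nontrivial Jordan chain plus one trivial cell):

```python
import sympy as sp

# Invented nilpotent A0: one chain u(0) -> u(1) and one trivial Jordan cell
A0 = sp.Matrix([[0, 1, 0],
                [0, 0, 0],
                [0, 0, 0]])

U, J = A0.jordan_form()   # A0 = U J U^{-1}; columns of U are the right chains
V = U.inv()               # rows of V are the left generalized eigenvectors

assert A0 * U == U * J    # encodes A0 u(0) = 0, A0 u(1) = u(0) columnwise
assert V * U == sp.eye(3) # the duality behind eq. (3.7)
```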
Claim 1. Using the transformations (3.12) it is possible to secure that (L0)_{jk} = 0 for any j and k satisfying

j ∉ S  &  k ∈ S ∪ {k0} ,  (3.16)

where k0 is the number of a nontrivial Jordan cell (so that n_{k0} ≠ 0) and S is some set of numbers of trivial Jordan cells, i.e. for any i ∈ S one has n_i = 0.

A constructive proof of this claim is given in algorithm 1.

Input: Matrix L0 and integer r, such that L1 = diag(0, …, 0, 1, …, 1), with the first r entries equal to zero, and (3.10) holds.
Output: {k0, S, Δ}, where Δ is upper-triangular with zero diagonal, such that the transformation (3.14) results in an L0 of the form described in Claim 1 with the corresponding k0 and S.
 1 begin
 2   S ← ∅
 3   Δ ← zero matrix
 4   repeat
 5     Construct L̃0 = (a1, a2, …) by striking out from L0 all rows with numbers from S. Below a_i denotes the i-th column of this matrix.
 6     Find the minimal i such that i ∉ S and the i-th column of L̃0 is linearly dependent on the first i − 1 columns: a_i = c1 a1 + … + c_{i−1} a_{i−1}.
 7     Δ0 ← −c1 Δ^(1,i) − … − c_{i−1} Δ^(i−1,i)
 8     Δ̃0 ← −c1 δ_{n1 n_i} Δ^(1,i) − … − c_{i−1} δ_{n_{i−1} n_i} Δ^(i−1,i)
 9     L0 ← (I − Δ̃0) L0 (I + Δ0)
10     Δ ← Δ + Δ0 + ΔΔ0
11     S ← S ∪ {i}
12   until i ≤ r
13   return {i, S \ {i}, Δ}

Algorithm 1. Reducing L0.

The transformation on line 9 guarantees that the i-th column of L̃0 with i ∈ S is zero. It may not be obvious why it is always possible to find an appropriate i on line 6 when S contains only numbers larger than r. To explain this, let us examine the form of the matrix L(λ) after m passes of the repeat loop. Then S = {i1, …, i_m}, where i_j > r is the number appearing at pass #j. Let L′(λ) denote the matrix obtained from L(λ) by a simultaneous rearrangement of columns and rows such that the i_k-th column (and row) of the latter is the k-th-to-last of the former. Then L′(λ) has the following block form

L′(λ) = ( X(λ)   0
          Y      Z(λ) ) ,  (3.17)
where Z(λ) is a lower-triangular m × m matrix with diagonal elements equal to λ. Then, from the condition det L′(λ) = det L(λ) = 0, we obtain det X(λ) = 0, and, in particular,

det X(0) = 0 .  (3.18)

Now we note that the columns of X(0) coincide, up to rearrangement, with the eligible columns of L̃0 on line 5 of the algorithm, and the condition (3.18) tells us that there is a linear dependence between them. Thus, it is indeed possible to find i as prescribed on line 6. The algorithm terminates at the latest when all i > r are already included in S.

Input: Matrix M(x) appearing in the right-hand side of the differential equation.
Output: Transformation matrix T(x) transforming M(x) to M̃(x), such that M̃(x) is Fuchsian at any point.
 1 begin
 2   M̃ ← M(x)
 3   T ← identity matrix
 4   while there is a point with positive Poincaré rank do
 5     if there is a pair of singular points x1 and x2, such that
         1. the Poincaré rank of the system at x = x1 is positive;
         2. it is possible to construct the projector Q as in eq. (4.6)
 6     then
 7       T0 ← B(Q, x1, x2|x)
 8       M̃ ← T0⁻¹ M̃ T0 − T0⁻¹ ∂_x T0
 9       T ← T T0
10     else
11       Let x1 be the point with positive Poincaré rank.
12       Choose an arbitrary regular point x2.
13       T0 ← B(P, x1, x2|x), where P is defined in eq. (3.20)
14       M̃ ← T0⁻¹ M̃ T0 − T0⁻¹ ∂_x T0
15       T ← T T0
16   return T

Algorithm 2. Reduction to Fuchsian form.
Now we can use the output of algorithm 1 for the construction of the appropriate projector, such that the transformation (2.6) strictly lowers the rank of A0. First, we use Δ for the reconstruction of the matrix E. To this end it suffices to represent Δ as a linear combination of the Δ^(l,k). Trivially, Δ = Σ_{l,k; l<k} Δ_{lk} Δ^(l,k), so E = Σ_{l,k; l<k} Δ_{lk} E^(l,k). Using this matrix, we apply the transformation (3.13) to U. Let now u^(σ)_k and v^(σ)_k be defined via eqs. (3.4) and (3.5) for the transformed U.

Claim 2. The transformation generated by

T = B(P, 0, x2|x) ,  (3.19)

where x2 ≠ 0 and

P = Σ_{k∈S∪{k0}} u^(0)_k v^(n_k)_k = u^(0)_{k0} v^(n_{k0})_{k0} + Σ_{k∈S} u^(0)_k v^(0)_k ,  (3.20)

strictly lowers the rank of A0.

The proof is very simple. We note that A0P = 0 and the Laurent expansion of the transformed matrix M̃ near x = 0 has the form

M̃(x) = Ã0 x^{−p−1} + O(x^{−p}) ,  (3.21)

where

Ã0 = P̄A0 + P̄A1P .  (3.22)

In order to prove that Ã0 has matrix rank strictly smaller than that of A0, it is sufficient to demonstrate that Ã0 has more eigenvectors (with zero eigenvalue) than A0. Let us check that any left eigenvector v^(0)_j of A0 remains an eigenvector of Ã0. This is obvious for j ∈ S, since v^(0)_{j∈S} P̄ = 0. Let now j ∉ S. Then v^(0)_j P̄ = v^(0)_j (in particular, this is valid for j = k0, since v^(0)_{k0} u^(0)_{k0} = 0). Then

v^(0)_j Ã0 = v^(0)_j (A0 + A1P) = v^(0)_j A1P = Σ_{k∈S∪{k0}} (L0)_{jk} v^(n_k)_k  (j ∉ S) .  (3.23)

But, according to Claim 1, (L0)_{jk} = 0 in the sum. So, we have proved that all left eigenvectors of A0 remain eigenvectors of Ã0. Obviously, we have an extra eigenvector of the latter, namely v^(n_{k0})_{k0}, since v^(n_{k0})_{k0} P̄ = 0.

Applying (3.19) several times, we lower the rank of the leading coefficient A0 until it becomes zero (and thus A0 itself is zero). This lowers the Poincaré rank by one. Acting in the same way, we finally lower the Poincaré rank to zero.

Algorithm 1, as well as the transformation (3.19), is very similar to those presented in refs. [24, 25]. Moreover, our transformation is not optimal in the sense of [25]. The only advantage of our transformation (3.19) is that it gives as few terms in the sum in eq. (3.20) as possible. This will be helpful for the constructions of section 4.

Normalizing eigenvalues in Fuchsian singularities. The results of the previous subsection allow one to reduce the Poincaré rank at one point in a stepwise manner, provided A0 is nilpotent and (3.2) holds. If at some step either of these two conditions fails, then the point is irregular. Otherwise, we can lower the Poincaré rank to zero, i.e., make the system Fuchsian at the given point. The question remains whether we can do still better: can we find a rational transformation that will restrict the form of the matrix residue? In this subsection we assume that p = 0 in eq. (3.1), i.e., that the Laurent series expansion of M(x) near x = 0 has the form

M(x) = A0/x + A1 + O(x) .  (3.24)
Similar to the previous subsection, let

{u^(σ)_k | k = 1, …, N; σ = 0, …, n_k}  (3.25)

be a basis constructed of the generalized eigenvectors of A0 with the properties

A0 u^(0)_k = λ_k u^(0)_k , A0 u^(σ+1)_k = λ_k u^(σ+1)_k + u^(σ)_k .  (3.26)

The vectors of the dual basis {v^(n1)_1, …, v^(0)_1, v^(n2)_2, …, v^(0)_2, …} obey the orthonormality condition (3.7) and satisfy

v^(0)_k A0 = λ_k v^(0)_k , v^(σ+1)_k A0 = λ_k v^(σ+1)_k + v^(σ)_k .  (3.27)

Let us consider the transformation generated by B(P, 0, x2|x), where

P = u^(0)_1 v^(n1)_1 .  (3.28)

Since P̄A0P = λ1 P̄P = 0, the Laurent series expansion of the transformed matrix M̃ near x = 0 starts from x⁻¹:

M̃(x) = Ã0/x + O(x⁰)  (3.29)
with

Ã0 = P̄A0 + A0P + P + P̄A1P .  (3.30)

Proposition 1. With account of multiplicity, only one eigenvalue of Ã0 is different from the corresponding eigenvalue of A0. Namely, λ1 changes to λ1 + 1.

The proof of this proposition becomes obvious if one examines the form of Ã0 in the basis (3.25) and calculates its characteristic polynomial. Indeed, in the basis (3.25), the matrix A0 has the form A0 = diag(λ1, λ2, …) + diag^(1)(f1, f2, …), where diag^(1) denotes the matrix with f1, f2, … standing above the diagonal and zeros elsewhere, f_i = 0 or 1. Then

Ã0 = c1 · (1, 0, …) + diag(λ1 + 1, λ2, …) + diag^(1)(0, f2, …) ,  (3.31)

where c1 is the first column of the matrix A1. So, the matrix Ã0 differs from A0 only in the first column and the first row. Obviously, the characteristic polynomial of the former is P(Ã0, λ) = (λ1 + 1 − λ) P(A0, λ)/(λ1 − λ).
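Proposition 1 can be verified numerically. The sketch below uses an invented diagonalizable A0 (all Jordan chains trivial, so P = uv with vu = 1, cf. eq. (3.28)) and builds Ã0 from eq. (3.30):

```python
import numpy as np

rng = np.random.default_rng(0)

lam = [0.3, -0.7, 1.2]                       # eigenvalues of the invented A0
S = rng.normal(size=(3, 3))
A0 = S @ np.diag(lam) @ np.linalg.inv(S)
A1 = rng.normal(size=(3, 3))                 # invented subleading coefficient

u = S[:, [0]]                                # right eigenvector for lam[0]
v = np.linalg.inv(S)[[0], :]                 # left eigenvector, v @ u == 1
P = u @ v                                    # rank-one projector of eq. (3.28)
Pbar = np.eye(3) - P

A0t = Pbar @ A0 + A0 @ P + P + Pbar @ A1 @ P # eq. (3.30)

# Only the eigenvalue 0.3 is shifted, to 0.3 + 1 = 1.3
shifted = sorted(np.linalg.eigvals(A0t).real)
```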
Similarly, B(u^(n1)_1 v^(0)_1, x2, 0|x) shifts one eigenvalue down. Thus we come to the following

Claim 3. Using the balances

B(u^(0)_1 v^(n1)_1, 0, x2|x) , B(u^(n1)_1 v^(0)_1, x2, 0|x) ,  (3.32)

it is possible to reduce the matrix residue to the normalized form, in which all its eigenvalues have real parts lying in the interval [a, a + 1), where a is a real number.
The usual choice is a = 0; however, we will prefer a = −1/2 for reasons which should become clear from the considerations below. Note that in this normalized form the monodromy matrix for a small loop around x = 0 is given, up to similarity, by

M = exp[2πi A0] .  (3.33)

Thus, using the results of this subsection and the previous one, we can simply find the monodromy matrix around any regular point of the differential system. In particular, we can detect whether a given point is an apparent singularity (i.e., whether the monodromy is an identity). To this end, we note that, given that A0 is normalized and eq. (3.33) defines an identity matrix, one may easily conclude that A0 = 0 (by considering the matrix function of the Jordan form). Therefore, normalization totally eliminates any apparent singularity.

Note that if the matrix residue is not normalized, the monodromy matrix is, in general, not given by eq. (3.33) due to resonances (eigenvalues of A0 whose difference is an integer).
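The apparent-singularity test based on eq. (3.33) is mechanical: for a normalized residue the monodromy is the identity iff A0 = 0. A sympy sketch with an invented nilpotent residue (all eigenvalues zero, hence normalized, but with a nontrivial Jordan cell):

```python
import sympy as sp

A0 = sp.Matrix([[0, 1],
                [0, 0]])              # normalized (eigenvalues 0) but nonzero

Mono = (2 * sp.pi * sp.I * A0).exp()  # eq. (3.33), up to similarity
# The series truncates because A0 is nilpotent: Mono = I + 2*pi*I*A0,
# which is not the identity, so x = 0 is a genuine (non-apparent) singularity
```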
4 Global reduction
The transformations considered in the previous section have a serious flaw: while improving the form of the matrix at one point, they, in general, worsen its form at another. In principle, the reduction of the Poincaré rank to zero can always be done at the cost of introducing some apparent Fuchsian singularities. This is because balances may increase the pole order at most by one. So, choosing at each step a regular point as x2, we can globally reduce the Poincaré rank to zero. However, we, of course, would like to avoid generating unnecessary apparent singularities in the process of reducing the Poincaré rank. The situation is different when we want to normalize all Fuchsian singularities. In this case we definitely do not want to generate apparent singularities, since an apparent singularity is never normalized (otherwise there would be no singularity at all). In the present section we show that, except for some degenerate cases, it is possible to slightly modify the projectors constructed in the previous section so that the resulting balances respect the Poincaré rank at the second point.
Let us first describe transformations which do not increase the Poincaré rank at any point. Suppose x1 and x2 are two finite singular points of the matrix M(x), so that the Laurent series around x1 and x2 have the form

M(x) = A0 (x − x1)^{−p1−1} + O((x − x1)^{−p1}) ,  (4.1)
M(x) = B0 (x − x2)^{−p2−1} + O((x − x2)^{−p2}) ,  (4.2)

and p1 ≥ 0, p2 ≥ 0.
Claim 4. If Q is a projector such that Im Q and Ker Q are invariant subspaces of A0 and B0, respectively, then the transformation B(Q, x1, x2|x) does not increase the Poincaré rank of M at any point.
The proof is straightforward after observing that Q satisfies

Q̄A0Q = QB0Q̄ = 0 .  (4.3)

We stress that the claim is also valid when one or both points are Fuchsian.

More explicitly, let {u1, …, um} span an m-dimensional invariant space of A0. Suppose that, among the m-dimensional left invariant spaces of B0, there is one which allows for a basis {v1, …, vm} satisfying

v_j u_k = δ_{jk} .  (4.4)

Such a basis for the m-dimensional left space exists iff the space does not contain a vector orthogonal to all u1, …, um. Then

Q = Σ_{k=1}^{m} u_k v_k  (4.5)

is the projector satisfying the conditions of Claim 4. Let us now consider the Q-balance between x1 and x2 with

Q = Σ_{k∈S∪{k0}} u^(0)_k v_k ,  (4.6)

where all notations are as in eq. (3.20) except that now the v_k span some left-invariant space of B0, but still satisfy v_j u^(0)_k = δ_{jk}.

Claim 5. Let M(x) have the Laurent series expansion near x = 0 as in (3.1) with p > 0 and that near x = x2 as in (4.2). Then the Q-balance between 0 and x2, eq. (2.6) with Q from eq. (4.6), strictly diminishes the matrix rank of A0 and does not increase the Poincaré rank at any other point.

In order to prove this claim, let us use the identities

PQ = Q , QP = P  (4.7)

and

A0Q = A0P = 0 .  (4.8)

These identities simply follow from the definitions of the projectors P and Q, eqs. (3.20) and (4.6). Then

Ã0 = Q̄A0 + Q̄A1Q = (Q̄ + P)P̄A0 + (Q̄ + P)P̄A1P(P̄ + Q) = (Q̄ + P)[P̄A0 + P̄A1P](P̄ + Q) .  (4.9)

The expression in square brackets is just the transformation of the leading coefficient generated by B(P, 0, x2|x). Taking into account that (Q̄ + P) = (P̄ + Q)⁻¹, we see that the transformed leading coefficient Ã0 after the transformation T1 = B(Q, 0, x2|x) coincides with that after the transformation T2 = B(P, 0, x2|x)(P̄ + Q) (note that these transformations are nevertheless different, since T1 = (Q̄ + P)T2). Then, the correctness of Claim 5 follows from the fact that, on the one hand, B(Q, 0, x2|x) satisfies the conditions of Claim 4, and, on the other hand, the leading coefficient is transformed as though by a transformation which is a product of B(P, 0, x2|x), satisfying the conditions of Claim 2, and a constant nonsingular matrix (which does not change the rank of A0).
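The identities (4.7), together with (Q̄ + P) = (P̄ + Q)⁻¹ used in eq. (4.9), rely only on P and Q sharing the image; a quick numerical check with invented rank-one projectors:

```python
import numpy as np

rng = np.random.default_rng(1)

# P = u w^T and Q = u v^T share the image span{u}; normalize w u = v u = 1
u = rng.normal(size=(4, 1))
w = rng.normal(size=(1, 4)); w = w / (w @ u)
v = rng.normal(size=(1, 4)); v = v / (v @ u)
P, Q = u @ w, u @ v
I = np.eye(4)

assert np.allclose(P @ Q, Q)                      # eq. (4.7): PQ = Q
assert np.allclose(Q @ P, P)                      # eq. (4.7): QP = P
assert np.allclose((I - Q + P) @ (I - P + Q), I)  # (Qbar + P)(Pbar + Q) = I
```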
Similar modifications should also be made for the balances (3.32) used for the normalization of the matrix residue eigenvalues. We simply replace in their definitions the vectors v^(n1)_1 and u^(n1)_1 with v and u, which are the left and right eigenvectors of the matrix B0, respectively, provided they satisfy v u^(0)_1 = 1 and v^(0)_1 u = 1.

Claim 6. Let M(x) have the Laurent expansion near x = 0 as in (3.24) and that near x = x2 as in (4.2). Let u and v be right and left eigenvectors of A0 and B0, respectively. Then B(uv, 0, x2|x) increases by one the eigenvalue of A0 corresponding to u, and does not increase the Poincaré rank at any point.

The proof is very similar to the previous case. Let now Q = uv and P be defined in (3.28) with u^(0)_1 = u. In addition to the identities (4.7) we now use

A0Q = λQ , A0P = λP .  (4.10)

Then

Ã0 = Q̄A0 + A0Q + Q + Q̄A1Q
   = (Q̄ + P)P̄A0 + (Q̄ + P)(A0 + I)Q + (Q̄ + P)P̄A1P(P̄ + Q)
   = (Q̄ + P)P̄A0 + (Q̄ + P)(A0 + I)P(P̄ + Q) + (Q̄ + P)P̄A1P(P̄ + Q)
   = (Q̄ + P)[P̄A0 + A0P + P + P̄A1P](P̄ + Q) ,  (4.11)

where in the last transition we used the identity P̄A0 = P̄A0(P̄ + Q). Again, we see that the expression in square brackets is just the transformation of the leading coefficient generated by B(P, 0, x2|x). Since Ã0 is, up to a similarity, the same as in (3.30), Proposition 1 proves the claim.

If the second point is also Fuchsian, this transformation simultaneously shifts in the opposite direction the eigenvalue of the matrix B0 corresponding to v and u, respectively. Therefore, the process of normalization resembles balancing the scales; this is the reason why we call the transformation (2.6) a balance.

Definition 5. We say that the Fuchsian point x1 can be balanced with the singular point x2 ≠ x1 if at least one of the two conditions holds:

1. there exist u and v, right and left eigenvectors of A0 and B0, respectively, such that vu = 1 and the real part of the eigenvalue of A0 corresponding to u is less than −1/2;

2. there exist u and v, right and left eigenvectors of B0 and A0, respectively, such that vu = 1 and the real part of the eigenvalue of A0 corresponding to v is greater than or equal to 1/2.

Here A0 and B0 are the matrix residues of the Laurent expansion of M(x) near x = x1 and x = x2, respectively. More specifically, we say that x1 can be balanced with x2 via B(uv, x1, x2|x) or via B(uv, x2, x1|x), depending on whether the first or the second condition holds.
Definition 6. We say that two Fuchsian points x1 and x2 ≠ x1 can be mutually balanced if at least one of the two conditions holds:

1. there exist u and v, A0u = λu, vB0 = μv, such that ℜλ < −1/2, ℜμ ≥ 1/2, and vu = 1;

2. there exist u and v, B0u = λu, vA0 = μv, such that ℜλ < −1/2, ℜμ ≥ 1/2, and vu = 1.

Here A0 and B0 are the matrix residues of the Laurent expansion of M(x) near x = x1 and x = x2, respectively. More specifically, we say that x1 and x2 can be mutually balanced via B(uv, x1, x2|x) or via B(uv, x2, x1|x), depending on whether the first or the second condition holds.
The reason for these definitions is clear: if x1 can be balanced with some point, there exists a balance which moves one eigenvalue of the matrix residue at x = x1 towards the interval [−1/2, 1/2). If two points can be mutually balanced, there exists a balance which moves one eigenvalue of the matrix residue at x = x1 and one eigenvalue of that at x = x2 towards the interval [−1/2, 1/2).
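The eigenvalue bookkeeping behind these definitions can be sketched numerically. The snippet below is our own minimal illustration, not code from the paper: for a system with two finite singular points, M(x) = A0/(x − x1) + B0/(x − x2), it builds the projector P = uv/(vu) from a right eigenvector u of A0 and a left eigenvector v of B0, forms the residues of the balanced system in closed form (our derivation, assuming the balance has the form B(P, x1, x2|x) = (I − P) + ((x − x2)/(x − x1))P), and checks that the selected eigenvalue of A0 is raised by one while that of B0 is lowered by one.

```python
import numpy as np

# Sketch (ours, not the paper's code).  Two-point Fuchsian system
#   M(x) = A0/(x - x1) + B0/(x - x2),
# balance assumed in the form
#   B(P, x1, x2 | x) = (I - P) + ((x - x2)/(x - x1)) P,   P = u v / (v u),
# with u a right eigenvector of A0 and v a left eigenvector of B0.

A0 = np.array([[2.0, 1.0], [0.0, -1.0]])   # eigenvalues {2, -1}
B0 = np.array([[1.0, 0.0], [4.0, 3.0]])    # eigenvalues {1, 3}

u = np.array([[1.0], [-3.0]])              # A0 u = -u
v = np.array([[2.0, 1.0]])                 # v B0 = 3 v
P = (u @ v) / (v @ u)                      # rank-1 projector, P @ P = P
Pb = np.eye(2) - P

# Residues of the transformed system M~ = T^-1 M T - T^-1 T' (our derivation;
# the cross terms come from the regular part of M at the opposite pole):
A0_new = Pb @ A0 @ Pb + P @ A0 @ P + Pb @ B0 @ P + P
B0_new = Pb @ B0 @ Pb + P @ B0 @ P + P @ A0 @ Pb - P

print(sorted(np.linalg.eigvals(A0_new).real))   # approximately [0, 2]: -1 raised
print(sorted(np.linalg.eigvals(B0_new).real))   # approximately [1, 2]: 3 lowered
```

The cross terms (I − P)B0P and PA0(I − P) leave the new residues block-triangular with respect to the splitting induced by P, which is why only the selected pair of eigenvalues moves.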
5 Reduction process
The transformations described in the two previous sections give one much freedom in reducing a given system to Fuchsian form and in normalizing the eigenvalues of the matrix residues at the Fuchsian points. Let us summarize the basic line of the reduction process in the form of two algorithms.
Note that this algorithm assumes that all singular points of the system are regular, so the transformation on line 13 can always be constructed. Let us comment on condition 2 on line 5. This condition holds if it is possible to find an invariant subspace of the matrix B0 which has a dual basis with {u(0)_k, k ∈ S ∪ {l}}, see (4.6). This appears to be a nontrivial task, due to the complexity of the set of invariant subspaces of an arbitrary matrix, see, e.g., ref. [29]. However, one might try the subspace formed by the eigenvectors of B0, consecutively adding vectors from the Jordan chains if needed. If these attempts fail, one may simply go to line 10, with the penalty of possibly introducing an extra apparent singularity. Given that at the next stage this singularity is likely to disappear, this is not a real problem.

The next stage is described by the following algorithm. Though being very useful, the above algorithms do not necessarily give a canonical form of M(x) in any sense. In particular, the outcome depends on the sequence of the pairs of points chosen at a specific step. However, in many tested cases, this procedure succeeds in normalizing the system at all but one singular point, in particular, removing all apparent singularities. As was already mentioned, the possibility of removing all apparent singularities is equivalent to the content of the 21st Hilbert problem. As proved by Bolibrukh [27], this task is not always possible to complete and, therefore, the 21st Hilbert problem has a negative solution. In his paper Bolibrukh presents an example of a system which can not
be reduced to Fuchsian form without apparent singularities. We have checked that our algorithm indeed fails to reduce this system: at some step it turns out not to be possible to balance an apparent singularity with any other singular point, due to the orthogonality of the corresponding eigenvectors.
On the other hand, in the same paper it was proved that for n = 2 the 21st Hilbert problem can always be solved. For our setup, this translates into the statement that, given a Fuchsian system of two equations, it is always possible to get rid of the apparent singularities. Let us show that the tools developed in this section easily allow one to perform this task, thus giving a constructive proof of the statement. Our line of reasoning is very simple: we show that it is always possible to shift the eigenvalues of the matrix residue at the apparent singularity towards the interval [−1/2, 1/2) without introducing new apparent points and without increasing the pole order. The eigenvalues of the matrix residue at an apparent singularity should definitely be integer; otherwise, we may show that the point is not an apparent singularity by normalizing the system at this point (possibly spoiling its form at the others) and calculating the monodromy from eq. (3.33). Moreover, when both eigenvalues are zero, the whole matrix residue should be zero. Then, in a finite sequence of shifts we will eventually eliminate the singularity. Eliminating the singularities one by one, we obtain the desired form.
Suppose x = 0 is the apparent singularity and A0 ≠ 0 is the 2 × 2 matrix residue at this point. Note that a differential system in Fuchsian form can not have only one singular point, so we may rely on the existence of at least one singularity different from x = 0. If both eigenvalues of A0 are nonzero and of the same sign, we may use the transformation T = ((x − x2)/x) I or T = (x/(x − x2)) I to raise or lower both eigenvalues; here x2 is some other singular point. Thus, we may restrict ourselves to the case when, say, one eigenvalue is negative and the other one is non-negative. Suppose that A0 = diag(n1, n2) with n1 < 0 and n2 ≥ 0. The right eigenvector of A0 corresponding to n1 is u = (1, 0)^T. Suppose that all left eigenvectors of the matrix residues at the other singular points are orthogonal to u. Then it is easy to show that the general form of these matrix residues is

( a_i  b_i )
( 0    a_i ) .

But this form is in obvious contradiction with the requirement that the sum of all matrix residues be zero: the diagonal elements of this sum are n1 + Σ_i a_i and n2 + Σ_i a_i, which can not both vanish. Therefore, there is a left eigenvector v of the matrix residue at some point x2 such that vu = 1, and x = 0 can be balanced with x = x2 via B(uv, 0, x2|x).
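A minimal check of the scalar transformation used above (our own sketch, not the paper's code): for T = f(x)I one has M̃ = M − (f′/f)I, and f = (x − x2)/x gives f′/f = 1/(x − x2) − 1/x, so the residue at x = 0 is raised by the identity matrix and the residue at x = x2 is lowered by it.

```python
import numpy as np

# Sketch (ours): M(x) = A0/x + B0/(x - x2) with both eigenvalues of A0
# negative.  The scalar transformation T = ((x - x2)/x) I gives
#   M~(x) = M(x) - (f'/f) I,   f'/f = 1/(x - x2) - 1/x,
# i.e. the residues become A0 + I and B0 - I.

A0 = np.array([[-2.0, 1.0], [0.0, -1.0]])   # eigenvalues {-2, -1}
B0 = np.array([[3.0, 0.0], [5.0, 1.0]])
x2 = 1.0

A0_new = A0 + np.eye(2)   # eigenvalues shift to {-1, 0}
B0_new = B0 - np.eye(2)

def M_t(x):
    M = A0 / x + B0 / (x - x2)
    return M - (1.0 / (x - x2) - 1.0 / x) * np.eye(2)

# numerical cross-check: the residue of M~ at x = 0 is  lim x*M~(x) = A0 + I
x = 1e-7
assert np.allclose(x * M_t(x), A0_new, atol=1e-5)
```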
6 Factoring out ε
So far, we have described constructions which are not specific to systems depending on a parameter. However, the idea of their application to the reduction of systems depending on ε should be clear. First, we use algorithm 2 to reduce the system to Fuchsian form. A necessary condition for the existence of the ε-form (2.4) is that each eigenvalue of every matrix residue differs from an integer by a term proportional to ε. If this condition is not satisfied, then the system definitely can not be transformed to the form (2.4). In this case one might try some change of variable.²

²Note that such a situation often happens for integrals with massive internal lines. When passing back to the original variable, one encounters transformations involving algebraic functions (in particular, square roots).

If the condition holds, one may pass to algorithm 3 in
order to normalize the eigenvalues of the matrix residues at all but one point x = x1, assuming ε is sufficiently small (i.e., assuming that an eigenvalue of the form n + O(ε) belongs to the interval [−1/2, 1/2) only if n = 0). If this step appears to be doable, the normalized eigenvalues are all proportional to ε. The sum of the eigenvalues at x = x1 is then also proportional to ε, since the matrix residue at this last point is simply minus the sum of the matrix residues at the normalized points (so its trace is minus the sum of their traces). Then one should try to balance x = x1 in two steps. First, shift down one of the positive unnormalized eigenvalues by means of a balance with some point x = x2, either singular or regular, and then mutually balance x1 and x2, shifting up one of the negative unnormalized eigenvalues of the matrix residue at x = x1.
Let us assume from now on that it appeared possible to secure, by the above method, that the system is Fuchsian and normalized at all points. Then we have a system

∂_x J = Σ_k M_k(ε)/(x − x_k) J ,   (6.1)
and the eigenvalues of all the matrices M_k are proportional to ε. Clearly, this does not necessarily mean that the matrices M_k themselves are proportional to ε. If we had only one matrix M_1(ε), we could have factored out ε by making a transformation which brings M_1(ε)/ε to Jordan form. In the general case we need to find an x-independent transformation matrix which simultaneously transforms all the matrices M_k(ε) to the form εS_k, where S_k are constant matrices.³ Let T(ε) be such a matrix. Then we have

T^{-1}(ε) [M_k(ε)/ε] T(ε) = S_k = T^{-1}(μ) [M_k(μ)/μ] T(μ) .   (6.2)
Multiplying this equation by T(ε) from the left and by T^{-1}(μ) from the right, we obtain the linear system

[M_1(ε)/ε] T(ε, μ) = T(ε, μ) [M_1(μ)/μ] ,
  ...
[M_m(ε)/ε] T(ε, μ) = T(ε, μ) [M_m(μ)/μ]   (6.3)

for the elements of the matrix T(ε, μ) = T(ε)T^{-1}(μ). If the general solution of this system (found routinely) determines an invertible matrix, the transformation we are looking for can be chosen as T(ε) = T(ε, μ_0), where μ_0 is some arbitrarily chosen number, provided T(ε, μ) is nonsingular at μ = μ_0.
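The system (6.3) can be solved mechanically by vectorization: each equation [M_k(ε)/ε] T = T [M_k(μ)/μ] is linear in the entries of T and becomes [M_k(ε)/ε ⊗ I − I ⊗ (M_k(μ)/μ)^T] vec(T) = 0 for the row-major vec, and the common nullspace of the stacked blocks yields T(ε, μ). The sketch below is our own toy illustration; the matrices G, S1, S2 are invented so that the system is reducible by construction.

```python
import numpy as np

def residues(eps):
    # toy system known to be reducible: M_k(eps) = eps * G(eps) S_k G(eps)^-1
    G = np.array([[1.0, eps], [0.0, 1.0]])
    Gi = np.array([[1.0, -eps], [0.0, 1.0]])
    S1 = np.array([[0.0, 1.0], [0.0, 0.0]])
    S2 = np.array([[1.0, 0.0], [0.0, 2.0]])
    return [eps * G @ S @ Gi for S in (S1, S2)]

def solve_T(eps, mu, n=2):
    # stack (M_k(eps)/eps) T - T (M_k(mu)/mu) = 0 for all k, take the nullspace
    rows = [np.kron(Me / eps, np.eye(n)) - np.kron(np.eye(n), (Mm / mu).T)
            for Me, Mm in zip(residues(eps), residues(mu))]
    _, s, Vh = np.linalg.svd(np.vstack(rows))
    return Vh[-1].reshape(n, n)      # nullspace vector, reshaped row-major

T = solve_T(0.3, 0.7)
# S_k = T^-1 (M_k(eps)/eps) T must then be independent of eps:
S_a = [np.linalg.solve(T, M / 0.3) @ T for M in residues(0.3)]
T2 = solve_T(0.9, 0.7)
S_b = [np.linalg.solve(T2, M / 0.9) @ T2 for M in residues(0.9)]
```

The nullspace here is one-dimensional because the commutant of {S1, S2} consists of scalar matrices only; the overall scale (and sign) ambiguity of T cancels in the similarity transformation.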
7 Using block-triangular form
The size n of the matrices M(ε, x) appearing in the differential equations for master integrals may be quite large (several tens). This may cause computational complications for the transformations that we need. Fortunately, the very process of the derivation of the differential equations, the IBP reduction, shows that M(ε, x) contains a lot of zeros. Namely, the integral J1 may enter the right-hand side of the differential equation for the integral J2 only if the graph corresponding to J1 can be obtained from that corresponding to J2 by contraction of some edges. In particular, this means that the matrix M(ε, x) has a block-triangular form, with diagonal blocks corresponding to the integrals with a given set of denominators (= integrals of a given sector).

Let us show that we can use this block-triangular form to essentially alleviate the process of reduction. Suppose from now on that we have already reduced all the diagonal blocks of M(ε, x) to ε-form. Basically, the idea of the further reduction is simple. In order to reduce the pole order of the off-diagonal elements, we redefine the integrals by adding some suitable combination of the simpler integrals, similar to the approach of refs. [13, 14]. Let us prove that it is always possible to make this redefinition so as to reduce the Poincaré rank at a given point to zero without changing either the block-triangular structure of the system or the Poincaré rank at the other points. Therefore, this gives one a tool to reduce the system to Fuchsian form.

³Note that any x-dependent rational transformation necessarily has at least one singular point and shifts the eigenvalues of the matrix residue at this point, thus spoiling the normalization. Normalization, in turn, necessarily holds for the ε-form.

Input : Matrix M(x) appearing in the right-hand side of the differential equation, having zero Poincaré rank at all singular points.
Output: Transformation matrix T(x) transforming M(x) to M̃(x), such that M̃(x) is normalized at as many points as possible.
1  begin
2      M̃ ← M(x)
3      T ← identity matrix
4      Detect apparent singularities using the transformations (3.32)
5      Select a singular point x0 which is not an apparent singularity. If there are only apparent singularities, let x0 be one of them.
6      while there is a pair of points which can be mutually balanced, or there is a point which can be balanced with x0 do
7          if there is a pair of singular points x1 and x2 which can be mutually balanced then
8              let x1 and x2 be mutually balanced via T0
9              M̃ ← T0^{-1} M̃ T0 − T0^{-1} ∂_x T0
10             T ← T T0
11         else
12             let x1 be balanced with x0 via T0
13             M̃ ← T0^{-1} M̃ T0 − T0^{-1} ∂_x T0
14             T ← T T0
15     return T

Algorithm 3. Normalization.
We prove by induction over sectors. Without loss of generality, we may assume that we are interested in reducing the Poincaré rank to zero at x = 0.⁴ Suppose J1 is a column-vector of the master integrals in a certain sector. By the induction hypothesis the differential system for the integrals in the subsectors already has zero Poincaré rank, and no master integral in the subsectors will be changed at this and later steps. We can write the differential system for J1 in the form

x∂_x J1 = εA(x)J1 + x^{-r}B(ε)J2 + . . . ,   (7.1)

where J2 is the column-vector of the master integrals in the most complex subsector entering the right-hand side of the equation with a singular coefficient, whose Laurent expansion starts with x^{-r}B(ε), r > 0. By assumption, A(x) is regular at x = 0. Naturally, the number of entries in J1 and J2 is not required to be the same, so, in general, B is a rectangular matrix. In eq. (7.1) the dots denote terms which are either nonsingular, or contain integrals in less complex sectors than the sector of J2, or contain the integrals J2 with coefficients less singular than x^{-r}. The differential equation for J2 has the form

x∂_x J2 = εC(x)J2 + . . . ,   (7.2)
where C(x) is regular at x = 0. The dots denote the contribution of the subsectors. Let us make the substitution

J1 = J̃1 + x^{-r}DJ2 ,   (7.3)

where D is a constant matrix. We have

x∂_x J̃1 = εA(x)J̃1 + x^{-r} [B(ε) + rD + εA(x)D − εDC(x)] J2 + . . . .   (7.4)
Therefore, in order to cancel the x^{-r} singularity, we need to find D such that

D + (ε/r) [A(0)D − DC(0)] = −B(ε)/r .   (7.5)

This is a system of linear equations for the matrix elements of D. This system obviously has a solution, since the linear operator acting on D in the left-hand side is arbitrarily close to the identity. Note that this line of reasoning does not work when the diagonal blocks are not in ε-form and/or when r = 0. Therefore, starting from the most complex integrals in the right-hand side and from the highest poles in their coefficients, we can eliminate the singular coefficients in the right-hand side step by step. Note that the substitution (7.3) corresponds to the transformation generated by

T = I + N/x^r ,   (7.6)

where N is a matrix whose nonzero elements coincide with the elements of D. It is easy to see that N² = 0, so that the inverse matrix has the form

T^{-1} = I − N/x^r .   (7.7)

Therefore, this transformation is regular everywhere except x = 0.
⁴In what follows, when speaking about the singularity and the Poincaré rank, we often omit the reference to x = 0 for brevity.
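Equation (7.5) is likewise a small linear problem for D. The following sketch (our own illustration, with invented A(0), C(0), and B(ε)) solves it by vectorization and verifies that the x^{-r} term in (7.4) cancels.

```python
import numpy as np

# Solve eq. (7.5),  D + (eps/r) [A(0) D - D C(0)] = -B(eps)/r,  for D, then
# check that the x^{-r} coefficient  B + r D + eps (A(0) D - D C(0))  cancels.
# All matrices below are invented for illustration.
eps, r, n = 0.25, 2, 2
A0 = np.array([[0.0, 1.0], [1.0, 0.0]])    # A(0): diagonal-block value at x = 0
C0 = np.array([[2.0, 0.0], [0.0, -1.0]])   # C(0): subsector block at x = 0
B  = np.array([[1.0, 2.0], [3.0, 4.0]])    # B(eps): off-diagonal coefficient

# vectorize: row-major vec(A0 D - D C0) = (kron(A0, I) - kron(I, C0^T)) vec(D)
L = np.eye(n * n) + (eps / r) * (np.kron(A0, np.eye(n)) - np.kron(np.eye(n), C0.T))
D = np.linalg.solve(L, (-B / r).flatten()).reshape(n, n)

residual = B + r * D + eps * (A0 @ D - D @ C0)   # the cancelled x^{-r} term
print(np.round(residual, 12))   # zero matrix: the pole is removed
```

For small ε the operator L is close to the identity, which is exactly the invertibility argument made in the text.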
Figure 1. Three-loop XX-box topology. Internal dashed lines denote massless propagators, p1² = p2² = p3² = p4² = 0, (p1 + p2)² = s, (p1 − p3)² = t.
Now we may assume that we have a Fuchsian block-triangular matrix M(ε, x) such that each diagonal block is in ε-form. Since the characteristic polynomial of this matrix is the product of those of the diagonal blocks, the eigenvalues of M(ε, x) are proportional to ε, and we have a system of the form (6.1). In order to find a transformation matrix T(ε) from (6.2) which, in addition, preserves the block-triangular form of M(x), we may nullify all elements of T(ε, μ) corresponding to zero elements of M(ε, x) before solving the system (6.3).
8 Example
Let us demonstrate in some detail how our method works for the master integrals of the topology shown in figure 1. There are 28 master integrals, shown in figure 2. We use an experimental version of LiteRed [30, 31] for the IBP reduction. Unfortunately, due to the complexity of the IBP reduction, we have not been able to obtain the starting differential equations for the 3 master integrals in the highest sector, shown in the last row, so we had to limit ourselves to the differential equations for the 25 master integrals J = (J1, . . . , J25)^T. They depend nontrivially on the dimensionless variable x = t/s. The differential system has the form (1.1), where the explicit form of the matrix M(ε, x) is not presented here to save space and to avoid clutter. There are three singular points of the system, x = 0, −1, ∞. Note that these points correspond to the conditions t = 0, u = 0, and s = 0, respectively. The nontrivial diagonal blocks of M have the indices {9, 10}, {11, 12}, {16, 17}, {18, 19}, {20, 21, 22}, {23, 24, 25}. Let us explain how our algorithm works on the example of the block spanned by the indices {23, 24, 25}. It has the form
M_{23−25}(ε, x) = A(ε)/x + B(ε)/(x + 1) ,   (8.1)

where A(ε) and B(ε) are 3 × 3 matrices whose entries are rational functions of ε.   (8.2)

Figure 2. Master integrals of the topology in figure 1. Integrals J26−J28 are determined with the help of Mint.

Since M_{23−25}(ε, x) already has a Fuchsian form, we skip the steps described in algorithm 2 and pass directly to algorithm 3. From now on, let us denote the matrix residue at infinity as C(ε),

C(ε) = −A(ε) − B(ε) .   (8.3)
The eigenvalues of the matrices A, B, and C are, respectively,

A: {3ε − 1, ε, 3ε} ,  B: {−3ε, −ε, 3ε − 1} ,  C: {−4ε − 1, 1, −2ε + 2} .   (8.4)

As it should be, the sum of all the eigenvalues is zero. The right and left eigenvectors of the matrices A and C, corresponding to the eigenvalues 3ε − 1 and −2ε + 2, respectively, are

u = (0, 1, 0)^T ,  v = (2(1 + 5ε), 1, 0) .   (8.5)

Since vu = 1 ≠ 0, the points x = 0 and x = ∞ can be mutually balanced via B(uv, 0, ∞|x).
After the transformation we have the same form (8.1), with new 3 × 3 residue matrices A(ε) and B(ε), again rational in ε.   (8.6)
The eigenvalues of A, B, and C are now

A: {3ε, ε, 3ε} ,  B: {−3ε, −ε, 3ε − 1} ,  C: {−4ε − 1, 1, −2ε + 1} .   (8.7)

Note that a pair of eigenvalues has been shifted towards the interval [−1/2, 1/2). Now the right and left eigenvectors of the matrices B and C, corresponding to the eigenvalues 3ε − 1 and −2ε + 1, respectively, are

u = (0, ε + 1, ε)^T ,  v = ((5ε + 1)(8ε + 3), 3ε − 1, 2ε + 2) .   (8.8)
Again vu ≠ 0; therefore, we can mutually balance x = −1 and x = ∞ via B(uv/(vu), −1, ∞|x). After the transformation we have the form (8.1), with new residue matrices A(ε) and B(ε).   (8.9)
The eigenvalues of A, B, and C are

A: {3ε, ε, 3ε} ,  B: {−3ε, −ε, 3ε} ,  C: {−4ε − 1, 1, −2ε} .   (8.10)

Now the system is normalized at x = 0 and x = −1, but not at x = ∞. In order to normalize the system at all points, we need to perform an intermediate transformation moving one unnormalized eigenvalue to another point. In particular, we may use the right and left eigenvectors of the matrices C and B, corresponding to the eigenvalues −4ε − 1 and −ε, respectively, which are

u = (0, ε + 1, 4ε + 1)^T ,  v = (16ε − 3, 1, 0) ,   (8.11)

and make the transformation B(uv/(vu), ∞, −1|x). After the transformation we have
the same form (8.1), with new residue matrices A(ε) and B(ε).   (8.12)
The eigenvalues of A, B, and C are

A: {3ε, ε, 3ε} ,  B: {−3ε, −ε − 1, 3ε} ,  C: {−4ε, 1, −2ε} .   (8.13)

Now it is easy to check that x = −1 and x = ∞ can be mutually balanced via B(uv/(vu), −1, ∞|x), where

u = (0, ε + 1, 4ε + 1)^T ,  v = (2(6ε + 1), 1, 0)   (8.14)

are the corresponding eigenvectors of B and C. After that we have
the form (8.1), with new residue matrices A(ε) and B(ε),   (8.15)
with the eigenvalues

A: {3ε, ε, 3ε} ,  B: {−3ε, −ε, 3ε} ,  C: {−4ε, 0, −2ε} .   (8.16)

At this stage we have succeeded in normalizing all the matrix residues A, B, and C. Finally, we solve the system of linear equations

[A(ε)/ε] T = T [A(μ)/μ] ,  [B(ε)/ε] T = T [B(μ)/μ]   (8.17)
with respect to the matrix elements of T. We obtain

T(ε, μ) =
( (ε + 1)μ(5μ + 1)            0                   0
  2(ε + 1)(ε − μ)(5μ + 1)     ε(ε + 1)(5μ + 1)    0
  (7ε + 1)(ε − μ)(5μ + 1)     ε(ε − μ)            ε(5ε + 1)(μ + 1) )   (8.18)

up to an arbitrary overall factor. We can now set μ to any constant number provided T remains invertible (in particular, we can not put μ equal to 0, −1, or −1/5). We choose μ = 1. Making the transformation with T(ε, 1), we finally obtain the desired ε-form:
M_{23−25}(ε, x) = ε S̃(x) ,   (8.19)

where S̃(x) is a 3 × 3 matrix with simple poles at x = 0 and x = −1 whose residues are constant matrices with simple rational entries. At this stage one may want to make yet another transformation with a constant matrix, which reduces one of the matrix residues to diagonal form. E.g., we can take the matrix transforming A to diagonal form,

T =
( 1   1   1
  24  12  12
  3   15  9 ) .   (8.20)

The resulting matrix has a somewhat simpler form.   (8.21)
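As a quick consistency check on (8.18): assuming the lower-triangular form shown there, the determinant of T(ε, μ) is the product of the diagonal entries, so the excluded values μ = 0, −1, −1/5 are exactly the zeros of det T(ε, μ), and at μ = ε the matrix degenerates into a multiple of the identity, as it must for T(ε, μ) = T(ε)T^{-1}(μ). A small numerical sketch (ours; the matrix entries are the assumed form of (8.18)):

```python
import numpy as np

def T(eps, mu):
    # assumed lower-triangular form of (8.18), up to an overall factor
    return np.array([
        [(eps + 1) * mu * (5 * mu + 1), 0.0, 0.0],
        [2 * (eps + 1) * (eps - mu) * (5 * mu + 1), eps * (eps + 1) * (5 * mu + 1), 0.0],
        [(7 * eps + 1) * (eps - mu) * (5 * mu + 1), eps * (eps - mu), eps * (5 * eps + 1) * (mu + 1)],
    ])

e = 0.3
assert abs(np.linalg.det(T(e, 1.0))) > 1e-6      # mu = 1 is an allowed choice
for mu in (0.0, -1.0, -0.2):                     # the excluded values of mu
    assert abs(np.linalg.det(T(e, mu))) < 1e-12
# at mu = eps, T degenerates into a multiple of the identity matrix
assert np.allclose(T(e, e), T(e, e)[0, 0] * np.eye(3))
```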
In a similar way we reduce all the diagonal blocks to ε-form. Finally, using the approach of section 7, we obtain the system

∂_x J̃ = ε [ S1/x + S2/(x + 1) ] J̃ ,   (8.22)

where S1 and S2 are presented in the appendix. To avoid clutter, we do not present here the transformation matrix T. Both this matrix and the original form of the system are available upon request from the author.
9 Conclusion

We have presented a practical algorithm for the reduction of a differential system to ε-form. The main tool of our approach is the transformation (2.6), which we call a balance. We have shown how to construct a balance which does not increase the Poincaré rank of the system at any point of the extended complex plane. Moreover, we have shown how to construct the balances which can be used to lower the Poincaré rank p at a point with p > 0 and to normalize the eigenvalues of the matrix residue at a point with p = 0. The reduction to ε-form can be divided into three stages:
1. Reduction to Fuchsian form, algorithm 2.
2. Normalizing eigenvalues, algorithm 3.
3. Factoring out [epsilon1], section 6.
We have also shown how to use the block-triangular form of the system to alleviate the computation. Namely, we first apply the above three stages to each diagonal block and find the corresponding matrices T_i transforming each block to ε-form. After the block-diagonal transformation T = diag(T_1, T_2, . . .), the diagonal blocks of the transformed system are in ε-form. Then we use the prescriptions of section 7 and, finally, factor out ε from the whole system. The latter can be done in such a way as to preserve the block-triangular structure of the system, as explained at the end of section 7.
There may be obstructions to the construction of the appropriate balance due to the orthogonality of the left and right eigenvectors. However, the appearance of obstructions is expected, due to the negative solution of the 21st Hilbert problem by Bolibrukh [27]. For a Fuchsian system with normalized eigenvalues we have shown how to find the constant transformation reducing the system to ε-form. We have successfully applied our method to the reduction of several differential systems. We have also checked that for the case of the three-loop all-massive sunrise propagator master integrals the obstruction to the reduction appears. This obstruction naturally corresponds to the fact that these master integrals can not be expressed in terms of harmonic polylogarithms [32].
The example presented in section 8 did not require the reduction of the system to Fuchsian form, as described by algorithm 2, since all the diagonal blocks were already in Fuchsian form. Though this may be considered a poor choice of example, we underline that the reduction to Fuchsian form can, in principle, be done solely by means of the Barkatou-Pflügel algorithm [24, 25]. Thus, a demonstration of the viability of our algorithm at this stage is not crucial. On the other hand, the system (8.1) is not of the form assumed in refs. [14, 21] and, therefore, its reduction to ε-form with the tools developed in the present paper seems to be quite instructive.
Finally, we note that, though it is possible to perform the reduction manually, it is very desirable to automate the process as much as possible. A dedicated Mathematica package is currently being developed and will be presented elsewhere.
Acknowledgments
The work has been supported in part by the Ministry of Education and Science of the Russian Federation and by RFBR grants nos. 13-02-01023 and 15-02-07893. I am grateful to Thomas Gehrmann, Johannes Henn, and Andrei Pomeransky for their interest in this work and for useful discussions. Special thanks go to Andrei Pomeransky for pointing out ref. [28], which triggered the idea of using balances for the reduction. I am grateful to Vladimir Smirnov for pointing out some typos in the preliminary version of the paper. I appreciate the kind hospitality of the Physics Department of Zürich University, where this work was finished.
Note added in proof. After this paper was finished, lecture notes on the differential equations method by Henn [33] were published. These lecture notes contain an extended review of the approach of ref. [9]. In particular, the choice of integrals with homogeneous transcendental weight is discussed in detail.
A The form of matrices S1 and S2.
The matrices S1 and S2 are sparse 25 × 25 matrices with constant rational entries: most entries are zero, and the nonzero entries are integers or simple fractions.
Open Access. This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited.
References
[1] A.V. Kotikov, Differential equations method: new technique for massive Feynman diagrams calculation, Phys. Lett. B 254 (1991) 158 [INSPIRE].
[2] A.V. Kotikov, Differential equations method: the calculation of vertex type Feynman diagrams, Phys. Lett. B 259 (1991) 314 [INSPIRE].
[3] A.V. Kotikov, Differential equation method: the calculation of N point Feynman diagrams, Phys. Lett. B 267 (1991) 123 [INSPIRE].
[4] E. Remiddi, Differential equations for Feynman graph amplitudes, Nuovo Cim. A 110 (1997) 1435 [hep-th/9711188] [INSPIRE].
[5] T. Gehrmann and E. Remiddi, Differential equations for two-loop four-point functions, Nucl. Phys. B 580 (2000) 485 [hep-ph/9912329] [INSPIRE].
[6] F.V. Tkachov, A theorem on analytical calculability of four-loop renormalization group functions, Phys. Lett. B 100 (1981) 65 [INSPIRE].
[7] K.G. Chetyrkin and F.V. Tkachov, Integration by parts: the algorithm to calculate β-functions in 4 loops, Nucl. Phys. B 192 (1981) 159 [INSPIRE].
[8] S. Laporta, High-precision calculation of multiloop Feynman integrals by difference equations, Int. J. Mod. Phys. A 15 (2000) 5087 [hep-ph/0102033] [INSPIRE].
[9] J.M. Henn, Multiloop integrals in dimensional regularization made simple, Phys. Rev. Lett. 110 (2013) 251601 [arXiv:1304.1806] [INSPIRE].
[10] J.M. Henn, A.V. Smirnov and V.A. Smirnov, Analytic results for planar three-loop four-point integrals from a Knizhnik-Zamolodchikov equation, JHEP 07 (2013) 128 [arXiv:1306.2799] [INSPIRE].
[11] J.M. Henn and V.A. Smirnov, Analytic results for two-loop master integrals for Bhabha scattering I, JHEP 11 (2013) 041 [arXiv:1307.4083] [INSPIRE].
[12] J.M. Henn, A.V. Smirnov and V.A. Smirnov, Evaluating single-scale and/or non-planar diagrams by differential equations, JHEP 03 (2014) 088 [arXiv:1312.2588] [INSPIRE].
[13] S. Caron-Huot and J.M. Henn, Iterative structure of finite loop integrals, JHEP 06 (2014) 114 [arXiv:1404.2922] [INSPIRE].
[14] T. Gehrmann, A. von Manteuffel, L. Tancredi and E. Weihs, The two-loop master integrals for qq̄ → V V , JHEP 06 (2014) 032 [arXiv:1404.4853] [INSPIRE].
[15] A. Grozin, J.M. Henn, G.P. Korchemsky and P. Marquard, Three-loop cusp anomalous dimension in QCD, Phys. Rev. Lett. 114 (2015) 062006 [arXiv:1409.0023] [INSPIRE].
[16] S. Di Vita, P. Mastrolia, U. Schubert and V. Yundin, Three-loop master integrals for ladder-box diagrams with one massive leg, JHEP 09 (2014) 148 [arXiv:1408.3107] [INSPIRE].
[17] M. Höschele, J. Hoff and T. Ueda, Adequate bases of phase-space master integrals for gg → h at NNLO and beyond, JHEP 09 (2014) 116 [arXiv:1407.4049] [INSPIRE].
[18] Y. Li, A. von Manteuffel, R.M. Schabinger and H.X. Zhu, N3LO Higgs boson and Drell-Yan production at threshold: the one-loop two-emission contribution, Phys. Rev. D 90 (2014) 053006 [arXiv:1404.5839] [INSPIRE].
[19] A. von Manteuffel, R.M. Schabinger and H.X. Zhu, The two-loop soft function for heavy quark pair production at future linear colliders, arXiv:1408.5134 [INSPIRE].
[20] G. Bell and T. Huber, Master integrals for the two-loop penguin contribution in non-leptonic B-decays, JHEP 12 (2014) 129 [arXiv:1410.2804] [INSPIRE].
[21] M. Argeri et al., Magnus and Dyson series for master integrals, JHEP 03 (2014) 082 [arXiv:1401.2979] [INSPIRE].
[22] H. Röhrl, Holomorphic fiber bundles over Riemann surfaces, Bull. Am. Math. Soc. 68 (1962) 125.
[23] H. Röhrl, Das Riemann-Hilbertsche Problem der Theorie der linearen Differentialgleichungen, Math. Ann. 133 (1957) 1 [INSPIRE].
[24] M.A. Barkatou and E. Pflügel, On the Moser- and super-reduction algorithms of systems of linear differential equations and their complexity, J. Symbolic Comput. 44 (2009) 1017.
[25] M.A. Barkatou and E. Pflügel, Computing super-irreducible forms of systems of linear differential equations via Moser-reduction: a new approach, in proceedings of the 2007 International Symposium on Symbolic and Algebraic Computation, Waterloo, Ontario, Canada, July 29-August 1 2007, ACM, New York U.S.A. (2007), pp. 1-8.
[26] J. Moser, The order of a singularity in Fuchs' theory, Math. Z. 72 (1960) 379.
[27] A.A. Bolibrukh, The Riemann-Hilbert problem on the complex projective line, Mat. Zametki 46 (1989) 118.
[28] V. Zakharov, S. Manakov, S. Novikov and L. Pitaevsky, Soliton theory: the inverse problem method, Nauka, Moscow Russia (1980).
[29] I. Gohberg, P. Lancaster and L. Rodman, Invariant subspaces of matrices with applications, Classics in Applied Mathematics 51, SIAM (1986).
[30] R.N. Lee, LiteRed 1.4: a powerful tool for reduction of multiloop integrals, J. Phys. Conf. Ser. 523 (2014) 012059 [arXiv:1310.1145] [INSPIRE].
[31] R.N. Lee, Presenting LiteRed: a tool for the Loop InTEgrals REDuction, arXiv:1212.2685 [INSPIRE].
[32] S. Bloch, M. Kerr and P. Vanhove, A Feynman integral via higher normal functions, arXiv:1406.2664 [INSPIRE].
[33] J.M. Henn, Lectures on differential equations for Feynman integrals, J. Phys. A 48 (2015) 153001 [arXiv:1412.2296] [INSPIRE].
SISSA, Trieste, Italy 2015