Daya K. Nagar (1), Raúl Alejandro Moran-Vasquez (1), and Arjun K. Gupta (2)
Academic Editor: Biren N. Mandal
(1) Instituto de Matemáticas, Universidad de Antioquia, Calle 67, No. 53-108, Medellín, Colombia
(2) Department of Mathematics and Statistics, Bowling Green State University, Bowling Green, OH 43403-0221, USA
Received 7 July 2014; Accepted 22 December 2014; Published 20 January 2015
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
1. Introduction
The classical beta function, denoted by $B(a,b)$, is defined (Luke [1]) by the integral
$$B(a,b)=\int_0^1 t^{a-1}(1-t)^{b-1}\,dt,\quad \operatorname{Re}(a)>0,\ \operatorname{Re}(b)>0. \tag{1}$$
Based on the beta function, the Gauss hypergeometric function, denoted by ${}_2F_1(a,b;c;z)$, and the confluent hypergeometric function, denoted by ${}_1F_1(b;c;z)$, for $\operatorname{Re}(c)>\operatorname{Re}(b)>0$, are defined as (Luke [1])
$${}_2F_1(a,b;c;z)=\frac{1}{B(b,c-b)}\int_0^1 t^{b-1}(1-t)^{c-b-1}(1-zt)^{-a}\,dt,\quad |\arg(1-z)|<\pi, \tag{2}$$
$${}_1F_1(b;c;z)=\frac{1}{B(b,c-b)}\int_0^1 t^{b-1}(1-t)^{c-b-1}\exp(zt)\,dt. \tag{3}$$
Further, using the series expansions of $(1-zt)^{-a}$ and $\exp(zt)$ in (2) and (3), respectively, series representations of the hypergeometric functions ${}_2F_1$ and ${}_1F_1$, for $|z|<1$, are obtained as
$${}_2F_1(a,b;c;z)=\sum_{n=0}^{\infty}\frac{(a)_n(b)_n}{(c)_n}\frac{z^n}{n!}, \tag{4}$$
$${}_1F_1(b;c;z)=\sum_{n=0}^{\infty}\frac{(b)_n}{(c)_n}\frac{z^n}{n!}, \tag{5}$$
respectively, where $(a)_n=a(a+1)\cdots(a+n-1)$ denotes the Pochhammer symbol.
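As a quick numerical illustration (not part of the paper; assumes SciPy is available), the Euler integral representation of the Gauss hypergeometric function can be compared against SciPy's built-in implementation:

```python
# Numerical sanity check: the Euler integral representation of 2F1
# matches SciPy's hyp2f1 when Re(c) > Re(b) > 0.
from scipy.integrate import quad
from scipy.special import beta, hyp2f1

def gauss_2f1_integral(a, b, c, z):
    # (1 / B(b, c-b)) * integral_0^1 t^{b-1} (1-t)^{c-b-1} (1 - z t)^{-a} dt
    integrand = lambda t: t**(b - 1) * (1 - t)**(c - b - 1) * (1 - z * t)**(-a)
    value, _ = quad(integrand, 0.0, 1.0)
    return value / beta(b, c - b)

assert abs(gauss_2f1_integral(1.5, 2.0, 4.5, 0.3) - hyp2f1(1.5, 2.0, 4.5, 0.3)) < 1e-7
```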
From the confluent hypergeometric function ${}_1F_1$, the Whittaker function (Whittaker and Watson [2]) $M_{\kappa,\mu}(z)$ is defined as
$$M_{\kappa,\mu}(z)=z^{\mu+1/2}\exp\left(-\frac{z}{2}\right){}_1F_1\left(\mu-\kappa+\frac{1}{2};2\mu+1;z\right), \tag{6}$$
where $\operatorname{Re}(\mu\pm\kappa)+1/2>0$. Most of the properties and integral representations of the Whittaker function can be proved from those of the confluent hypergeometric function.
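The defining relation between the Whittaker function and ${}_1F_1$ is easy to verify numerically. The sketch below (illustration only) assumes the standard Whittaker-and-Watson definition and that the mpmath library, with its built-in `whitm`, is available:

```python
# Check: M_{kappa,mu}(z) = exp(-z/2) * z^{mu + 1/2}
#                          * 1F1(mu - kappa + 1/2; 2 mu + 1; z).
import mpmath as mp

def whittaker_M(kappa, mu, z):
    half = mp.mpf(1) / 2
    return mp.exp(-z / 2) * z**(mu + half) * mp.hyp1f1(mu - kappa + half, 2 * mu + 1, z)

k, m, z = mp.mpf('0.3'), mp.mpf('0.7'), mp.mpf('1.2')
assert mp.almosteq(whittaker_M(k, m, z), mp.whitm(k, m, z))
```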
In 1997, Chaudhry et al. [3] extended the classical beta function to the whole complex plane by introducing in the integrand of (1) the exponential factor $\exp[-\sigma/t(1-t)]$ with $\operatorname{Re}(\sigma)>0$. Thus, the extended beta function is defined as
$$B(a,b;\sigma)=\int_0^1 t^{a-1}(1-t)^{b-1}\exp\left(-\frac{\sigma}{t(1-t)}\right)dt, \tag{7}$$
where $\operatorname{Re}(\sigma)>0$. If we take $\sigma=0$ in (7), then for $\operatorname{Re}(a)>0$ and $\operatorname{Re}(b)>0$ we have $B(a,b;0)=B(a,b)$. Further, replacing $t$ by $1-t$ in (7), one can see that $B(a,b;\sigma)=B(b,a;\sigma)$. The rationale and justification for introducing this function are given in Chaudhry et al. [3], where several properties and a statistical application have also been studied. Miller [4] studied this function further and gave several additional results.
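A numerical sketch of (7) and of the two reductions just mentioned (SciPy-based illustration; the function name is ours, not the paper's):

```python
# Extended beta function of Chaudhry et al.:
#   B(a, b; sigma) = integral_0^1 t^{a-1} (1-t)^{b-1} exp(-sigma / (t (1-t))) dt.
import math
from scipy.integrate import quad
from scipy.special import beta

def extended_beta(a, b, sigma):
    integrand = lambda t: t**(a - 1) * (1 - t)**(b - 1) * math.exp(-sigma / (t * (1 - t)))
    value, _ = quad(integrand, 0.0, 1.0)
    return value

# Symmetry: B(a, b; sigma) = B(b, a; sigma).
assert abs(extended_beta(1.3, 2.4, 0.5) - extended_beta(2.4, 1.3, 0.5)) < 1e-8
# sigma = 0 recovers the classical beta function (for Re(a), Re(b) > 0).
assert abs(extended_beta(1.3, 2.4, 0.0) - beta(1.3, 2.4)) < 1e-8
```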
In 2004, Chaudhry et al. [5] presented definitions of the extended Gauss hypergeometric function and the extended confluent hypergeometric function, denoted by $F(a,b;c;z;\sigma)$ and $\Phi(b;c;z;\sigma)$, respectively. These functions were introduced by considering the extended beta function (7) instead of the beta function (1) in the general terms of the series (4) and (5). They defined these functions as
$$F(a,b;c;z;\sigma)=\sum_{n=0}^{\infty}(a)_n\frac{B(b+n,c-b;\sigma)}{B(b,c-b)}\frac{z^n}{n!},\quad |z|<1, \tag{8}$$
$$\Phi(b;c;z;\sigma)=\sum_{n=0}^{\infty}\frac{B(b+n,c-b;\sigma)}{B(b,c-b)}\frac{z^n}{n!}. \tag{9}$$
Using the integral representation (7) of the extended beta function in (8) and (9), the integral representations of the extended hypergeometric functions, for $\operatorname{Re}(\sigma)>0$ and $|\arg(1-z)|<\pi$, are obtained as
$$F(a,b;c;z;\sigma)=\frac{1}{B(b,c-b)}\int_0^1 t^{b-1}(1-t)^{c-b-1}(1-zt)^{-a}\exp\left(-\frac{\sigma}{t(1-t)}\right)dt, \tag{10}$$
$$\Phi(b;c;z;\sigma)=\frac{1}{B(b,c-b)}\int_0^1 t^{b-1}(1-t)^{c-b-1}\exp\left(zt-\frac{\sigma}{t(1-t)}\right)dt. \tag{11}$$
Substituting $\sigma=0$ in (8) or (10), we have $F(a,b;c;z;0)={}_2F_1(a,b;c;z)$; that is, the classical Gauss hypergeometric function is a special case of the extended Gauss hypergeometric function. Similarly, taking $\sigma=0$ in (9) or (11) gives $\Phi(b;c;z;0)={}_1F_1(b;c;z)$, which means that the classical confluent hypergeometric function is a special case of the extended confluent hypergeometric function. Chaudhry et al. [5] showed that the extended hypergeometric functions are related to the extended beta, Bessel, and Whittaker functions and also gave several alternative integral representations.
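The reduction $F(a,b;c;z;0)={}_2F_1(a,b;c;z)$ can likewise be checked from the integral representation (10). A hedged numerical sketch (SciPy assumed available; names are ours):

```python
# Extended Gauss hypergeometric function via its integral representation:
#   F(a, b; c; z; sigma) = (1 / B(b, c-b)) * integral_0^1 t^{b-1} (1-t)^{c-b-1}
#                          * (1 - z t)^{-a} * exp(-sigma / (t (1-t))) dt.
import math
from scipy.integrate import quad
from scipy.special import beta, hyp2f1

def extended_2f1(a, b, c, z, sigma):
    integrand = lambda t: (t**(b - 1) * (1 - t)**(c - b - 1) * (1 - z * t)**(-a)
                           * math.exp(-sigma / (t * (1 - t))))
    value, _ = quad(integrand, 0.0, 1.0)
    return value / beta(b, c - b)

# sigma = 0 reduces to the classical 2F1.
assert abs(extended_2f1(1.5, 2.0, 4.5, 0.3, 0.0) - hyp2f1(1.5, 2.0, 4.5, 0.3)) < 1e-7
# The exponential factor damps the integrand, so F decreases as sigma grows.
assert extended_2f1(1.5, 2.0, 4.5, 0.3, 0.5) < extended_2f1(1.5, 2.0, 4.5, 0.3, 0.0)
```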
The classical functions, such as the gamma, beta, confluent hypergeometric, Gauss hypergeometric, Bessel, and Whittaker functions, have been generalized to the matrix case and their properties have been studied extensively; see, for example, Butler and Wood [6-8], Herz [9], Constantine [10], James [11], Muirhead [12], and Gupta and Nagar [13]. Many distributions of random matrices and of their functions, such as the determinant and the trace, as well as moments of test statistics, can be expressed in terms of hypergeometric functions of matrix argument. For some recent work, the reader is referred to Bekker et al. [14, 15], Bekker et al. [16], and Gupta and Nagar [17]. Recently, Nagar et al. [18] defined and studied the extended beta function of matrix argument.
The extended Gauss hypergeometric function and the extended confluent hypergeometric function have not yet been generalized to the matrix case; the main objective of this work is therefore to define these generalizations, give various integral representations, study their properties, and establish their relationships with other special functions of matrix argument.
This paper is divided into eight sections. Section 2 collects well-known definitions and results on matrix algebra, the multivariate gamma function, the multivariate beta function, and special functions. In Section 3, the extended Gauss hypergeometric function of matrix argument is defined and its properties are studied. Section 4 deals with the extended confluent hypergeometric function of matrix argument, and Section 5 defines the extended Whittaker function of matrix argument. Section 6 is devoted to several integrals involving these newly defined functions; the results contained in this section show the relationship of these functions with some known special functions. Finally, Sections 7 and 8 give a number of matrix variate distributions.
2. Some Known Definitions and Results
This section provides definitions and important properties of some classical special functions that are critical to the development of this work.
Replacing the confluent hypergeometric function that appears in (6) by its integral representation (3), we obtain the integral representation of the Whittaker function as
$$M_{\kappa,\mu}(z)=\frac{z^{\mu+1/2}\exp(-z/2)}{B(\mu-\kappa+1/2,\mu+\kappa+1/2)}\int_0^1 t^{\mu-\kappa-1/2}(1-t)^{\mu+\kappa-1/2}\exp(zt)\,dt. \tag{12}$$
Another integral representation of $M_{\kappa,\mu}(z)$ is obtained by substituting $t=1-u$ in (12), to get
$$M_{\kappa,\mu}(z)=\frac{z^{\mu+1/2}\exp(z/2)}{B(\mu-\kappa+1/2,\mu+\kappa+1/2)}\int_0^1 u^{\mu+\kappa-1/2}(1-u)^{\mu-\kappa-1/2}\exp(-zu)\,du. \tag{13}$$
Further, application of Kummer's transformation, namely,
$${}_1F_1(b;c;z)=\exp(z)\,{}_1F_1(c-b;c;-z), \tag{14}$$
in (6) yields
$$M_{\kappa,\mu}(z)=z^{\mu+1/2}\exp\left(\frac{z}{2}\right){}_1F_1\left(\mu+\kappa+\frac{1}{2};2\mu+1;-z\right). \tag{15}$$
Let $A$ be a $p\times p$ matrix of real or complex numbers. Then, $A'$ denotes the transpose of $A$; $\operatorname{tr}(A)$ the trace of $A$; $\operatorname{etr}(A)=\exp(\operatorname{tr}(A))$; $\det(A)$ the determinant of $A$; $A\geq 0$ means that $A$ is symmetric positive semidefinite; $A>0$ means that $A$ is symmetric positive definite; $0<A<I_p$ means that both $A$ and $I_p-A$ are symmetric positive definite; and $A^{1/2}$ denotes the unique positive definite square root of $A>0$.
Several generalizations of Euler's gamma function are available in the scientific literature. The multivariate gamma function, which is frequently used in multivariate statistical analysis, is defined by (Ingham [19] and Siegel [20])
$$\Gamma_p(a)=\int_{X>0}\operatorname{etr}(-X)\det(X)^{a-(p+1)/2}\,dX, \tag{16}$$
where $\operatorname{Re}(a)>(p-1)/2$ and the integration is carried out over $p\times p$ symmetric positive definite matrices. By evaluating the above integral, it is easy to see that
$$\Gamma_p(a)=\pi^{p(p-1)/4}\prod_{i=1}^{p}\Gamma\left(a-\frac{i-1}{2}\right). \tag{17}$$
Let $Z$ be a $p\times p$ symmetric positive definite matrix and make the transformation $X=Z^{-1/2}YZ^{-1/2}$, where $Z^{1/2}$ is the positive definite square root of $Z$, with the Jacobian $\det(Z)^{-(p+1)/2}$. Then,
$$\int_{X>0}\operatorname{etr}(-ZX)\det(X)^{a-(p+1)/2}\,dX=\Gamma_p(a)\det(Z)^{-a}. \tag{18}$$
The above result also holds for complex symmetric $Z$ with $\operatorname{Re}(Z)>0$ by analytic continuation. The multivariate generalization of the beta function is given by
$$B_p(a,b)=\int_{0<X<I_p}\det(X)^{a-(p+1)/2}\det(I_p-X)^{b-(p+1)/2}\,dX=\frac{\Gamma_p(a)\Gamma_p(b)}{\Gamma_p(a+b)}, \tag{19}$$
where $\operatorname{Re}(a)>(p-1)/2$ and $\operatorname{Re}(b)>(p-1)/2$.
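The product form of the multivariate gamma function is straightforward to evaluate numerically; a small sketch (function name ours), stabilized via log-gamma:

```python
# Multivariate gamma: Gamma_p(a) = pi^{p(p-1)/4} * prod_{i=1}^p Gamma(a - (i-1)/2).
import math

def multivariate_gamma(p, a):
    log_value = (p * (p - 1) / 4.0) * math.log(math.pi)
    log_value += sum(math.lgamma(a - (i - 1) / 2.0) for i in range(1, p + 1))
    return math.exp(log_value)

# p = 1 recovers Euler's gamma function.
assert abs(multivariate_gamma(1, 2.5) - math.gamma(2.5)) < 1e-12
# p = 2: Gamma_2(a) = sqrt(pi) * Gamma(a) * Gamma(a - 1/2).
assert abs(multivariate_gamma(2, 3.0)
           - math.sqrt(math.pi) * math.gamma(3.0) * math.gamma(2.5)) < 1e-10
```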
Siegel [20] established the identity
$$B_p(a,b)=\int_{X>0}\det(X)^{a-(p+1)/2}\det(I_p+X)^{-(a+b)}\,dX, \tag{20}$$
which can be derived from (19) by using the matrix transformation $X=(I_p+Y)^{-1}Y$ with the Jacobian $\det(I_p+Y)^{-(p+1)}$.
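For $p=1$ this identity reduces to the familiar $\int_0^\infty x^{a-1}(1+x)^{-(a+b)}\,dx=B(a,b)$, which is easy to confirm by quadrature (illustration only; SciPy assumed available):

```python
# Scalar (p = 1) case of Siegel's identity, checked numerically.
from scipy.integrate import quad
from scipy.special import beta

def type2_beta_integral(a, b):
    value, _ = quad(lambda x: x**(a - 1) * (1 + x)**(-(a + b)), 0.0, float('inf'))
    return value

assert abs(type2_beta_integral(1.7, 2.6) - beta(1.7, 2.6)) < 1e-8
```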
The type 3 Bessel function of Herz (Herz [9, p. 517, p. 506]), [figure omitted; refer to PDF] , of [figure omitted; refer to PDF] symmetric positive definite matrix argument [figure omitted; refer to PDF] is defined by [figure omitted; refer to PDF]
The Gauss hypergeometric function of $p\times p$ symmetric matrix argument $Z$, denoted by ${}_2F_1(a,b;c;Z)$, is defined by
$${}_2F_1(a,b;c;Z)=\frac{1}{B_p(b,c-b)}\int_{0<X<I_p}\det(X)^{b-(p+1)/2}\det(I_p-X)^{c-b-(p+1)/2}\det(I_p-XZ)^{-a}\,dX, \tag{22}$$
where $\operatorname{Re}(b)>(p-1)/2$, $\operatorname{Re}(c-b)>(p-1)/2$, and $Z<I_p$. The confluent hypergeometric function of $p\times p$ symmetric matrix argument $Z$, denoted by ${}_1F_1(b;c;Z)$, is defined by
$${}_1F_1(b;c;Z)=\frac{1}{B_p(b,c-b)}\int_{0<X<I_p}\det(X)^{b-(p+1)/2}\det(I_p-X)^{c-b-(p+1)/2}\operatorname{etr}(XZ)\,dX, \tag{23}$$
where $\operatorname{Re}(b)>(p-1)/2$ and $\operatorname{Re}(c-b)>(p-1)/2$. If we make the transformation $X=(I_p+Y)^{-1}Y$ in (22) and (23) with the Jacobian $\det(I_p+Y)^{-(p+1)}$, we obtain alternative integral representations for ${}_2F_1$ and ${}_1F_1$ as
$${}_2F_1(a,b;c;Z)=\frac{1}{B_p(b,c-b)}\int_{Y>0}\det(Y)^{b-(p+1)/2}\det(I_p+Y)^{a-c}\det\left(I_p+Y(I_p-Z)\right)^{-a}dY, \tag{24}$$
$${}_1F_1(b;c;Z)=\frac{1}{B_p(b,c-b)}\int_{Y>0}\det(Y)^{b-(p+1)/2}\det(I_p+Y)^{-c}\operatorname{etr}\left(Y(I_p+Y)^{-1}Z\right)dY. \tag{25}$$
Further, substituting $Y=W^{-1}$ with the Jacobian $\det(W)^{-(p+1)}$ in (24), we get another interesting integral form of ${}_2F_1$ as
$${}_2F_1(a,b;c;Z)=\frac{1}{B_p(b,c-b)}\int_{W>0}\det(W)^{c-b-(p+1)/2}\det(I_p+W)^{a-c}\det(I_p+W-Z)^{-a}\,dW. \tag{26}$$
Putting $Z=I_p$ in (22) and evaluating the resulting integral using (19), one obtains
$${}_2F_1(a,b;c;I_p)=\frac{\Gamma_p(c)\Gamma_p(c-a-b)}{\Gamma_p(c-a)\Gamma_p(c-b)}, \tag{27}$$
where $\operatorname{Re}(c-a-b)>(p-1)/2$. Putting $Z=0$ in (26) and using (20), one can easily show that
$${}_2F_1(a,b;c;0)=1. \tag{28}$$
Transforming $X\to I_p-X$, (23) becomes
$${}_1F_1(b;c;Z)=\frac{\operatorname{etr}(Z)}{B_p(b,c-b)}\int_{0<X<I_p}\det(X)^{c-b-(p+1)/2}\det(I_p-X)^{b-(p+1)/2}\operatorname{etr}(-XZ)\,dX. \tag{29}$$
A comparison of (23) and (29) leads to the well-known Kummer's relation (Herz [9, Eq. 2.8, p. 488]):
$${}_1F_1(b;c;Z)=\operatorname{etr}(Z)\,{}_1F_1(c-b;c;-Z). \tag{30}$$
Further, by using the transformation [figure omitted; refer to PDF], (23) can be written as [figure omitted; refer to PDF]
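In the scalar case $p=1$, the Gauss summation value just mentioned reduces to ${}_2F_1(a,b;c;1)=\Gamma(c)\Gamma(c-a-b)/[\Gamma(c-a)\Gamma(c-b)]$ for $c-a-b>0$; a quick SciPy check (illustration only):

```python
# Gauss summation theorem for p = 1, via SciPy's hyp2f1 at z = 1.
from math import gamma
from scipy.special import hyp2f1

def gauss_sum(a, b, c):
    # Gamma(c) Gamma(c-a-b) / (Gamma(c-a) Gamma(c-b)), valid for c - a - b > 0
    return gamma(c) * gamma(c - a - b) / (gamma(c - a) * gamma(c - b))

a, b, c = 0.8, 1.1, 3.5  # c - a - b = 1.6 > 0
assert abs(hyp2f1(a, b, c, 1.0) - gauss_sum(a, b, c)) < 1e-10
```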
For properties and further results on these functions, the reader is referred to Constantine [10], James [11], Muirhead [12], and Gupta and Nagar [13]. The numerical computation of a hypergeometric function of matrix argument is very difficult; however, some numerical methods have been proposed in recent years, see Hashiguchi et al. [21] and Koev and Edelman [22].
In 1968, Abdi [23] defined the Whittaker function of matrix argument, expressing it in terms of a confluent hypergeometric function of matrix argument [figure omitted; refer to PDF] as [figure omitted; refer to PDF] where [figure omitted; refer to PDF]. He also studied several properties and integral representations of this function. It is apparent that, by using different integral representations of [figure omitted; refer to PDF] in (32), a variety of integral representations for [figure omitted; refer to PDF] can be obtained. For example, using (31) in (32), we get [figure omitted; refer to PDF]
Next, we give the definition and properties of the extended beta function of matrix argument due to Nagar et al. [18].
Definition 1.
The extended matrix variate beta function, denoted by $B_p(a,b;\Sigma)$, is defined as
$$B_p(a,b;\Sigma)=\int_{0<X<I_p}\det(X)^{a-(p+1)/2}\det(I_p-X)^{b-(p+1)/2}\operatorname{etr}\left[-\Sigma\left(X^{-1}+(I_p-X)^{-1}\right)\right]dX, \tag{34}$$
where $a$ and $b$ are arbitrary complex numbers and $\Sigma>0$.
From the definition, it is apparent that the function $B_p(a,b;\Sigma)$ is invariant under the transformation $\Sigma\to H\Sigma H'$, $H\in O(p)$; thereby, $B_p(a,b;\Sigma)$ is a function of the eigenvalues of the matrix $\Sigma$. If we take $\Sigma=0$ in (34), then for $\operatorname{Re}(a)>(p-1)/2$ and $\operatorname{Re}(b)>(p-1)/2$ we have $B_p(a,b;0)=B_p(a,b)$. Further, replacing $X$ by $I_p-X$ in (34), one can show that $B_p(a,b;\Sigma)=B_p(b,a;\Sigma)$.
Now, applying the transformation $X=(I_p+Y)^{-1}Y$ in (34) with the Jacobian $\det(I_p+Y)^{-(p+1)}$, we arrive at
$$B_p(a,b;\Sigma)=\operatorname{etr}(-2\Sigma)\int_{Y>0}\det(Y)^{a-(p+1)/2}\det(I_p+Y)^{-(a+b)}\operatorname{etr}\left[-\Sigma\left(Y+Y^{-1}\right)\right]dY. \tag{35}$$
If we take [figure omitted; refer to PDF] in (35) and compare the resulting expression with (21), we obtain an interesting relation between the extended matrix variate beta function and the type 3 Bessel function of Herz as [figure omitted; refer to PDF] Also, from (20) and (35), one can prove the inequality
$$B_p(a,b;\Sigma)\leq B_p(a,b),\quad a>\frac{p-1}{2},\ b>\frac{p-1}{2}. \tag{37}$$
Let [figure omitted; refer to PDF] be a scalar valued function of an [figure omitted; refer to PDF] symmetric positive definite matrix [figure omitted; refer to PDF] such that [figure omitted; refer to PDF] , [figure omitted; refer to PDF] . Then, the [figure omitted; refer to PDF] -transform of [figure omitted; refer to PDF] , denoted by [figure omitted; refer to PDF] , is defined by [figure omitted; refer to PDF] where [figure omitted; refer to PDF] .
The [figure omitted; refer to PDF] -transform of the extended beta function of the matrix argument is given by [figure omitted; refer to PDF] where [figure omitted; refer to PDF] , [figure omitted; refer to PDF] , and [figure omitted; refer to PDF] .
3. Extended Gauss Hypergeometric Function of Matrix Argument
In this section, we define the extended Gauss hypergeometric function of matrix argument (EGHFMA), which is a matrix variate generalization of the extended Gauss hypergeometric function (10) and an extended form of the classical Gauss hypergeometric function of matrix argument defined in (22). We also give several integral representations and properties of this function.
Definition 2.
The extended Gauss hypergeometric function of matrix argument (EGHFMA), denoted by $F(a,b;c;Z;\Sigma)$, is defined for a $p\times p$ symmetric matrix $Z$ as
$$F(a,b;c;Z;\Sigma)=\frac{1}{B_p(b,c-b)}\int_{0<X<I_p}\det(X)^{b-(p+1)/2}\det(I_p-X)^{c-b-(p+1)/2}\det(I_p-XZ)^{-a}\operatorname{etr}\left[-\Sigma\left(X^{-1}+(I_p-X)^{-1}\right)\right]dX, \tag{40}$$
where $\operatorname{Re}(b)>(p-1)/2$, $\operatorname{Re}(c-b)>(p-1)/2$, $Z<I_p$, and $\Sigma>0$.
If we take $\Sigma=0$ in (40), then the EGHFMA reduces to the classical Gauss hypergeometric function of matrix argument (22); that is, $F(a,b;c;Z;0)={}_2F_1(a,b;c;Z)$. Also, if we consider [figure omitted; refer to PDF] in (40) and compare the resulting expression with representation (34), we find that the extended beta function of matrix argument and the EGHFMA are connected by the expression [figure omitted; refer to PDF] Further, substituting [figure omitted; refer to PDF] in (41) and using (36), we obtain [figure omitted; refer to PDF]
Theorem 3.
For [figure omitted; refer to PDF] , [figure omitted; refer to PDF] , and [figure omitted; refer to PDF] , we have [figure omitted; refer to PDF] . That is, [figure omitted; refer to PDF] , [figure omitted; refer to PDF] , is a function of the eigenvalues of the matrix [figure omitted; refer to PDF] . Further, for [figure omitted; refer to PDF] , [figure omitted; refer to PDF] which indicates that [figure omitted; refer to PDF] , [figure omitted; refer to PDF] , is a function of the eigenvalues of the matrix [figure omitted; refer to PDF] .
Proof.
Substituting [figure omitted; refer to PDF] with [figure omitted; refer to PDF] and replacing [figure omitted; refer to PDF] by [figure omitted; refer to PDF] , [figure omitted; refer to PDF] , in (40), we arrive at [figure omitted; refer to PDF] where the last line has been obtained by substituting [figure omitted; refer to PDF] with the Jacobian [figure omitted; refer to PDF] and using (40). This means that [figure omitted; refer to PDF] , [figure omitted; refer to PDF] , is a function of the eigenvalues of the matrix [figure omitted; refer to PDF] . Similarly, if in (40) we take [figure omitted; refer to PDF] with [figure omitted; refer to PDF] and [figure omitted; refer to PDF] is replaced by [figure omitted; refer to PDF] , [figure omitted; refer to PDF] , we obtain [figure omitted; refer to PDF] which shows that [figure omitted; refer to PDF] , [figure omitted; refer to PDF] , is a function of the eigenvalues of the matrix [figure omitted; refer to PDF] .
The following theorem gives an extended form of the integral representation given in (24).
Theorem 4.
Let [figure omitted; refer to PDF] be an [figure omitted; refer to PDF] symmetric matrix such that [figure omitted; refer to PDF] , [figure omitted; refer to PDF] , [figure omitted; refer to PDF] , and [figure omitted; refer to PDF] . Then, [figure omitted; refer to PDF]
Proof.
In the integral representation of EGHFMA given in (40), substituting [figure omitted; refer to PDF] with the Jacobian [figure omitted; refer to PDF] , we obtain the desired result.
Theorem 5.
If [figure omitted; refer to PDF] is an [figure omitted; refer to PDF] symmetric matrix such that [figure omitted; refer to PDF] , [figure omitted; refer to PDF] , [figure omitted; refer to PDF] , and [figure omitted; refer to PDF] , then [figure omitted; refer to PDF]
Proof.
From the Trace Inequality given in Abadir and Magnus [24, p. 338], it follows that [figure omitted; refer to PDF] which implies that [figure omitted; refer to PDF] and [figure omitted; refer to PDF] Now, using the above inequality in the integral given in (45), we get [figure omitted; refer to PDF] where the last line has been obtained by using (24). If [figure omitted; refer to PDF] is an [figure omitted; refer to PDF] positive definite matrix, then it has been shown in Abadir and Magnus [24, p. 333] that [figure omitted; refer to PDF] . This inequality, for [figure omitted; refer to PDF] , yields [figure omitted; refer to PDF] which gives the second part of the inequality.
If we take [figure omitted; refer to PDF] in (46) and then use (41) and (27) in the resulting expression, we obtain [figure omitted; refer to PDF] where [figure omitted; refer to PDF] .
The following theorem gives the [figure omitted; refer to PDF]-transform of the extended matrix variate Gauss hypergeometric function [figure omitted; refer to PDF].
Theorem 6.
If [figure omitted; refer to PDF] is an [figure omitted; refer to PDF] symmetric matrix such that [figure omitted; refer to PDF] , [figure omitted; refer to PDF] , [figure omitted; refer to PDF] , and [figure omitted; refer to PDF] , then [figure omitted; refer to PDF]
Proof.
Replacing [figure omitted; refer to PDF] by its integral representation given in (40) and changing the order of integration, we get [figure omitted; refer to PDF] Now, using (18), we arrive at [figure omitted; refer to PDF] Finally, the last integral is replaced by the Gauss hypergeometric function of matrix argument by using representation (22).
Substitution of [figure omitted; refer to PDF] in (51) gives the following interesting relationship between EGHFMA and classical Gauss hypergeometric function of matrix argument: [figure omitted; refer to PDF] Also, if we take [figure omitted; refer to PDF] in (51) and then use (41) and (27) in the resulting expression, we obtain the [figure omitted; refer to PDF] -transform of the extended beta function of matrix argument as [figure omitted; refer to PDF] where [figure omitted; refer to PDF] with [figure omitted; refer to PDF] .
The transformation formula for the extended Gauss hypergeometric function of matrix argument is given next.
Theorem 7.
If [figure omitted; refer to PDF] is an [figure omitted; refer to PDF] symmetric matrix such that [figure omitted; refer to PDF] , [figure omitted; refer to PDF] , [figure omitted; refer to PDF] , and [figure omitted; refer to PDF] , then [figure omitted; refer to PDF]
Proof.
Making the transformation [figure omitted; refer to PDF] in the integral representation given in (40), one obtains [figure omitted; refer to PDF] Now, writing [figure omitted; refer to PDF] in (57) and noting that [figure omitted; refer to PDF] , we have [figure omitted; refer to PDF] Finally, evaluating the above integral by using (40), we get the desired result.
It is noteworthy that [figure omitted; refer to PDF] in (56) gives the well-known transformation formula [figure omitted; refer to PDF]
4. Extended Confluent Hypergeometric Function of Matrix Argument
In this section, we define and study the extended confluent hypergeometric function of matrix argument (ECHFMA), which is a generalization to the matrix case of the extended confluent hypergeometric function [figure omitted; refer to PDF] .
Definition 8.
The extended confluent hypergeometric function of a $p\times p$ symmetric matrix argument (ECHFMA), denoted by $\Phi(b;c;Z;\Sigma)$, is defined as
$$\Phi(b;c;Z;\Sigma)=\frac{1}{B_p(b,c-b)}\int_{0<X<I_p}\det(X)^{b-(p+1)/2}\det(I_p-X)^{c-b-(p+1)/2}\operatorname{etr}(XZ)\operatorname{etr}\left[-\Sigma\left(X^{-1}+(I_p-X)^{-1}\right)\right]dX, \tag{61}$$
where $\operatorname{Re}(b)>(p-1)/2$, $\operatorname{Re}(c-b)>(p-1)/2$, and $\Sigma>0$.
If we take $\Sigma=0$ in (61), then the ECHFMA becomes the confluent hypergeometric function of matrix argument; that is, $\Phi(b;c;Z;0)={}_1F_1(b;c;Z)$. Also, if we put $Z=0$ in (61) and compare the resulting expression with (34), we arrive at the conclusion that the ECHFMA and the extended beta function of matrix argument retain the relationship
$$\Phi(b;c;0;\Sigma)=\frac{B_p(b,c-b;\Sigma)}{B_p(b,c-b)}. \tag{62}$$
Theorem 9.
If [figure omitted; refer to PDF] and [figure omitted; refer to PDF] , then [figure omitted; refer to PDF] , [figure omitted; refer to PDF] . That is, [figure omitted; refer to PDF] is a function of the eigenvalues of the matrix [figure omitted; refer to PDF] . Further, for [figure omitted; refer to PDF] , [figure omitted; refer to PDF] which indicates that [figure omitted; refer to PDF] , [figure omitted; refer to PDF] , is a function of the eigenvalues of the matrix [figure omitted; refer to PDF] .
Proof.
The proof is similar to the proof of Theorem 3.
Theorem 10.
Let [figure omitted; refer to PDF] and [figure omitted; refer to PDF] be [figure omitted; refer to PDF] symmetric matrices with [figure omitted; refer to PDF] . If [figure omitted; refer to PDF] and [figure omitted; refer to PDF] , then [figure omitted; refer to PDF]
Proof.
In the integral representation of the ECHFMA given in (61), consider the substitution [figure omitted; refer to PDF] with the Jacobian [figure omitted; refer to PDF] .
Corollary 11.
Let [figure omitted; refer to PDF] and [figure omitted; refer to PDF] be [figure omitted; refer to PDF] symmetric matrices with [figure omitted; refer to PDF] . If [figure omitted; refer to PDF] and [figure omitted; refer to PDF] , then [figure omitted; refer to PDF]
Proof.
The desired result is obtained by evaluating the integral in (63) by using (61).
For $\Sigma=0$, expression (64) reduces to the well-known Kummer's relation for the classical confluent hypergeometric function of matrix argument. Moreover, the corollary above generalizes to the matrix case the Kummer-type relation for the extended confluent hypergeometric function of scalar argument.
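In the scalar case the Kummer-type relation reads $\Phi(b;c;z;\sigma)=e^{z}\,\Phi(c-b;c;-z;\sigma)$; it follows from the substitution $t\to 1-t$ in the extended integral, since $\sigma/[t(1-t)]$ is invariant under that substitution. A numerical sketch (SciPy assumed available; function name ours):

```python
# Scalar extended confluent hypergeometric function and its Kummer-type relation.
import math
from scipy.integrate import quad
from scipy.special import beta

def extended_1f1(b, c, z, sigma):
    # (1 / B(b, c-b)) * integral_0^1 t^{b-1} (1-t)^{c-b-1}
    #                   * exp(z t - sigma / (t (1-t))) dt
    integrand = lambda t: (t**(b - 1) * (1 - t)**(c - b - 1)
                           * math.exp(z * t - sigma / (t * (1 - t))))
    value, _ = quad(integrand, 0.0, 1.0)
    return value / beta(b, c - b)

b, c, z, sigma = 1.4, 3.6, 0.8, 0.3
lhs = extended_1f1(b, c, z, sigma)
rhs = math.exp(z) * extended_1f1(c - b, c, -z, sigma)
assert abs(lhs - rhs) < 1e-8
```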
Theorem 12.
If [figure omitted; refer to PDF] and [figure omitted; refer to PDF] , then [figure omitted; refer to PDF] where [figure omitted; refer to PDF] and [figure omitted; refer to PDF] are [figure omitted; refer to PDF] symmetric matrices with [figure omitted; refer to PDF] .
Proof.
In the integral of the ECHFMA given in (61), consider the transformation [figure omitted; refer to PDF] , whose Jacobian is [figure omitted; refer to PDF] .
If we take [figure omitted; refer to PDF] in (65), we arrive at representation (25) of the classical confluent hypergeometric function of matrix argument.
Theorem 13.
Let [figure omitted; refer to PDF] , [figure omitted; refer to PDF] , [figure omitted; refer to PDF] , and [figure omitted; refer to PDF] be [figure omitted; refer to PDF] symmetric matrices with [figure omitted; refer to PDF] and [figure omitted; refer to PDF] . If [figure omitted; refer to PDF] and [figure omitted; refer to PDF] , then [figure omitted; refer to PDF] where [figure omitted; refer to PDF] .
Proof.
Transforming [figure omitted; refer to PDF] with the Jacobian [figure omitted; refer to PDF] in representation (61), we obtain the result.
If we consider [figure omitted; refer to PDF] and [figure omitted; refer to PDF] in the above theorem, then we have [figure omitted; refer to PDF]
Theorem 14.
Let [figure omitted; refer to PDF] and [figure omitted; refer to PDF] be [figure omitted; refer to PDF] symmetric matrices, [figure omitted; refer to PDF] . If [figure omitted; refer to PDF] and [figure omitted; refer to PDF] , then [figure omitted; refer to PDF]
Proof.
The proof is similar to the proof of Theorem 5.
The [figure omitted; refer to PDF] -transform of the extended matrix variate confluent hypergeometric function is given next.
Theorem 15.
If [figure omitted; refer to PDF] is an [figure omitted; refer to PDF] symmetric matrix, [figure omitted; refer to PDF] , [figure omitted; refer to PDF] , and [figure omitted; refer to PDF] , then [figure omitted; refer to PDF]
Proof.
Replacing [figure omitted; refer to PDF] by its equivalent integral representation given in (61) and changing the order of integration, the integral in (69) is rewritten as [figure omitted; refer to PDF] where the last line has been obtained by using (18). Finally, evaluating (70) using the definition of the confluent hypergeometric function of matrix argument, we get the desired result.
By putting [figure omitted; refer to PDF] , in (69), we get an interesting relation: [figure omitted; refer to PDF]
5. Extended Whittaker Function of Matrix Argument
This section gives the definition of the extended Whittaker function of matrix argument, which is a generalization of the Whittaker function of matrix argument given in (32). Several properties and integral representations of this function are also derived.
Definition 16.
The extended Whittaker function of matrix argument (EWFMA), denoted by [figure omitted; refer to PDF] , is defined for an [figure omitted; refer to PDF] symmetric matrix [figure omitted; refer to PDF] as [figure omitted; refer to PDF] where [figure omitted; refer to PDF] and [figure omitted; refer to PDF] .
If we consider [figure omitted; refer to PDF] in (72), then the extended Whittaker function of matrix argument reduces to the classical Whittaker function of matrix argument given in (32); that is, [figure omitted; refer to PDF] . Several properties of the extended Whittaker function of matrix argument are inherited from the ECHFMA, so, as a consequence of Theorem 9, we have [figure omitted; refer to PDF] which indicates that the function [figure omitted; refer to PDF] with [figure omitted; refer to PDF] depends on the matrix [figure omitted; refer to PDF] only through its eigenvalues. Similarly, [figure omitted; refer to PDF] The above equation means that [figure omitted; refer to PDF] with [figure omitted; refer to PDF] depends on the matrix [figure omitted; refer to PDF] only through its eigenvalues.
An integral representation for the extended Whittaker function of matrix argument [figure omitted; refer to PDF] is obtained by replacing in (72) the integral representation of ECHFMA given in (61). In fact, [figure omitted; refer to PDF] Likewise, substitution of (67) in (72) yields the representation [figure omitted; refer to PDF] Clearly, when we take [figure omitted; refer to PDF] in the above expression, we obtain the integral representation (33) of the classical Whittaker function of matrix argument.
Theorem 17.
For [figure omitted; refer to PDF] symmetric matrix [figure omitted; refer to PDF] , [figure omitted; refer to PDF] where [figure omitted; refer to PDF] and [figure omitted; refer to PDF] .
Proof.
Using transformation (64) in (72), we have [figure omitted; refer to PDF] Substituting (72) in the previous expression gives the result.
Theorem 18.
If [figure omitted; refer to PDF] and [figure omitted; refer to PDF] , then [figure omitted; refer to PDF]
Proof.
The result follows by using inequality (68) in (72).
The following theorem gives the [figure omitted; refer to PDF] -transform of the extended Whittaker function of matrix argument.
Theorem 19.
If [figure omitted; refer to PDF] is an [figure omitted; refer to PDF] symmetric matrix, [figure omitted; refer to PDF] , and [figure omitted; refer to PDF] , then [figure omitted; refer to PDF]
Proof.
Writing [figure omitted; refer to PDF] in terms of [figure omitted; refer to PDF] using (72), one obtains [figure omitted; refer to PDF] Now, evaluating the above integral by using (69) and then expressing the result in terms of the Whittaker function of matrix argument, we get the final result.
Substitution of [figure omitted; refer to PDF] in the above theorem yields an interesting relationship between [figure omitted; refer to PDF] and [figure omitted; refer to PDF] as [figure omitted; refer to PDF]
6. Relationship between EGHFMA, ECHFMA, and EWFMA
In this section, we derive some results relating the EGHFMA, the ECHFMA, and the EWFMA.
Theorem 20.
Let [figure omitted; refer to PDF] , [figure omitted; refer to PDF] , and [figure omitted; refer to PDF] be [figure omitted; refer to PDF] symmetric matrices such that [figure omitted; refer to PDF] , [figure omitted; refer to PDF] , [figure omitted; refer to PDF] , and [figure omitted; refer to PDF] . If [figure omitted; refer to PDF] , [figure omitted; refer to PDF] , and [figure omitted; refer to PDF] , then [figure omitted; refer to PDF]
Proof.
Using the integral representation (61) and changing the order of integration, we have [figure omitted; refer to PDF] Now, by virtue of (18), we have [figure omitted; refer to PDF] Finally, we use (40) to achieve the final result.
Corollary 21.
Let [figure omitted; refer to PDF] and [figure omitted; refer to PDF] be [figure omitted; refer to PDF] symmetric matrices such that [figure omitted; refer to PDF] and [figure omitted; refer to PDF] . If [figure omitted; refer to PDF] , [figure omitted; refer to PDF] , and [figure omitted; refer to PDF] , then [figure omitted; refer to PDF]
Proof.
Application of transformation (64) yields [figure omitted; refer to PDF] Evaluating the above integral by applying (83) and then using (41), we get the result.
Corollary 22.
Let [figure omitted; refer to PDF] and [figure omitted; refer to PDF] be [figure omitted; refer to PDF] symmetric matrices such that [figure omitted; refer to PDF] and [figure omitted; refer to PDF] . If [figure omitted; refer to PDF] , [figure omitted; refer to PDF] , and [figure omitted; refer to PDF] , then [figure omitted; refer to PDF]
Proof.
Just take [figure omitted; refer to PDF] in (86) and then use (36).
Theorem 23.
Let [figure omitted; refer to PDF] and [figure omitted; refer to PDF] be [figure omitted; refer to PDF] symmetric matrices such that [figure omitted; refer to PDF] and [figure omitted; refer to PDF] . If [figure omitted; refer to PDF] and [figure omitted; refer to PDF] , then [figure omitted; refer to PDF]
Proof.
Writing [figure omitted; refer to PDF] in terms of its integral representation (40), taking [figure omitted; refer to PDF], and applying the result [figure omitted; refer to PDF] we obtain the desired result.
Theorem 24.
Let [figure omitted; refer to PDF] and [figure omitted; refer to PDF] be [figure omitted; refer to PDF] symmetric matrices such that [figure omitted; refer to PDF] and [figure omitted; refer to PDF] . If [figure omitted; refer to PDF] , [figure omitted; refer to PDF] , [figure omitted; refer to PDF] , and [figure omitted; refer to PDF] , then [figure omitted; refer to PDF]
Proof.
Using (40) and changing the order of integration, we obtain [figure omitted; refer to PDF] Now, evaluating the integral involving [figure omitted; refer to PDF] using (22) and applying (28), we have [figure omitted; refer to PDF] Finally, using representation (40), we arrive at the desired result.
Corollary 25.
For [figure omitted; refer to PDF] , [figure omitted; refer to PDF] , [figure omitted; refer to PDF] , [figure omitted; refer to PDF] , and [figure omitted; refer to PDF] , we have [figure omitted; refer to PDF]
Proof.
Just take [figure omitted; refer to PDF] in (91) and then use (41).
Corollary 26.
For [figure omitted; refer to PDF] , [figure omitted; refer to PDF] , [figure omitted; refer to PDF] , [figure omitted; refer to PDF] , and [figure omitted; refer to PDF] , we have [figure omitted; refer to PDF]
Proof.
Just take [figure omitted; refer to PDF] in (94) and then use (36).
Corollary 27.
For [figure omitted; refer to PDF] , [figure omitted; refer to PDF] , [figure omitted; refer to PDF] , [figure omitted; refer to PDF] , and [figure omitted; refer to PDF] , we have [figure omitted; refer to PDF]
Proof.
The proof follows from Corollary 25.
Theorem 28.
Let [figure omitted; refer to PDF] and [figure omitted; refer to PDF] be [figure omitted; refer to PDF] symmetric matrices such that [figure omitted; refer to PDF] and [figure omitted; refer to PDF] . If [figure omitted; refer to PDF] , [figure omitted; refer to PDF] , [figure omitted; refer to PDF] , and [figure omitted; refer to PDF] , then [figure omitted; refer to PDF]
Proof.
Using representation (40) and changing the order of integration, we get [figure omitted; refer to PDF] Now, using (20) to integrate with respect to [figure omitted; refer to PDF] , we get [figure omitted; refer to PDF] Finally, we use (34) to obtain the result.
Corollary 29.
Let [figure omitted; refer to PDF] and [figure omitted; refer to PDF] be [figure omitted; refer to PDF] symmetric matrices such that [figure omitted; refer to PDF] and [figure omitted; refer to PDF] . If [figure omitted; refer to PDF] , [figure omitted; refer to PDF] , [figure omitted; refer to PDF] , and [figure omitted; refer to PDF] , then [figure omitted; refer to PDF]
Proof.
Just take [figure omitted; refer to PDF] in (97) and use relation (36).
Theorem 30.
Let [figure omitted; refer to PDF] , [figure omitted; refer to PDF] , and [figure omitted; refer to PDF] be [figure omitted; refer to PDF] symmetric matrices such that [figure omitted; refer to PDF] , [figure omitted; refer to PDF] , [figure omitted; refer to PDF] , and [figure omitted; refer to PDF] . If [figure omitted; refer to PDF] and [figure omitted; refer to PDF] , then [figure omitted; refer to PDF]
Proof.
Using the definition of the extended Whittaker function of matrix argument given in (72), we have [figure omitted; refer to PDF] Now, we use (83) in the above integral to obtain the desired result.
Corollary 31.
Let [figure omitted; refer to PDF] and [figure omitted; refer to PDF] be [figure omitted; refer to PDF] symmetric matrices such that [figure omitted; refer to PDF] and [figure omitted; refer to PDF] . If [figure omitted; refer to PDF] and [figure omitted; refer to PDF] , then [figure omitted; refer to PDF]
Proof.
Take [figure omitted; refer to PDF] in (101) and then use (41) in the resulting expression.
Corollary 32.
Let [figure omitted; refer to PDF] and [figure omitted; refer to PDF] be [figure omitted; refer to PDF] symmetric matrices such that [figure omitted; refer to PDF] and [figure omitted; refer to PDF] . If [figure omitted; refer to PDF] and [figure omitted; refer to PDF] , then [figure omitted; refer to PDF]
Proof.
Take [figure omitted; refer to PDF] in (103) and then use (36) to get the result.
7. Extended Matrix Variate Gauss Hypergeometric Function Distribution
This section defines the extended matrix variate Gauss hypergeometric function distribution, which generalizes the matrix variate Gauss hypergeometric function distribution. We show that this distribution arises naturally as the distribution of the matrix quotient [figure omitted; refer to PDF], where the [figure omitted; refer to PDF] random matrices [figure omitted; refer to PDF] and [figure omitted; refer to PDF] are independent, [figure omitted; refer to PDF] has a matrix variate beta type 2 distribution, and [figure omitted; refer to PDF] follows an extended matrix variate beta type 1 distribution.
Definition 33.
An [figure omitted; refer to PDF] positive definite random matrix [figure omitted; refer to PDF] is said to have an extended matrix variate Gauss hypergeometric function distribution with parameters [figure omitted; refer to PDF] , denoted by [figure omitted; refer to PDF] , if its pdf is given by [figure omitted; refer to PDF] where [figure omitted; refer to PDF] , [figure omitted; refer to PDF] , [figure omitted; refer to PDF] , [figure omitted; refer to PDF] , and [figure omitted; refer to PDF] .
Note that the matrix variate Gauss hypergeometric function distribution (see Gupta and Nagar [13]) is obtained from (105) by setting [figure omitted; refer to PDF] and imposing the additional condition [figure omitted; refer to PDF].
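The densities in this and the next section are built from the extended beta function defined in (7). In the scalar case the defining integral can be evaluated numerically; the sketch below (the parameter values x, y, and σ are illustrative, not the paper's omitted symbols) checks that σ = 0 recovers the classical beta function and that the exponential factor can only damp the integrand.

```python
import numpy as np
from scipy import integrate, special

def extended_beta(x, y, sigma):
    """Scalar extended beta function of Chaudhry et al. [3]: the classical
    beta integrand t^(x-1) (1-t)^(y-1) weighted by exp(-sigma / (t(1-t))).
    For sigma = 0 this reduces to the classical beta function B(x, y)."""
    integrand = lambda t: t**(x - 1) * (1 - t)**(y - 1) * np.exp(-sigma / (t * (1 - t)))
    value, _ = integrate.quad(integrand, 0.0, 1.0)
    return value

# sigma = 0 recovers the classical beta function
assert abs(extended_beta(2.5, 3.5, 0.0) - special.beta(2.5, 3.5)) < 1e-8
# for sigma > 0 the damping factor can only shrink the integral
assert extended_beta(2.5, 3.5, 0.4) < special.beta(2.5, 3.5)
```

Replacing t by 1 − t in the integral gives the symmetry B(x, y; σ) = B(y, x; σ) noted in the Introduction, which the numerical values reproduce.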
Theorem 36 derives the extended matrix variate Gauss hypergeometric function distribution as the distribution of the matrix quotient of two independent random matrices having matrix variate beta type 2 and extended matrix variate beta type 1 distributions. First, we define the extended matrix variate beta type 1, matrix variate beta type 1, and matrix variate beta type 2 distributions; these definitions can be found in Gupta and Nagar [13], Nagar et al. [18], and Nagar and Roldan-Correa [25].
Definition 34.
An [figure omitted; refer to PDF] random matrix [figure omitted; refer to PDF] is said to have an extended matrix variate beta type 1 distribution with parameters [figure omitted; refer to PDF] , [figure omitted; refer to PDF] , [figure omitted; refer to PDF] , [figure omitted; refer to PDF] , and [figure omitted; refer to PDF] , denoted by [figure omitted; refer to PDF] , if its pdf is given by [figure omitted; refer to PDF]
Note that, for [figure omitted; refer to PDF], we take [figure omitted; refer to PDF] and [figure omitted; refer to PDF], and the extended matrix variate beta type 1 distribution defined by the above density reduces to a matrix variate beta type 1 distribution with the pdf [figure omitted; refer to PDF] We will denote this distribution by [figure omitted; refer to PDF].
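For numerical experiments, a sample from the (non-extended) matrix variate beta type 1 distribution above can be generated from two independent Wishart matrices via the well-known construction U = (A + B)^{-1/2} A (A + B)^{-1/2} (Gupta and Nagar [13]). A minimal sketch; the dimension n and the shapes a and b are illustrative stand-ins for the paper's omitted symbols.

```python
import numpy as np
from scipy.stats import wishart

rng = np.random.default_rng(0)
n, a, b = 3, 4.0, 5.0  # illustrative dimension and shape parameters

# A ~ W_n(2a, I) and B ~ W_n(2b, I), independent
A = wishart.rvs(df=2 * a, scale=np.eye(n), random_state=rng)
B = wishart.rvs(df=2 * b, scale=np.eye(n), random_state=rng)

# U = (A + B)^{-1/2} A (A + B)^{-1/2}, via the symmetric square root
w, V = np.linalg.eigh(A + B)
S = V @ np.diag(w ** -0.5) @ V.T
U = S @ A @ S
U = (U + U.T) / 2  # symmetrize against round-off

# U is a beta type 1 matrix: 0 < U < I
eig = np.linalg.eigvalsh(U)
assert np.all(eig > 0) and np.all(eig < 1)
```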
Definition 35.
An [figure omitted; refer to PDF] random matrix [figure omitted; refer to PDF] is said to have a matrix variate beta type 2 distribution with parameters [figure omitted; refer to PDF] and [figure omitted; refer to PDF], denoted by [figure omitted; refer to PDF], if its pdf is given by [figure omitted; refer to PDF]
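Similarly, a matrix variate beta type 2 sample can be obtained from two independent Wisharts via V = B^{-1/2} A B^{-1/2} (Gupta and Nagar [13]); its eigenvalues range over all of (0, ∞) rather than (0, 1). Again, n, a, and b below are illustrative.

```python
import numpy as np
from scipy.stats import wishart

rng = np.random.default_rng(1)
n, a, b = 3, 4.0, 5.0  # illustrative dimension and shape parameters

# A ~ W_n(2a, I) and B ~ W_n(2b, I), independent
A = wishart.rvs(df=2 * a, scale=np.eye(n), random_state=rng)
B = wishart.rvs(df=2 * b, scale=np.eye(n), random_state=rng)

# V = B^{-1/2} A B^{-1/2}, via the symmetric square root of B
w, Q = np.linalg.eigh(B)
B_inv_half = Q @ np.diag(w ** -0.5) @ Q.T
V = B_inv_half @ A @ B_inv_half
V = (V + V.T) / 2  # symmetrize against round-off

# V is positive definite but, unlike beta type 1, not bounded above by I
assert np.all(np.linalg.eigvalsh(V) > 0)
```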
Theorem 36.
Let [figure omitted; refer to PDF] and [figure omitted; refer to PDF] be independent random matrices, where [figure omitted; refer to PDF] and [figure omitted; refer to PDF] . Then, [figure omitted; refer to PDF] .
Proof.
As [figure omitted; refer to PDF] and [figure omitted; refer to PDF] are independent, by (106) and (108), the joint density of [figure omitted; refer to PDF] and [figure omitted; refer to PDF] is given by [figure omitted; refer to PDF] where [figure omitted; refer to PDF] and [figure omitted; refer to PDF] . Using the transformation [figure omitted; refer to PDF] , with the Jacobian [figure omitted; refer to PDF] , we obtain the joint density as [figure omitted; refer to PDF] where [figure omitted; refer to PDF] and [figure omitted; refer to PDF] . To find the density of [figure omitted; refer to PDF] , we integrate the above expression with respect to [figure omitted; refer to PDF] to get [figure omitted; refer to PDF] Evaluation of the above expression using (40) yields the desired result.
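The change-of-variables argument in the proof can be sanity-checked by Monte Carlo in the scalar case n = 1 with σ = 0, where the beta type 2 and (extended) beta type 1 laws reduce to the ordinary beta prime and beta distributions. The quotient Z = X/Y and the shape parameters below are illustrative choices rather than the paper's omitted symbols; the check compares the simulated law of Z with the density obtained by the same transform-and-integrate-out step used in the proof.

```python
import numpy as np
from scipy import stats, integrate

rng = np.random.default_rng(2)
a, b, c, d = 3.0, 4.0, 2.5, 3.5  # illustrative shape parameters

# X ~ beta type 2 (beta prime) and Y ~ beta type 1, independent
X = stats.betaprime.rvs(a, b, size=200_000, random_state=rng)
Y = stats.beta.rvs(c, d, size=200_000, random_state=rng)
Z = X / Y  # scalar analogue of the matrix quotient

# Change of variables (X, Y) -> (Z, Y) with Jacobian y, then integrate out Y:
#   f_Z(z) = int_0^1 f_X(z y) f_Y(y) y dy
f_Z = lambda z: integrate.quad(
    lambda y: stats.betaprime.pdf(z * y, a, b) * stats.beta.pdf(y, c, d) * y,
    0.0, 1.0)[0]

p_num, _ = integrate.quad(f_Z, 0.0, 2.0)  # P(Z <= 2) from the derived density
p_mc = np.mean(Z <= 2.0)                  # the same probability by simulation
assert abs(p_num - p_mc) < 0.01
```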
8. Extended Matrix Variate Confluent Hypergeometric Function Distribution
This section defines the extended matrix variate confluent hypergeometric function distribution, which generalizes the matrix variate confluent hypergeometric function type 1 distribution. We study several properties of this new distribution and its relationships with other known matrix variate distributions. We also show that it arises naturally as the distribution of the matrix quotient [figure omitted; refer to PDF], where the [figure omitted; refer to PDF] random matrices [figure omitted; refer to PDF] and [figure omitted; refer to PDF] are independent, [figure omitted; refer to PDF] has a matrix variate gamma distribution, and [figure omitted; refer to PDF] follows an extended matrix variate beta type 1 distribution.
Definition 37.
An [figure omitted; refer to PDF] positive definite random matrix [figure omitted; refer to PDF] is said to have an extended matrix variate confluent hypergeometric function distribution with parameters [figure omitted; refer to PDF] , denoted by [figure omitted; refer to PDF] , if its pdf is given by [figure omitted; refer to PDF] where [figure omitted; refer to PDF] , [figure omitted; refer to PDF] , [figure omitted; refer to PDF] , and [figure omitted; refer to PDF] .
For [figure omitted; refer to PDF] (with the additional condition [figure omitted; refer to PDF]), (112) reduces to the matrix variate confluent hypergeometric function density (see Gupta and Nagar [13]).
The extended matrix variate confluent hypergeometric function distribution can be derived as the distribution of the matrix quotient of independent gamma and extended beta matrices, as shown in the following theorem. First, we define the matrix variate gamma distribution; see Gupta and Nagar [13] and Iranmanesh et al. [26].
Definition 38.
An [figure omitted; refer to PDF] random matrix [figure omitted; refer to PDF] is said to have a matrix variate gamma distribution with parameters [figure omitted; refer to PDF] and [figure omitted; refer to PDF] , denoted by [figure omitted; refer to PDF] , if its pdf is given by [figure omitted; refer to PDF]
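The paper's parameter symbols are omitted above, but under one common parameterization of the matrix variate gamma density, proportional to |X|^(α−(n+1)/2) etr(−Σ^{−1}X) (cf. Gupta and Nagar [13]), the law coincides with a Wishart distribution, which gives a direct sampler. A sketch with illustrative α and Σ:

```python
import numpy as np
from scipy.stats import wishart

rng = np.random.default_rng(3)
n, alpha = 3, 4.0  # illustrative dimension and shape
Sigma = np.array([[2.0, 0.5, 0.0],
                  [0.5, 1.0, 0.3],
                  [0.0, 0.3, 1.5]])  # illustrative positive definite scale

# With density proportional to |X|^(alpha-(n+1)/2) etr(-Sigma^{-1} X), the
# matrix variate gamma law is the Wishart distribution W_n(2*alpha, Sigma/2):
# matching the exponents gives df = 2*alpha and scale = Sigma/2.
X = wishart.rvs(df=2 * alpha, scale=Sigma / 2, random_state=rng)
assert np.allclose(X, X.T) and np.all(np.linalg.eigvalsh(X) > 0)

# Monte Carlo check of the mean: E[X] = df * scale = alpha * Sigma
Xs = wishart.rvs(df=2 * alpha, scale=Sigma / 2, size=50_000, random_state=rng)
assert np.allclose(Xs.mean(axis=0), alpha * Sigma, atol=0.1)
```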
Theorem 39.
If [figure omitted; refer to PDF] and [figure omitted; refer to PDF] are independent, then [figure omitted; refer to PDF] .
Proof.
As [figure omitted; refer to PDF] and [figure omitted; refer to PDF] are independent, from (106) and (113), the joint density of [figure omitted; refer to PDF] and [figure omitted; refer to PDF] is given by [figure omitted; refer to PDF] where [figure omitted; refer to PDF] and [figure omitted; refer to PDF] . Making the transformation [figure omitted; refer to PDF] , with the Jacobian [figure omitted; refer to PDF] , we find the joint density of [figure omitted; refer to PDF] and [figure omitted; refer to PDF] as [figure omitted; refer to PDF] where [figure omitted; refer to PDF] and [figure omitted; refer to PDF] . Now, the density of [figure omitted; refer to PDF] is obtained by integrating the above expression with respect to [figure omitted; refer to PDF] by using the integral representation (61).
Theorem 40.
Let the random matrices [figure omitted; refer to PDF] and [figure omitted; refer to PDF] be independent, [figure omitted; refer to PDF], and [figure omitted; refer to PDF]. Then, [figure omitted; refer to PDF] has the density [figure omitted; refer to PDF]
Proof.
The joint density of [figure omitted; refer to PDF] and [figure omitted; refer to PDF] is given in (109). Using the transformation [figure omitted; refer to PDF] , with the Jacobian [figure omitted; refer to PDF] , we obtain the joint density of [figure omitted; refer to PDF] and [figure omitted; refer to PDF] as [figure omitted; refer to PDF] where [figure omitted; refer to PDF] and [figure omitted; refer to PDF] . Now, integration of [figure omitted; refer to PDF] in the above expression by using (40) yields the desired result.
Corollary 41.
Let the random matrices [figure omitted; refer to PDF] and [figure omitted; refer to PDF] be independent, [figure omitted; refer to PDF], and [figure omitted; refer to PDF]. Then, [figure omitted; refer to PDF] has the density [figure omitted; refer to PDF]
Acknowledgment
This research was supported by the Sistema Universitario de Investigación, Universidad de Antioquia, through Project no. IN10164CE.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
[1] Y. L. Luke The Special Functions and Their Approximations , vol. 1, Academic Press, New York, NY, USA, 1969.
[2] E. T. Whittaker, G. N. Watson A Course of Modern Analysis , Cambridge University Press, New York, NY, USA, 1996.
[3] M. A. Chaudhry, A. Qadir, M. Rafique, S. M. Zubair, "Extension of Euler's beta function," Journal of Computational and Applied Mathematics , vol. 78, no. 1, pp. 19-32, 1997.
[4] A. R. Miller, "Remarks on a generalized beta function," Journal of Computational and Applied Mathematics , vol. 100, no. 1, pp. 23-32, 1998.
[5] M. A. Chaudhry, A. Qadir, H. M. Srivastava, R. B. Paris, "Extended hypergeometric and confluent hypergeometric functions," Applied Mathematics and Computation , vol. 159, no. 2, pp. 589-602, 2004.
[6] R. W. Butler, A. T. Wood, "Laplace approximations for hypergeometric functions with matrix argument," Annals of Statistics , vol. 30, no. 4, pp. 1155-1177, 2002.
[7] R. W. Butler, A. T. Wood, "Laplace approximation for Bessel functions of matrix argument," Journal of Computational and Applied Mathematics , vol. 155, no. 2, pp. 359-382, 2003.
[8] R. W. Butler, A. T. Wood, "Laplace approximations to hypergeometric functions of two matrix arguments," Journal of Multivariate Analysis , vol. 94, no. 1, pp. 1-18, 2005.
[9] C. S. Herz, "Bessel functions of matrix argument," Annals of Mathematics , vol. 61, no. 2, pp. 474-523, 1955.
[10] A. G. Constantine, "Some non-central distribution problems in multivariate analysis," Annals of Mathematical Statistics , vol. 34, pp. 1270-1285, 1963.
[11] A. T. James, "Distributions of matrix variates and latent roots derived from normal samples," Annals of Mathematical Statistics , vol. 35, pp. 475-501, 1964.
[12] R. J. Muirhead Aspects of Multivariate Statistical Theory , Wiley Series in Probability and Mathematical Statistics, John Wiley & Sons, New York, NY, USA, 1982.
[13] A. K. Gupta, D. K. Nagar Matrix Variate Distributions , Chapman & Hall/CRC, Boca Raton, Fla, USA, 2000.
[14] A. Bekker, J. J. Roux, R. Ehlers, M. Arashi, "Bimatrix variate beta type IV distribution: relation to Wilks's statistic and bimatrix variate Kummer-beta type IV distribution," Communications in Statistics-Theory and Methods , vol. 40, no. 23, pp. 4165-4178, 2011.
[15] A. Bekker, J. J. J. Roux, R. Ehlers, M. Arashi, "Distribution of the product of determinants of noncentral bimatrix beta variates," Journal of Multivariate Analysis , vol. 109, pp. 73-87, 2012.
[16] A. Bekker, J. J. J. Roux, M. Arashi, "Wishart ratios with dependent structure: new members of the bimatrix beta type IV," Linear Algebra and Its Applications , vol. 435, no. 12, pp. 3243-3260, 2011.
[17] A. K. Gupta, D. K. Nagar, "Matrix-variate Gauss hypergeometric distribution," Journal of the Australian Mathematical Society , vol. 92, no. 3, pp. 335-355, 2012.
[18] D. K. Nagar, A. Roldan-Correa, A. K. Gupta, "Extended matrix variate gamma and beta functions," Journal of Multivariate Analysis , vol. 122, pp. 53-69, 2013.
[19] A. E. Ingham, "An integral which occurs in statistics," Mathematical Proceedings of the Cambridge Philosophical Society , vol. 29, pp. 271-276, 1933.
[20] C. L. Siegel, "Über die analytische Theorie der quadratischen Formen," Annals of Mathematics, Second Series , vol. 36, no. 3, pp. 527-606, 1935.
[21] H. Hashiguchi, Y. Numata, N. Takayama, A. Takemura, "The holonomic gradient method for the distribution function of the largest root of a Wishart matrix," Journal of Multivariate Analysis , vol. 117, pp. 296-312, 2013.
[22] P. Koev, A. Edelman, "The efficient evaluation of the hypergeometric function of a matrix argument," Mathematics of Computation , vol. 75, no. 254, pp. 833-846, 2006.
[23] W. H. Abdi, "Whittaker's M_{k,μ}-function of a matrix argument," Rendiconti del Circolo Matematico di Palermo, Serie II , vol. 17, no. 3, pp. 333-342, 1968.
[24] K. M. Abadir, J. R. Magnus Matrix Algebra , Cambridge University Press, New York, NY, USA, 2005.
[25] D. K. Nagar, A. Roldan-Correa, "Extended matrix variate beta distributions," Progress in Applied Mathematics , vol. 6, no. 1, pp. 40-53, 2013.
[26] A. Iranmanesh, M. Arashi, D. K. Nagar, S. M. M. Tabatabaey, "On inverted matrix variate gamma distribution," Communications in Statistics. Theory and Methods , vol. 42, no. 1, pp. 28-41, 2013.
Copyright © 2015 Daya K. Nagar et al.
Abstract
Hypergeometric functions of matrix arguments occur frequently in multivariate statistical analysis. In this paper, we define and study extended forms of Gauss and confluent hypergeometric functions of matrix arguments and show that they occur naturally in statistical distribution theory.