Academic Editor: Aera Thavaneswaran
Department of Mathematics and Statistics, Faculty of Science and Technology, Thammasat University, Pathum Thani 12121, Thailand
Received 14 September 2015; Revised 27 October 2015; Accepted 28 October 2015
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
1. Introduction
The reciprocal of a normal mean has been the subject of research in areas such as nuclear physics, agriculture, and economics. For example, Lamanna et al. [1] studied the charged particle momentum, $p \propto 1/\kappa$, where $\kappa$ is the track curvature of a particle. The reciprocal of a normal mean is defined as $\theta = 1/\mu$, where $\mu$ is the population mean. A variety of researchers have studied this parameter. For instance, Zaman [2] discussed estimators without moments for the reciprocal of a normal mean, Zaman [3] studied the admissibility of the maximum likelihood estimate of the reciprocal of a normal mean under a class of zero-one loss functions, and Withers and Nadarajah [4] presented a theorem for constructing point estimators of the inverse powers of a normal mean.
Suppose we have prior information about the coefficient of variation, $\tau = \sigma/\mu$, where $\sigma$ is the population standard deviation. This situation arises in the agricultural, biological, environmental, and physical sciences. For instance, in environmental science, Bhat and Rao [5] explain that in some situations the standard deviation of a pollutant is directly related to the mean, which means that $\tau$ is known. In clinical chemistry, Bhat and Rao [5] also state that "when the batches of some substance (chemicals) are to be analyzed, if sufficient batches of the substances are analyzed, their coefficients of variation will be known." Furthermore, in medical, biological, and chemical studies, Brazauskas and Ghorai [6] provide examples of problems in which the coefficient of variation is known in practice. Many statistical problems involve the mean of a normal distribution with a known coefficient of variation (see, e.g., Searls [7], Khan [8], Arnholt and Hebert [9], Srisodaphol and Tongmol [10], and the references cited therein).
The estimation and testing of a normal mean with a known coefficient of variation are not equivalent to the case of known variance, since the population mean is unknown. Furthermore, let $X_{1},\ldots,X_{n}$ be a random sample of size $n$ from a normal distribution. The estimator of $\theta = 1/\mu$ is $\hat{\theta} = 1/\bar{X}$, where $\bar{X}$ is the sample mean. The distribution of $\hat{\theta}$ is not normal, so we cannot construct a confidence interval for the normal mean and then transform it into a confidence interval for the reciprocal of the normal mean. Similarly, a hypothesis test for a normal mean is not equivalent to a hypothesis test for the reciprocal of a normal mean, because the test is developed from the distribution of the sample mean.
Two confidence intervals for the reciprocal of a normal mean with a known coefficient of variation were proposed by Wongkhao et al. [11]. Their confidence intervals can be applied when the coefficient of variation of a control group is known. One of their confidence intervals was developed from the asymptotic normality of a pivotal statistic that follows the standard normal distribution; the other was constructed from the generalized confidence interval approach [12]. Simulation results showed that the coverage probabilities of the two confidence intervals were not significantly different, but the limits of the asymptotic confidence interval are difficult to compute because they depend on an infinite summation. However, no statistical test for the reciprocal of a normal mean with a known coefficient of variation has yet been studied. This motivated us to propose two such tests: one based on an asymptotic method, and the other developed from a simple approximate expression for the expectation of the estimator of $\theta$. In addition, we compare the empirical type I errors and the empirical powers of the two tests using a Monte Carlo simulation.
The structure of this paper is as follows: Section 2 provides the theorem and corollary used to construct the asymptotic test. An approximate test is proposed in Section 3. The performance of the two proposed statistical tests for $\theta$ is investigated through a Monte Carlo simulation study in Section 4. Section 5 concludes the paper.
2. Asymptotic Test for the Reciprocal of a Normal Mean with a Known Coefficient of Variation
The null hypothesis of interest is $H_{0}\colon \theta = \theta_{0}$. The theorem and corollary of Wongkhao et al. [11] concerning the moments of $\hat{\theta}$ were used to construct the asymptotic test and are reviewed below.
Theorem 1 (Wongkhao et al. [11]).
Let $X_{1},\ldots,X_{n}$ be a random sample of size $n$ from a normal distribution with mean $\mu$ and variance $\sigma^{2}$. The estimator of $\theta = 1/\mu$ is $\hat{\theta} = 1/\bar{X}$, where $\bar{X} = n^{-1}\sum_{i=1}^{n}X_{i}$. When the coefficient of variation $\tau = \sigma/\mu$ is known, the expectation of $\hat{\theta}$ is given by expression (2), which involves an infinite summation [explicit formula omitted in source].
Proof of Theorem 1.
This theorem was proved in Wongkhao et al. [11].
From (2), $E(\hat{\theta})$ equals $\theta$ multiplied by a bias factor that depends only on $\tau$ and $n$ [explicit expressions omitted in source]; dividing $\hat{\theta}$ by this factor therefore yields an unbiased estimator of $\theta$.
Corollary 2.
From Theorem 1, the corresponding expression for the variance of $\hat{\theta}$ follows [explicit formula omitted in source].
Proof of Corollary 2.
This corollary was proved in Wongkhao et al. [11].
From the central limit theorem, the suitably standardized estimator of $\theta$ is asymptotically standard normal [explicit statement omitted in source]. Under $H_{0}\colon \theta = \theta_{0}$, this yields an asymptotically standard normal test statistic. Let $z_{\alpha}$ denote the upper $\alpha$ quantile of the standard normal distribution. On the basis of this asymptotic standard normal distribution, the level-$\alpha$ tests are given in Table 1.
Table 1: Rejection criteria for the asymptotic test, where $Z$ denotes its standardized test statistic.
Alternative hypothesis | Rejection criterion
$H_{1}\colon \theta \neq \theta_{0}$ | $Z < -z_{\alpha/2}$ or $Z > z_{\alpha/2}$
$H_{1}\colon \theta > \theta_{0}$ | $Z > z_{\alpha}$
$H_{1}\colon \theta < \theta_{0}$ | $Z < -z_{\alpha}$
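Since the rejection criteria in Tables 1 and 2 reduce to standard normal-quantile comparisons, they are straightforward to encode. The R helper below is a minimal sketch of these level-$\alpha$ decisions for any statistic $Z$ that is approximately standard normal under $H_{0}$; the function reject_h0 and its argument names are our illustrative choices, not part of the paper.

```r
# Decide whether to reject H0 at level alpha, given a statistic z that is
# (approximately) standard normal under H0.  "alternative" follows the rows
# of Table 1: "two.sided" (theta != theta0), "greater", or "less".
reject_h0 <- function(z, alpha = 0.05,
                      alternative = c("two.sided", "greater", "less")) {
  alternative <- match.arg(alternative)
  switch(alternative,
         two.sided = abs(z) > qnorm(1 - alpha / 2),
         greater   = z > qnorm(1 - alpha),
         less      = z < qnorm(alpha))
}
```

For example, at the default $\alpha = 0.05$, reject_h0(2.1) returns TRUE for the two-sided alternative because $|2.1| > z_{0.025} \approx 1.96$.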
3. Approximate Test for the Reciprocal of a Normal Mean with a Known Coefficient of Variation
In this section, we present an approximate test based on simple approximate expressions for the expectation and variance of $\hat{\theta}$. To obtain these expressions, we use a Taylor series expansion of $1/\bar{X}$ around $\mu$:
$$\frac{1}{\bar{X}} = \frac{1}{\mu} - \frac{\bar{X}-\mu}{\mu^{2}} + \frac{(\bar{X}-\mu)^{2}}{\mu^{3}} - \cdots. \qquad (5)$$
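As a quick numerical illustration (ours, not from the paper), the second-order truncation of (5) tracks $1/\bar{x}$ closely when $\bar{x}$ is near $\mu$, which is the relevant regime when $\tau/\sqrt{n}$ is small:

```r
# Compare 1/x with the second-order Taylor expansion of 1/x around mu = 2.
mu <- 2
x  <- c(1.8, 1.9, 2.0, 2.1, 2.2)
exact  <- 1 / x
approx <- 1 / mu - (x - mu) / mu^2 + (x - mu)^2 / mu^3
round(cbind(x, exact, approx), 4)
```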
Theorem 3.
Let $X_{1},\ldots,X_{n}$ be a random sample of size $n$ from a normal distribution with mean $\mu$ and variance $\sigma^{2}$. The estimator of $\theta = 1/\mu$ is $\hat{\theta} = 1/\bar{X}$, where $\bar{X} = n^{-1}\sum_{i=1}^{n}X_{i}$. When the coefficient of variation $\tau = \sigma/\mu$ is known, the approximate expectation and variance of $\hat{\theta}$ are, respectively,
$$E(\hat{\theta}) \approx \theta\left(1 + \frac{\tau^{2}}{n}\right) \quad\text{and}\quad \operatorname{Var}(\hat{\theta}) \approx \frac{\theta^{2}\tau^{2}}{n}.$$
Proof of Theorem 3.
Consider the random variable $\bar{X}$ (assumed bounded away from zero so that $1/\bar{X}$ is well defined) and let $g(\bar{X}) = 1/\bar{X}$. We find approximations for $E[g(\bar{X})]$ and $\operatorname{Var}[g(\bar{X})]$ using the Taylor series expansion of $g(\bar{X})$ around $\mu$ in (5). The mean of $g(\bar{X})$ is obtained by applying the expectation operator to the individual terms of the expansion (ignoring all terms of order higher than two):
$$E\!\left(\frac{1}{\bar{X}}\right) \approx \frac{1}{\mu} + \frac{\operatorname{Var}(\bar{X})}{\mu^{3}} = \frac{1}{\mu}\left(1 + \frac{\tau^{2}}{n}\right) = \theta\left(1 + \frac{\tau^{2}}{n}\right). \qquad (7)$$
An approximation of the variance of $g(\bar{X})$ is obtained from the first-order term of the Taylor series expansion:
$$\operatorname{Var}\!\left(\frac{1}{\bar{X}}\right) \approx \frac{\operatorname{Var}(\bar{X})}{\mu^{4}} = \frac{\sigma^{2}}{n\mu^{4}} = \frac{\theta^{2}\tau^{2}}{n}. \qquad (8)$$
It is clear from (7) that $\hat{\theta}$ is asymptotically unbiased, since $E(\hat{\theta}) \to \theta$ as $n \to \infty$, and an (approximately) unbiased estimator of $\theta$ is $\tilde{\theta} = \hat{\theta}/(1 + \tau^{2}/n)$. From (8), $\hat{\theta}$ is consistent, since $\operatorname{Var}(\hat{\theta}) \to 0$ as $n \to \infty$. Under $H_{0}\colon \theta = \theta_{0}$, we apply the central limit theorem together with Theorem 3 to obtain an asymptotically standard normal test statistic. Based on this, we can now conduct the level-$\alpha$ tests (see Table 2).
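A natural form for this statistic, under the assumption that the bias-corrected estimator $\tilde{\theta}$ defined above is standardized by the approximate standard error from (8) evaluated at $\theta_{0}$, is
$$Z = \frac{\tilde{\theta} - \theta_{0}}{\sqrt{\theta_{0}^{2}\tau^{2}/n}} = \frac{\sqrt{n}\,(\tilde{\theta} - \theta_{0})}{\theta_{0}\,\tau} \;\overset{d}{\longrightarrow}\; N(0,1) \quad \text{as } n \to \infty;$$
this is a sketch of one plausible standardization rather than the paper's exact expression.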
Table 2: Rejection criteria for the approximate test, where $Z$ denotes its standardized test statistic.
Alternative hypothesis | Rejection criterion
$H_{1}\colon \theta \neq \theta_{0}$ | $Z < -z_{\alpha/2}$ or $Z > z_{\alpha/2}$
$H_{1}\colon \theta > \theta_{0}$ | $Z > z_{\alpha}$
$H_{1}\colon \theta < \theta_{0}$ | $Z < -z_{\alpha}$
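To make the approximate test concrete, the following R sketch computes a statistic of the form given after Theorem 3 and applies the decision rules of Table 2 via the reject_h0 helper from Section 2. The exact standardization, and the function and argument names, are our illustrative choices rather than the paper's.

```r
# Approximate test for H0: theta = theta0, where theta = 1/mu and the
# coefficient of variation tau = sigma/mu is known.
approx_reciprocal_test <- function(x, theta0, tau, alpha = 0.05,
                                   alternative = "two.sided") {
  n         <- length(x)
  theta_hat <- 1 / mean(x)                    # estimator of theta
  theta_til <- theta_hat / (1 + tau^2 / n)    # bias-corrected estimator
  z         <- sqrt(n) * (theta_til - theta0) / (theta0 * tau)
  list(statistic = z,
       reject    = reject_h0(z, alpha, alternative))
}

# Example: a sample of size 30 with mu = 2 (so theta = 0.5) and tau = 0.2.
set.seed(1)
x <- rnorm(30, mean = 2, sd = 0.2 * 2)
approx_reciprocal_test(x, theta0 = 0.5, tau = 0.2)
```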
4. Simulation Results
In this section, we report simulation experiments comparing the behavior of the two statistical tests in a variety of situations. The first study compares the empirical type I errors of the two tests and checks how well they hold the nominal level; the second study compares their corresponding powers. Several parameter settings were considered: a parameter setting taking the values 0.5, 1, and 5; the known coefficient of variation $\tau \in \{0.1, 0.2, 0.5\}$; and a departure parameter taking the values 0.00, 0.03, 0.05, and 0.10, with zero departure corresponding to the null hypothesis (type I error) and positive departures to the alternatives (power). Four sample sizes were used, the largest being $n = 50$. The significance level for testing $H_{0}\colon \theta = \theta_{0}$ was set at $\alpha = 0.05$. We repeated this procedure 20,000 times for each setting using the R statistical software [13] and report the empirical type I errors and powers of the tests in Table 3.
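A minimal R sketch of one Monte Carlo setting is given below (our illustration, not the authors' code). It estimates the empirical type I error of the approximate test by simulating under $H_{0}$; generating the samples under a mean different from $1/\theta_{0}$ would give the empirical power instead. The function simulate_size and its arguments are assumed names for illustration, and it reuses approx_reciprocal_test from Section 3.

```r
# Empirical rejection rate of the approximate test under H0 (type I error).
# mu0 = 1/theta0 is the true mean; tau is the known coefficient of variation;
# reps = 20000 matches the number of repetitions used in the paper.
simulate_size <- function(n, mu0, tau, alpha = 0.05, reps = 20000) {
  theta0  <- 1 / mu0
  rejects <- replicate(reps, {
    x <- rnorm(n, mean = mu0, sd = tau * mu0)
    approx_reciprocal_test(x, theta0, tau, alpha)$reject
  })
  mean(rejects)
}

set.seed(2015)
simulate_size(n = 50, mu0 = 1, tau = 0.1)   # should be close to 0.05
```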
Table 3: The empirical type I errors and powers of the asymptotic test and the approximate test.
(Columns: a parameter setting taking the values 0.5, 1, and 5; the known coefficient of variation $\tau$; the departure from the null hypothesis; and, for each of the four sample sizes, the asymptotic and approximate tests.)
Asympt. | Approx. | Asympt. | Approx. | Asympt. | Approx. | Asympt. | Approx. | |||
0.5 | 0.1 | 0.00 | 0.0499 | 0.0499 | 0.0505 | 0.0505 | 0.0497 | 0.0497 | 0.0503 | 0.0503 |
0.03 | 0.1348 | 0.1348 | 0.2456 | 0.2456 | 0.3402 | 0.3402 | 0.5378 | 0.5378 | ||
0.05 | 0.3057 | 0.3058 | 0.5537 | 0.5538 | 0.7417 | 0.7417 | 0.9282 | 0.9282 | ||
0.10 | 0.8326 | 0.8327 | 0.9891 | 0.9891 | 0.9992 | 0.9992 | 1.0000 | 1.0000 | ||
0.2 | 0.00 | 0.0492 | 0.0492 | 0.0499 | 0.0500 | 0.0486 | 0.0487 | 0.0514 | 0.0514 | |
0.03 | 0.0613 | 0.0614 | 0.0833 | 0.0833 | 0.1106 | 0.1106 | 0.1702 | 0.1702 | ||
0.05 | 0.0925 | 0.0927 | 0.1684 | 0.1684 | 0.2329 | 0.2329 | 0.3751 | 0.3751 | ||
0.10 | 0.2654 | 0.2658 | 0.5151 | 0.5153 | 0.7081 | 0.7081 | 0.9090 | 0.9090 | ||
0.5 | 0.00 | 0.0538 | 0.0531 | 0.0504 | 0.0502 | 0.0525 | 0.0527 | 0.0504 | 0.0504 | |
0.03 | 0.0450 | 0.0449 | 0.0484 | 0.0489 | 0.0490 | 0.0493 | 0.0562 | 0.0562 | ||
0.05 | 0.0418 | 0.0423 | 0.0507 | 0.0509 | 0.0595 | 0.0598 | 0.0821 | 0.0823 | ||
0.10 | 0.0493 | 0.0503 | 0.0855 | 0.0866 | 0.1177 | 0.1182 | 0.2038 | 0.2041 | ||
| ||||||||||
1 | 0.1 | 0.00 | 0.0503 | 0.0503 | 0.0489 | 0.0489 | 0.0501 | 0.0501 | 0.0472 | 0.0472 |
0.03 | 0.1336 | 0.1336 | 0.2418 | 0.2418 | 0.3466 | 0.3466 | 0.5402 | 0.5402 | ||
0.05 | 0.3052 | 0.3052 | 0.5708 | 0.5708 | 0.7502 | 0.7502 | 0.9279 | 0.9279 | ||
0.10 | 0.8319 | 0.8319 | 0.9889 | 0.9889 | 0.9995 | 0.9995 | 1.0000 | 1.0000 | ||
0.2 | 0.00 | 0.0520 | 0.0521 | 0.0505 | 0.0505 | 0.0514 | 0.0514 | 0.0512 | 0.0512 | |
0.03 | 0.0634 | 0.0634 | 0.0872 | 0.0873 | 0.1091 | 0.1091 | 0.1585 | 0.1585 | ||
0.05 | 0.0976 | 0.0978 | 0.1588 | 0.1588 | 0.2332 | 0.2332 | 0.3698 | 0.3698 | ||
0.10 | 0.2521 | 0.2523 | 0.5120 | 0.5121 | 0.7083 | 0.7084 | 0.9067 | 0.9067 | ||
0.5 | 0.00 | 0.0530 | 0.0526 | 0.0520 | 0.0516 | 0.0492 | 0.0491 | 0.0507 | 0.0505 | |
0.03 | 0.0421 | 0.0421 | 0.0454 | 0.0456 | 0.0491 | 0.0493 | 0.0565 | 0.0567 | ||
0.05 | 0.0414 | 0.0422 | 0.0508 | 0.0509 | 0.0609 | 0.0611 | 0.0833 | 0.0835 | ||
0.10 | 0.0488 | 0.0498 | 0.0814 | 0.0819 | 0.1233 | 0.1237 | 0.2084 | 0.2087 | ||
| ||||||||||
5 | 0.1 | 0.00 | 0.0491 | 0.0491 | 0.0474 | 0.0474 | 0.0505 | 0.0505 | 0.0494 | 0.0494 |
0.03 | 0.1398 | 0.1398 | 0.2505 | 0.2505 | 0.3450 | 0.3450 | 0.5327 | 0.5327 | ||
0.05 | 0.3032 | 0.3032 | 0.5600 | 0.5600 | 0.7446 | 0.7446 | 0.9272 | 0.9272 | ||
0.10 | 0.8314 | 0.8315 | 0.9880 | 0.9880 | 0.9994 | 0.9994 | 1.0000 | 1.0000 | ||
0.2 | 0.00 | 0.0472 | 0.0472 | 0.0508 | 0.0508 | 0.0502 | 0.0503 | 0.0519 | 0.0519 | |
0.03 | 0.0620 | 0.0621 | 0.0834 | 0.0834 | 0.1089 | 0.1089 | 0.1601 | 0.1601 | ||
0.05 | 0.0905 | 0.0904 | 0.1655 | 0.1656 | 0.2335 | 0.2336 | 0.3731 | 0.3731 | ||
0.10 | 0.2612 | 0.2613 | 0.5055 | 0.5057 | 0.7035 | 0.7035 | 0.9106 | 0.9106 | ||
0.5 | 0.00 | 0.0512 | 0.0504 | 0.0538 | 0.0535 | 0.0512 | 0.0512 | 0.0532 | 0.0531 | |
0.03 | 0.0470 | 0.0467 | 0.0455 | 0.0458 | 0.0509 | 0.0511 | 0.0564 | 0.0565 | ||
0.05 | 0.0414 | 0.0419 | 0.0504 | 0.0503 | 0.0623 | 0.0626 | 0.0808 | 0.0808 | ||
0.10 | 0.0454 | 0.0466 | 0.0846 | 0.0851 | 0.1250 | 0.1255 | 0.2048 | 0.2051 |
As can be seen from Table 3, the empirical type I errors of both statistical tests were close to the nominal level, so both tests controlled the probability of a type I error in all situations. In addition, the empirical type I errors of the approximate test were not significantly different from those of the asymptotic test in any scenario. Regarding the power comparisons, there was essentially no difference between the empirical powers of the two statistical tests. The powers of both the asymptotic test and the approximate test decreased as the coefficient of variation $\tau$ increased, owing to the increased variability in the data, and increased as the sample size grew. The empirical powers showed no systematic trend across the values of the first parameter in Table 3 when the other factors were held fixed. Finally, the approximate test was much easier to compute than the asymptotic test, because the latter relies on an infinite summation.
5. Conclusion
In this paper, we presented two statistical tests for the reciprocal of a normal population mean with a known coefficient of variation. This situation typically arises when the coefficient of variation of a control group is known. The asymptotic test was based on the expectation and variance of the estimator of the reciprocal of a normal mean, while the approximate test was developed from the approximate expectation and variance of the estimator obtained by a Taylor series expansion. The simulation study indicated that the approximate test performs as well as the asymptotic test in terms of empirical type I errors and empirical power. However, the approximate test is computationally simpler than the asymptotic test.
Conflict of Interests
The author declares that there is no conflict of interests regarding the publication of this paper.
References
[1] E. Lamanna, G. Romano, C. Sgarbi, "Curvature measurements in nuclear emulsions," Nuclear Instruments & Methods in Physics Research, vol. 187, no. 2-3, pp. 387-391, 1981.
[2] A. Zaman, "Estimators without moments: the case of the reciprocal of a normal mean," Journal of Econometrics, vol. 15, no. 2, pp. 289-298, 1981.
[3] A. Zaman, "Admissibility of the maximum likelihood estimate of the reciprocal of a normal mean with a class of zero-one loss functions," Sankhya, vol. 47, no. 2, pp. 239-246, 1985.
[4] C. S. Withers, S. Nadarajah, "Estimators for the inverse powers of a normal mean," Journal of Statistical Planning and Inference, vol. 143, no. 2, pp. 441-455, 2013.
[5] K. Bhat, K. A. Rao, "On tests for a normal mean with known coefficient of variation," International Statistical Review, vol. 75, no. 2, pp. 170-182, 2007.
[6] V. Brazauskas, J. Ghorai, "Estimating the common parameter of normal models with known coefficients of variation: a sensitivity study of asymptotically efficient estimators," Journal of Statistical Computation and Simulation, vol. 77, no. 8, pp. 663-681, 2007.
[7] D. T. Searls, "A note on the use of an approximately known coefficient of variation," The American Statistician, vol. 21, no. 3, pp. 20-21, 1967.
[8] R. A. Khan, "A note on estimating the mean of a normal distribution with known coefficient of variation," Journal of the American Statistical Association, vol. 63, no. 323, pp. 1039-1041, 1968.
[9] A. T. Arnholt, J. L. Hebert, "Estimating the mean with known coefficient of variation," The American Statistician, vol. 49, no. 4, pp. 367-369, 1995.
[10] W. Srisodaphol, N. Tongmol, "Improved estimators of the mean of a normal distribution with a known coefficient of variation," Journal of Probability and Statistics, vol. 2012, 2012.
[11] A. Wongkhao, S. Niwitpong, S. Niwitpong, "Confidence interval for the inverse of a normal mean with a known coefficient of variation," International Journal of Mathematical, Computational, Statistical, Natural and Physical Engineering, vol. 7, no. 9, pp. 877-880, 2013.
[12] S. Weerahandi, "Generalized confidence intervals," Journal of the American Statistical Association, vol. 88, no. 423, pp. 899-905, 1993.
[13] R. Ihaka, R. Gentleman, "R: a language for data analysis and graphics," Journal of Computational and Graphical Statistics, vol. 5, no. 3, pp. 299-314, 1996.
Copyright © 2015 Wararit Panichkitkosolkul. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Abstract
An asymptotic test and an approximate test for the reciprocal of a normal mean with a known coefficient of variation were proposed in this paper. The asymptotic test was based on the expectation and variance of the estimator of the reciprocal of a normal mean. The approximate test used the approximate expectation and variance of the estimator by Taylor series expansion. A Monte Carlo simulation study was conducted to compare the performance of the two statistical tests. Simulation results showed that the two proposed tests performed well in terms of empirical type I errors and power. Nevertheless, the approximate test was easier to compute than the asymptotic test.