1. Introduction
With the continuous progress of technology, consumers place increasingly high demands on product quality. To meet market demand, reliability and life tests must be conducted to evaluate the performance and durability of products. For instance, Zhang et al. [1] conducted a reliability analysis of a copula-based partially accelerated competing risks model, and Alotaibi et al. [2] studied reliability estimation for progressively Type-II censored XLindley data. When conducting such experiments, however, we often face limitations of time, cost, and experimental conditions. These limitations prevent us from observing the complete lifespans of all products: some units may fail before or during testing, making it impossible to continue the life test. To estimate product lifetimes more accurately for reliability assessment, decision-making, or product improvement, censored samples are used in lifetime testing. Given long product lifetimes and high testing costs, more efficient and cost-effective testing methods are needed, and the concept of progressive censoring is therefore introduced to further improve the accuracy of reliability and lifetime distribution estimation. Compared with traditional censoring methods, progressive censoring offers greater flexibility in product life testing because the censored units can be adjusted gradually to better align with the distribution of product lifespans. This approach improves both testing efficiency and accuracy: it reduces the data loss caused by premature sample removal and retains more long-lived units, thus enhancing data utilization and yielding more precise lifetime estimates.
Additionally, the progressive censoring method reduces testing time, sample size, and cost while enabling more reliable risk assessment based on accurate lifespan estimation. As a result, it provides a more scientific basis for product design and decision-making while allowing long-term performance to be evaluated comprehensively. In recent research, significant progress has been made in the statistical analysis of progressive censoring experiments. For instance, Chakraborty et al. [3] conducted a comprehensive analysis of the cumulative entropy of progressive Type-II censoring (PC-II) samples. Alsadat et al. [4] investigated parameter estimation for the unit half-logistic geometric distribution under progressive Type-II right-censored samples. Lone et al. [5] studied a stress-strength reliability model based on a balanced joint progressive censoring scheme and performed parameter estimation and reliability analysis for Burr XII distributed samples using classical and Bayesian methods, confirming the effectiveness of the adopted approach. Alsadat et al. [6] analyzed properties of the Kumaraswamy modified inverse Weibull distribution under progressive first-failure censoring. Berred and Stepanov [7] examined the distributional characteristics and asymptotic behavior of exponential spacings under PC-II samples. Moreover, Alotaibi et al. [8] studied two-sample prediction for the Fréchet distribution under progressive Type-II censoring, with applications in fields such as medicine and the technical sciences. These findings demonstrate continuous advances in statistical analysis methods for progressive censoring experiments. Progressive censoring tests can be divided into progressive Type-I censoring tests and PC-II tests; this paper focuses on PC-II. Following Alotaibi et al. [8], the PC-II test can be described as follows:
Suppose n products are placed on a lifetime test. When the first failure is observed at time X1:m:n, R1 units are removed at random from the remaining n−1 surviving units. When the second failure is observed at X2:m:n, R2 units are removed from the remaining n−2−R1 surviving units, and so on. When the m-th failure is observed at Xm:m:n, the experiment concludes and all remaining Rm = n−m−R1−R2−⋯−Rm−1 units are removed.
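The scheme above can be simulated with the standard uniform-transformation algorithm for progressive Type-II censoring, which generates the PC-II uniform order statistics and maps them through the quantile function of the target distribution. The sketch below (function names are ours) uses the GIED quantile function obtained by inverting the CDF F(x) = 1 − (1 − e^(−λ/x))^β:

```python
import numpy as np

def gied_quantile(u, beta, lam):
    """Inverse CDF of the GIED: solves u = 1 - (1 - exp(-lam/x))**beta for x."""
    return -lam / np.log(1.0 - (1.0 - u) ** (1.0 / beta))

def progressive_type2_sample(n, R, beta, lam, rng=None):
    """Generate a PC-II sample X_{1:m:n} < ... < X_{m:m:n} from the GIED.

    Uses the standard algorithm: with W_i ~ U(0,1), set
    V_i = W_i**(1 / (i + R_m + ... + R_{m-i+1})); then
    U_i = 1 - V_m * V_{m-1} * ... * V_{m-i+1} are the PC-II uniform order
    statistics, which are mapped through the GIED quantile function.
    """
    R = np.asarray(R)
    m = len(R)
    assert n == m + R.sum(), "scheme must satisfy n = m + R_1 + ... + R_m"
    rng = np.random.default_rng(rng)
    W = rng.uniform(size=m)
    gammas = np.arange(1, m + 1) + np.cumsum(R[::-1])  # i + R_m + ... + R_{m-i+1}
    V = W ** (1.0 / gammas)
    U = 1.0 - np.cumprod(V[::-1])  # increasing PC-II uniform order statistics
    return gied_quantile(U, beta, lam)

# Example scheme: n = 15, m = 5, R = (2, 0, 3, 0, 5).
x = progressive_type2_sample(n=15, R=[2, 0, 3, 0, 5], beta=0.5, lam=0.5, rng=1)
```

The returned observations are automatically sorted, since the uniform order statistics are increasing and the quantile function is monotone.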
Abouammoh and Alshingiti [9] proposed the generalized inverse exponential distribution (GIED), a novel distribution that combines the generalized exponential distribution (GED) with the inverse exponential distribution (IED) and is an extended form of the IED. This distribution is widely used in survival analysis, reliability engineering, and communications [10]. Compared with the exponential distribution and the IED, the GIED introduces an additional parameter, making it more flexible and better able to fit a variety of data situations. Accordingly, the statistical properties of this distribution have been extensively examined in recent years. For example, Bakoban and Aldahlan [11] studied Bayesian estimation of the shape parameter of the GIED under complete samples. Hassan et al. [12] assumed that strength and stress are random variables following GIEDs with different shape parameters and analyzed stress-strength reliability estimation under ranked set sampling (RSS) and simple random sampling (SRS). Liu and Xi [13] studied Bayesian estimation of GIED parameters under timed censored samples. Below we provide the definition of the GIED:
Let X denote a stochastic variable which follows the GIED. The probability density function (PDF) and cumulative distribution function (CDF) of this distribution are given by:
f(x; β, λ) = (βλ/x²) e^(−λ/x) (1 − e^(−λ/x))^(β−1), x > 0,  (1)
F(x; β, λ) = 1 − (1 − e^(−λ/x))^β, x > 0.  (2)
Here β is the shape parameter, and λ is the scale parameter.
From Figs 1 and 2, it can be observed that when λ is fixed, the PDF and CDF exhibit different trends as β varies. When β = 1, the GIED reduces to the IED.
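Assuming the standard GIED form of Abouammoh and Alshingiti [9] written in Eqs (1) and (2), the density and distribution function are straightforward to evaluate, and the β = 1 reduction to the IED (whose CDF is e^(−λ/x)) can be checked directly:

```python
import math

def gied_pdf(x, beta, lam):
    """GIED density: f(x) = (beta*lam/x**2) * exp(-lam/x) * (1 - exp(-lam/x))**(beta-1)."""
    return beta * lam / x**2 * math.exp(-lam / x) * (1.0 - math.exp(-lam / x)) ** (beta - 1.0)

def gied_cdf(x, beta, lam):
    """GIED distribution function: F(x) = 1 - (1 - exp(-lam/x))**beta."""
    return 1.0 - (1.0 - math.exp(-lam / x)) ** beta

# With beta = 1 the CDF collapses to the IED form exp(-lam/x):
ied_check = gied_cdf(2.0, 1.0, 0.5) - math.exp(-0.5 / 2.0)  # should be ~0
```

A numerical derivative of the CDF also recovers the PDF, which is a convenient sanity check when transcribing the formulas.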
[Figure omitted. See PDF.]
[Figure omitted. See PDF.]
Although the GIED, as an emerging distribution, has shown great potential in fields such as survival analysis and reliability engineering and has attracted the attention of many scholars, research on statistical inference for the GIED under PC-II samples is still relatively scarce. In view of this, this article aims to address this gap and promote the application of the GIED in censored-data scenarios. We combine PC-II samples with the GIED model to achieve accurate estimation of the GIED parameters through parameter estimation and hypothesis testing while verifying the applicability of the model. Entropy, an important concept in information theory, also plays a significant role in practical applications: it can be used to evaluate the uncertainty and risk level of random variables, thereby assisting in risk assessment and decision-making [14]. Additionally, entropy can serve as a criterion for selecting distribution models, with higher entropy indicating stronger uncertainty and randomness in the data; model selection criteria can be adjusted accordingly. This article delves into estimating entropy within the GIED model while further validating its applicability through parameter estimation and hypothesis testing.
This study is based on PC-II samples and primarily explores the estimation of entropy for the GIED. In Section 1, the basic concepts and properties of PC-II experiments and the GIED model are introduced. In Section 2, expressions for the Shannon entropy and Rényi entropy of the GIED are derived and proved. In Section 3, maximum likelihood (ML) estimates of the Shannon and Rényi entropies are derived, and confidence intervals (CIs) for both are constructed using the bootstrap method. In Section 4, a gamma distribution is used as the prior, and Bayesian estimation of the Shannon and Rényi entropies is performed under the Linex loss function (LLF), entropy loss function (ELF), and DeGroot loss function (DLF); the Lindley approximation algorithm is applied to compute the Bayesian estimators (BEs). In Section 5, numerical results are obtained through Monte Carlo simulations to validate the effectiveness of the proposed estimation methods. In Section 6, the constructed model is applied to real data, and the validity of the employed estimation approaches is verified. Finally, Section 7 provides a thorough discussion of the research results and draws the corresponding conclusions.
2. Entropy under GIED
Shannon entropy, introduced by Shannon [15], is a fundamental concept in information theory that gauges the degree of uncertainty or stochasticity in information. The definition of Shannon entropy is:
H(X) = −∫ f(x) ln f(x) dx,  (3)
where f(x) represents the PDF of the continuous stochastic variable X. With the development of information theory, Shannon entropy has been widely applied in fields such as communication, data mining, machine learning, and statistical physics. Flores-Gallegos and Flores-Gómez [16] revealed the relationship between Shannon entropy and chemical hardness and applied the derived equations to molecular ensembles. Joshi [17] studied the variation of Shannon entropy under changes in the constraint potential and Debye screening parameters. Flores-Gallegos [18] analyzed trends in the first derivative of Shannon entropy with respect to electron number and spin density.
Rényi entropy is an extended form of information entropy proposed by Rényi in 1960 [19]. Unlike Shannon entropy, Rényi entropy introduces a parameter α that adjusts the properties of the entropy to some extent. The definition of Rényi entropy is:
H_α(X) = (1/(1−α)) ln ∫ f(x)^α dx,  α > 0, α ≠ 1,  (4)
where f(x) represents the PDF of the continuous stochastic variable X. Rényi entropy is an extension of entropy in information theory and plays an important role in fields such as information theory, statistics, and complex networks. Significant progress has been made in its study in the existing literature. Chennaf and Ben Amor [20] analyzed the mathematical properties of Rényi entropy and partial Rényi entropy, applying them to measure the uncertainty of uncertain random variables, and applied partial Rényi entropy to optimize the selection of uncertain random returns in finance. Tian and Xu [21] discussed the calculation of Rényi entropy in AdS3/CFT2. Kayid and Shrahili [22] used the system signature to determine the Rényi entropy of the past lifetime of an interrelated system in order to evaluate its predictability.
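Before working with the closed-form expressions of Theorems 1 and 2, the two entropies of the GIED can be evaluated directly from the definitions in Eqs (3) and (4) by numerical quadrature; this also gives a useful check on any derived formula. The sketch below (our helper names) works on the log-density for numerical stability:

```python
import numpy as np
from scipy.integrate import quad

def gied_logpdf(x, beta, lam):
    """log of the GIED density; -expm1(-lam/x) = 1 - exp(-lam/x), computed stably."""
    return (np.log(beta * lam) - 2.0 * np.log(x) - lam / x
            + (beta - 1.0) * np.log(-np.expm1(-lam / x)))

def shannon_entropy(beta, lam):
    """H = -∫ f(x) ln f(x) dx over (0, ∞), as in Eq (3)."""
    integrand = lambda x: -np.exp(gied_logpdf(x, beta, lam)) * gied_logpdf(x, beta, lam)
    return quad(integrand, 0.0, np.inf)[0]

def renyi_entropy(beta, lam, alpha):
    """H_alpha = (1/(1-alpha)) ln ∫ f(x)**alpha dx, as in Eq (4)."""
    integral = quad(lambda x: np.exp(alpha * gied_logpdf(x, beta, lam)), 0.0, np.inf)[0]
    return np.log(integral) / (1.0 - alpha)
```

For β = 1 (the IED) the Shannon entropy has the simple form ln λ + 2γ + 1 with γ the Euler-Mascheroni constant, which the quadrature reproduces; likewise H_α approaches H as α → 1.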
Theorem 1. Let X denote a stochastic variable which follows the GIED. The Shannon entropy of the GIED is given by Eq (5), where Γ(·) denotes the gamma function and ψ(·) denotes the digamma function.
Proof. See S1 Appendix.
Theorem 2. Let X denote a stochastic variable which follows the GIED. The Rényi entropy of the GIED is given by Eq (6), where Γ(·) denotes the gamma function.
Proof. See S2 Appendix.
3. ML estimation
Suppose X1:m:n < X2:m:n < ⋯ < Xm:m:n represent the m PC-II observations out of n test units in total, drawn from the GIED defined by Eq (1). According to Abo-Kasem [23], the likelihood function (LF) is:
L(β, λ) = C ∏_{i=1}^{m} f(x_{i:m:n}) [1 − F(x_{i:m:n})]^{R_i},  (7)
where C = n(n−1−R1)(n−2−R1−R2)⋯(n−m+1−R1−⋯−Rm−1) and x_{i:m:n} denotes the observed value of X_{i:m:n}.
According to Eq (7), the log-likelihood is
ℓ(β, λ) = ln C + m ln β + m ln λ − 2 Σ_{i=1}^{m} ln x_i − λ Σ_{i=1}^{m} 1/x_i + Σ_{i=1}^{m} [(β−1) + βR_i] ln(1 − e^(−λ/x_i)),
writing x_i for x_{i:m:n}. Differentiating with respect to β and λ yields Eqs (8) and (9):
∂ℓ/∂β = m/β + Σ_{i=1}^{m} (1 + R_i) ln(1 − e^(−λ/x_i)),  (8)
∂ℓ/∂λ = m/λ − Σ_{i=1}^{m} 1/x_i + Σ_{i=1}^{m} [(β−1) + βR_i] e^(−λ/x_i) / [x_i (1 − e^(−λ/x_i))].  (9)
Setting the right-hand sides of Eqs (8) and (9) to zero and solving the two equations yields the ML estimates of β and λ. The solutions of Eqs (8) and (9) exist and are unique; however, proving this directly for these nonlinear equations is challenging, so we provide evidence by visualizing the log-likelihood, as depicted in Fig 4 of the real data analysis in Section 6. Since an analytical solution to Eqs (8) and (9) is unavailable, numerical methods are used to obtain the ML estimates of β and λ. Below we describe the steps of the dichotomy (bisection) method.
Step 1: Given an error tolerance ε, determine an interval [λL, λU] such that f(λL)⋅f(λU)<0.
Step 2: Compute the midpoint λM of the interval [λL, λU] and evaluate f(λM).
Step 3: If f(λM) = 0, then λ̂ = λM and the algorithm ends; if f(λL)⋅f(λM)<0, set λU = λM; if f(λU)⋅f(λM)<0, set λL = λM.
Step 4: Repeat Steps 2 and 3 until |λL−λU|<ε, then take λ̂ = λM and the algorithm ends.
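The steps above translate directly into code. A minimal sketch of the dichotomy (bisection) routine is given below; for illustration it solves a toy equation in place of the profile score equation (9):

```python
def bisection(f, lo, hi, tol=1e-8, max_iter=200):
    """Bisection (dichotomy) root finder following Steps 1-4 above.

    Requires a bracketing interval with f(lo)*f(hi) < 0 (Step 1); the interval
    is halved (Steps 2-3) until its width falls below the tolerance (Step 4).
    """
    if f(lo) * f(hi) >= 0:
        raise ValueError("f must change sign on [lo, hi]")
    for _ in range(max_iter):
        mid = (lo + hi) / 2.0
        fm = f(mid)
        if fm == 0.0 or (hi - lo) < tol:
            return mid
        if f(lo) * fm < 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2.0

# Toy equation standing in for Eq (9): find the positive root of t**2 - 2 = 0.
root = bisection(lambda t: t * t - 2.0, 1.0, 2.0)
```

In the actual estimation, f would be the score function in λ from Eq (9), with β profiled out or updated alternately from Eq (8).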
[Figure omitted. See PDF.]
By the invariance property of ML estimation, substituting the parameter estimates β̂ and λ̂ obtained from the dichotomy method into Eqs (5) and (6) gives the ML estimates of the entropies:(10)(11)
In statistics, CIs describe the range of uncertainty in parameter estimation results, representing the range within which the true parameter value lies at a given confidence level. There are various methods for constructing CIs, and the appropriate choice depends mainly on the type of parameter being estimated and the sample distribution. The bootstrap method is a resampling technique used to estimate the sampling distribution of a statistic, construct CIs, and perform hypothesis tests. Virtual sample sets are generated by repeatedly sampling with replacement from the original sample, thereby avoiding assumptions about the population distribution; repeated estimation over these sets yields the sampling distribution of the parameter estimate, from which CIs can be constructed. Abundant research has been conducted on the bootstrap method. Kanwal and Abbas [24] investigated parameter estimation of the Frechet-distributed process capability index and established a corresponding bootstrap CI. Li et al. [25] proposed a novel bootstrap method for small-sample hydrologic frequency analysis, demonstrating its superior accuracy over traditional methods, particularly with limited sample sizes. Hwang et al. [26] effectively addressed uncertainty in landslide probability analysis caused by insufficient data using the bootstrap method and combined it with point estimation to propose a new approach. Su et al. [27] introduced an improved bootstrap method for estimating fatigue properties of materials and components under small sample sizes. Dudorova et al. [28] employed bootstrap analysis to study the preferences of Egyptian fruit bat pups based on behavioral test data.
These studies provide valuable application cases and theoretical foundations for diverse applications of the bootstrap method. Maiti et al. [29] estimated the Shannon entropy and Rényi entropy of the GED using progressively censored data and constructed CIs for entropy using the bootstrap method.
In this section, we utilize the bootstrap-t (Boot-t) method to construct CIs for the GIED’s Shannon entropy and Rényi entropy [30]. The specific algorithm is presented in Table 1.
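As a sketch of the Boot-t idea in Table 1: each bootstrap replicate is studentized by its own standard-error estimate, and the quantiles of the studentized statistics are used to invert a CI. The generic routine below uses the sample mean and its plug-in standard error as a stand-in statistic (the paper instead studentizes the entropy estimate with a Fisher-matrix variance, derived next); the function names are ours:

```python
import numpy as np

def boot_t_ci(data, stat, se, B=2000, level=0.95, rng=None):
    """Bootstrap-t CI for stat(data), studentized by the s.e. estimate se(data)."""
    rng = np.random.default_rng(rng)
    theta, s = stat(data), se(data)
    t_stats = []
    for _ in range(B):
        boot = rng.choice(data, size=len(data), replace=True)
        t_stats.append((stat(boot) - theta) / se(boot))
    lo_q, hi_q = np.quantile(t_stats, [(1 - level) / 2, (1 + level) / 2])
    # Invert the studentized quantiles around the original estimate.
    return theta - hi_q * s, theta - lo_q * s

mean = lambda d: float(np.mean(d))
se_mean = lambda d: float(np.std(d, ddof=1) / np.sqrt(len(d)))
```

Swapping in the entropy estimator of Eqs (10)-(11) and its Fisher-matrix standard error recovers the algorithm of Table 1.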
[Figure omitted. See PDF.]
Provide the definition:
Therefore, the 100(1−θ)% Boot-t CIs for Shannon entropy and Rényi entropy are obtained as above, respectively. The variance of the entropy can be obtained from the Fisher matrix, which we calculate next:
Construct the Fisher matrix:where:
We can obtain the inverse matrix of the Fisher matrix, denoted as . Since taking derivatives directly with respect to β and λ in Eqs (5) and (6) can be quite complicated, we rewrite Eqs (5) and (6) as follows, it should be noted that, for the sake of simplicity, we make .
(12)(13)
Therefore, from Eqs (12) and (13), we have:
Therefore, we obtain the variances of the entropy estimates as above, where the two vectors denote the transposes of TS and TR, respectively.
4. Bayesian estimation of entropy
Bayesian estimation is a parameter estimation method based on Bayes' theorem [31]. It combines prior information with sample data to update the prior distribution into a posterior distribution, from which parameter estimates and their uncertainties are inferred. As a common statistical inference method, it has the advantage of flexibly combining prior knowledge and observed data to provide more accurate inferences, and it is widely used to handle uncertainty, model selection, and prediction in fields such as statistics, machine learning, image and signal processing, and financial risk assessment [32–34]. In this section, we derive the posterior density function of the GIED based on PC-II samples and use Bayesian estimation to obtain estimates of the Shannon and Rényi entropies under three loss functions: LLF, ELF, and DLF.
4.1. Conditional posterior distribution
Based on the principles of Bayesian estimation, selecting an appropriate prior distribution is the core step: it allows previous knowledge, experience, or data to be fused into parameter inference, providing more accurate, reliable, and comprehensive estimation results. In this section, we use informative priors. Compared with non-informative priors, informative priors provide more guidance and accuracy, thereby improving the reliability of the parameter estimation results [35]. In this study, we adopt the gamma distribution as the prior because of its flexibility and conjugate nature, which yields a simplified posterior distribution when combined with the LF and greatly facilitates Bayesian estimation. Assuming β and λ are independent random variables following Γ(η1,γ1) and Γ(η2,γ2) distributions, respectively, their density functions can be represented as follows:
The joint prior on β and λ is given by:
By applying Bayesian theorem, the joint posterior density of β and λ can be determined as:where .
In the decision-making process, we often need to consider the risks and losses associated with different decisions. Introducing a loss function helps quantify these risks during estimation so that an optimal decision can be found. In this section, we introduce three loss functions: LLF, ELF, and DLF. The LLF is an asymmetric loss function that combines the characteristics of exponential and linear loss and allows the response to estimation errors to be balanced through a parameter. The ELF is based on the concept of information entropy and measures the discrepancy between estimated and true values. The DLF is a common asymmetric loss function proposed by DeGroot [36], often used in Bayesian decision theory to compare decision strategies; compared with the LLF and ELF, it is relatively simple to compute. We therefore choose these three loss functions to jointly consider estimation error, information content, and practicality, and we seek the most suitable loss function for GIED entropy estimation through comparative analysis. The BEs under these loss functions are provided below [37–39] (see Table 2).
[Figure omitted. See PDF.]
Here, H represents the entropy function, represents the estimates of H, and c represents the hyperparameter of the LLF.
(1) Under the LLF, the BEs of Shannon entropy and Rényi entropy are as follows:(14)(15)
(2) Under the ELF, the BEs of Shannon entropy and Rényi entropy are as follows:(16)(17)
(3) Under the DLF, the BEs of Shannon entropy and Rényi entropy are as follows:(18)(19)
Indeed, the aforementioned BEs take the form of ratios of double integrals and have no explicit closed form, making direct computation difficult. Therefore, we employ the Lindley approximation algorithm to compute the BEs of the Shannon and Rényi entropies.
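As an alternative to the Lindley route used below, when posterior draws of the entropy are available (e.g., from MCMC), the standard Bayes rules under these losses reduce to simple sample averages: −(1/c) ln E[e^(−cH)] for the LLF, (E[H^(−1)])^(−1) for the ELF, and E[H²]/E[H] for the DLF. A minimal sketch, with `h_draws` a hypothetical array of posterior entropy draws:

```python
import numpy as np

def bayes_estimates(h_draws, c=2.0):
    """Monte Carlo Bayes estimates of an entropy H under the LLF, ELF, and DLF.

    LLF: -(1/c) * ln E[exp(-c*H)]   (Linex loss)
    ELF: 1 / E[1/H]                 (entropy loss)
    DLF: E[H**2] / E[H]             (DeGroot loss)
    """
    h = np.asarray(h_draws, dtype=float)
    llf = -np.log(np.mean(np.exp(-c * h))) / c
    elf = 1.0 / np.mean(1.0 / h)
    dlf = np.mean(h ** 2) / np.mean(h)
    return {"LLF": llf, "ELF": elf, "DLF": dlf}
```

For a degenerate posterior all three estimators coincide with the point mass, while for a dispersed posterior the ELF estimate falls below the posterior mean and the DLF estimate above it, reflecting the asymmetry of the losses.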
4.2. Lindley approximation
The Lindley approximation algorithm is a Bayesian approximation method proposed by Lindley for computing parameter estimates. According to Abo-Kasem [23], the Lindley approximation equation is:(20)where φ(β,λ) is a function of β and λ, ρ(β,λ) is the logarithm of the joint prior distribution of β and λ, and l(β,λ|x) is the logarithm of the LF. Here β̂ and λ̂ are the ML estimates of β and λ, and subscripts denote partial derivatives; for example, φβ is the first-order partial derivative of φ(β,λ) with respect to β. The remaining terms are denoted as follows:
Based on the given equations, the following representation holds for Eqs (14)–(19):
(1) Computing the BE of Shannon entropy under LLF:
In that case, we have , thus
Substitute the above equations into Eq (20) to obtain , and then the BE of Shannon entropy under LLF can be obtained from Eq (14).
(2) Computing the BE of Rényi entropy under LLF:
In that case, we have , thus
Substitute the above equations into Eq (20) to obtain , and then the BE of Rényi entropy under LLF can be obtained from Eq (15).
(3) Computing the BE of Shannon entropy under ELF:
In that case, we have , thus
Substitute the above equations into Eq (20) to obtain , and then the BE of Shannon entropy under ELF can be obtained from Eq (16).
(4) Computing the BE of Rényi entropy under ELF:
In that case, we have , thus
Substitute the above equations into Eq (20) to obtain , and then the BE of Rényi entropy under ELF can be obtained from Eq (17).
(5) Computing the BE of Shannon entropy under DLF:
In that case, we have , letting , ϕ2 = HS, thus
Substitute the above equations into Eq (20) to obtain . Next, we calculate ϕ2 = HS.
Substitute the above equations into Eq (20) to obtain E[HS|x], and then the BE of Shannon entropy under DLF can be obtained from Eq (18).
(6) Computing the BE of Rényi entropy under DLF:
In that case, we have , letting , ϕ4 = HR, thus
Substitute the above equations into Eq (20) to obtain . Next, we calculate ϕ4 = HR.
Substitute the above equations into Eq (20) to obtain E[HR|x], and then the BE of Rényi entropy under DLF can be obtained from Eq (19).
5. Monte Carlo modeling
Monte Carlo simulation is a statistical method based on random sampling: a large number of random samples are generated, and inference is carried out on these samples to obtain approximate results. In this section, we combine the Monte Carlo method with the estimation methods of this paper to calculate the average estimates (AEs) and corresponding mean square errors (MSEs) of the Shannon and Rényi entropies. For further analysis, we use the bootstrap method to obtain the average width (AW) and coverage probability (CP) of the entropy CIs. First, we set the true parameter values β = 0.5 and λ = 0.5 and specify the censoring schemes (see Table 3). Based on the given censoring schemes, we generate PC-II data using the algorithm of Wang and Gui [40] and calculate the AEs and MSEs of the GIED model parameters (see Tables 4 and 5). On this basis, we assume hyperparameters a1 = b1 = a2 = b2 = 1, the LLF hyperparameter c = 2, and the Rényi parameter α = 1.5, and we conduct 1000 repeated experiments with different sample sizes to obtain the ML and Bayesian estimates of the Shannon and Rényi entropies, together with the corresponding MSEs (see Tables 6 and 7). For brevity, we denote the ML estimates as MLE and the Bayesian estimates under the three loss functions as LBe, EBe, and DBe. When constructing CIs with the bootstrap method, we set significance levels θ = 0.05 and 0.1 and resampled the original data with replacement, with observed sample sizes m1 = 30, m2 = 50, m3 = 80 and B = 5000 repeated samples. Tables 8 and 9 report the AW and CP of the 100(1−θ)% CIs for the Shannon and Rényi entropies, respectively. Through these analyses, we can gain a deeper understanding of the accuracy and reliability of the entropy estimation.
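The AE/MSE computation is a simple replication loop. A minimal sketch with a toy estimator (the sample mean of exponential data standing in for the entropy estimators of this paper):

```python
import numpy as np

def monte_carlo_ae_mse(true_value, estimator, simulate, reps=1000, rng=None):
    """Average estimate (AE) and mean squared error (MSE) over repeated samples."""
    rng = np.random.default_rng(rng)
    ests = np.array([estimator(simulate(rng)) for _ in range(reps)])
    return ests.mean(), float(np.mean((ests - true_value) ** 2))

# Toy illustration: AE and MSE of the sample mean of n = 30 Exp(1) draws,
# whose theoretical MSE is 1/30 ≈ 0.033.
ae, mse = monte_carlo_ae_mse(
    true_value=1.0,
    estimator=np.mean,
    simulate=lambda rng: rng.exponential(1.0, size=30),
    reps=1000,
    rng=42,
)
```

In the simulations of this section, `simulate` would generate a PC-II sample under a given censoring scheme and `estimator` would return the MLE or one of the Bayesian entropy estimates.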
[Figure omitted. See PDF.]
[Figure omitted. See PDF.]
[Figure omitted. See PDF.]
[Figure omitted. See PDF.]
[Figure omitted. See PDF.]
[Figure omitted. See PDF.]
[Figure omitted. See PDF.]
Drawing upon the data presented in the aforementioned tables, the following research findings can be deduced:
(1) In both parameter estimation and entropy estimation, Bayesian estimation performs better than ML estimation overall. Specifically, for parameter estimation, Bayesian estimation under the LLF performs best; for entropy estimation, Bayesian estimation under the DLF performs best.
(2) When the total number of test units was fixed, the MSEs of the parameter and entropy estimates decreased as the observed sample size increased, indicating that estimation accuracy improves with sample size.
(3) Tables 8 and 9 show that the AW of the CIs for Shannon and Rényi entropy gradually decreases as the observed sample size increases, while the AW widens as the confidence level increases. At the same time, the CP of the Shannon and Rényi entropy CIs varies with the confidence level, reaching its highest value when θ = 0.1.
6. Analyzing the data
Here, the estimation methods used in this paper are demonstrated on actual data. The data set is as follows: 1.05, 2.92, 3.61, 4.20, 4.49, 6.72, 7.31, 9.08, 9.11, 14.49, 16.85, 18.82, 26.59, 30.26, 41.34. These data represent the survival times (in months) of Hodgkin's disease patients undergoing intensive treatment with nitrogen mustard; see Bakoban and Abubaker [41] for details. To assess the suitability of the GIED model for this dataset, we computed the Kolmogorov-Smirnov (KS) statistic for the GIED, inverse Weibull distribution (IWD), exponentiated Weibull distribution (EWD), and Weibull distribution (WD). The p-value derived from the KS statistic was used as the criterion for selecting the optimal model; the specific values are presented in Table 10. Fig 3 illustrates the empirical distribution function of the dataset together with the fitted CDF of each model. It is evident from Table 10 and Fig 3 that the GIED model provides a reasonable fit for this dataset.
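The GIED leg of this comparison can be sketched as follows: fit β and λ to the complete sample by maximizing the log-likelihood, then run a KS test against the fitted CDF. This is an illustrative sketch, not the paper's exact computation; the starting values are our own guesses, and the paper's Table 10 values may differ:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import kstest

# Survival times (months) of Hodgkin's disease patients [41].
data = np.array([1.05, 2.92, 3.61, 4.20, 4.49, 6.72, 7.31, 9.08, 9.11,
                 14.49, 16.85, 18.82, 26.59, 30.26, 41.34])

def gied_cdf(x, beta, lam):
    return 1.0 - (1.0 - np.exp(-lam / x)) ** beta

def neg_loglik(params):
    """Negative complete-sample GIED log-likelihood."""
    beta, lam = params
    if beta <= 0 or lam <= 0:
        return np.inf
    z = np.exp(-lam / data)
    return -np.sum(np.log(beta * lam) - 2 * np.log(data) - lam / data
                   + (beta - 1) * np.log1p(-z))

res = minimize(neg_loglik, x0=[1.0, 5.0], method="Nelder-Mead")
beta_hat, lam_hat = res.x
ks = kstest(data, lambda x: gied_cdf(x, beta_hat, lam_hat))
```

`ks.statistic` and `ks.pvalue` then play the role of the Table 10 entries for the GIED; the same recipe with the IWD, EWD, and WD CDFs completes the comparison.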
[Figure omitted. See PDF.]
To validate the performance of the proposed estimation methods, and considering the validity of the data and the diversity of censoring schemes, we generated samples with m = 7 observations from the provided real dataset. The choice m = 7 is based on model-performance considerations: simulation experiments showed that this value maintains reasonable computational efficiency while ensuring estimation accuracy and suits the characteristics and requirements of our dataset. The observations were then censored according to the schemes defined in Table 3, providing the basis for the subsequent statistical analysis, as detailed in Table 11. To demonstrate that the solution of the ML estimation exists and is unique, we chose censoring scheme II and visualized the log-LF; see Fig 4 for details. Table 12 shows the ML and Bayesian estimates of the entropies based on the actual data, with hyperparameters a1 = b1 = a2 = b2 = 1, c = 2, and α = 1.5. Table 13 presents the upper and lower bounds of the bootstrap CIs for the entropies at different confidence levels, with the number of repeated samples set to 5000.
[Figure omitted. See PDF.]
[Figure omitted. See PDF.]
[Figure omitted. See PDF.]
[Figure omitted. See PDF.]
7. Conclusions
In this paper, we discussed the estimation of the Shannon and Rényi entropies of the GIED based on PC-II samples. We first introduced the PC-II experiment and the GIED model and deduced the ML estimation expressions for the entropies. We used the dichotomy method to obtain the ML estimates of the parameters and, by the invariance property of ML estimation, the ML estimates of the entropies. Next, for the purpose of evaluating the accuracy and precision of the entropy estimates, we used the bootstrap method to obtain CIs for the Shannon and Rényi entropies. In Bayesian estimation, we introduced three loss functions, LLF, ELF, and DLF, to assess the disparities between estimated and actual values, helping us evaluate model performance and make optimal choices. However, because the Bayesian estimators of entropy under these loss functions are complex, direct computation was challenging, so we used the Lindley approximation algorithm to compute the Bayesian estimates. Finally, we conducted Monte Carlo simulation experiments to obtain the estimates and corresponding MSEs of the Shannon and Rényi entropies, and we analyzed and compared the performance of the different estimation methods under the censoring schemes.
The analysis of GIED entropy estimation in PC-II samples helps us understand and describe the uncertainty and information content of the GIED in such sample scenarios. These measures characterize the distribution, enabling a more comprehensive understanding of the information content and statistical features of censored sample data in analysis and modeling. With the obtained entropy values, we can better analyze the data characteristics of PC-II samples and carry out model selection, parameter estimation, and predictive analysis in relevant applications, improving the overall interpretability of the data. Furthermore, entropy estimation in distribution models plays a crucial role in evaluating product reliability. A low entropy value indicates a stable product lifespan and hence superior reliability, whereas a high entropy value suggests a need for optimization and improvement of the product. Entropy is also applicable to assessing the risk of product failure, since higher entropy signifies greater uncertainty about the product's lifespan and consequently a higher risk of failure. Based on this understanding, targeted risk-control strategies can be developed to minimize potential risks.
Supporting information
S1 Appendix. Prove Theorem 1.
https://doi.org/10.1371/journal.pone.0311129.s001
(DOCX)
S2 Appendix. Prove Theorem 2.
https://doi.org/10.1371/journal.pone.0311129.s002
(DOCX)
References
1. Zhang CF, Wang L, Bai XC, Huang JA. Bayesian reliability analysis for copula based step-stress partially accelerated dependent competing risks model. Reliab Eng Syst Safe. 2022; 227: 108718.
2. Alotaibi R, Nassar M, Elshahhat A. Reliability estimation under normal operating conditions for progressively type-II XLindley censored data. Axioms. 2023; 12: 352.
3. Chakraborty S, Bhattacharya R, Pradhan B. Cumulative entropy of progressively Type-II censored order statistics and associated optimal life testing-plans. Statistics. 2023; 57: 161–174.
4. Alsadat N, Ramadan DA, Almetwally EM, Tolba AH. Estimation of some lifetime parameters of the unit half logistic-geometry distribution under progressively Type-II censored data. J Radiat Res Appl Sci. 2023; 16: 100674.
5. Lone SA, Panahi H, Anwar S, Shahab S. Inference of reliability model with Burr Type XII distribution under two sample balanced progressive censored samples. Phys Scripta. 2024; 99: 025019.
6. Alsadat N, Abu-Moussa M, Sharawy A. On the study of the recurrence relations and characterizations based on progressive first-failure censoring. AIMS Math. 2024; 9: 481–494.
7. Berred A, Stepanov A. Asymptotic properties of lower exponential spacings under Type-II progressive censoring. Commun Stat-Theor M. 2022; 51: 4841–4853.
8. Alotaibi R, AL-Dayian GR, Almetwally EM, Rezk H. Bayesian and non-Bayesian two-sample prediction for the Fréchet distribution under progressive type II censoring. AIP Adv. 2024; 14: 015137.
9. Abouammoh AM, Alshingiti AM. Reliability estimation of generalized inverted exponential distribution. J Stat Comput Sim. 2009; 79: 1301–1315.
10. Alqallaf FA, Kundu D. A bivariate inverse generalized exponential distribution and its applications in dependent competing risks model. Commun Stat-Simul C. 2022; 51: 7019–7036.
11. Bakoban RA, Aldahlan MA. Bayesian approximation techniques for the generalized inverted exponential distribution. Intell Autom Soft Co. 2022; 31: 129–142.
12. Hassan AS, Alsadat N, Elgarhy M, Chesneau C, Nagy HF. Analysis of R = P[Y<X<Z] using ranked set sampling for a generalized inverse exponential model. Axioms. 2023; 12: 302.
13. Liu H, Xi CX. Bayesian estimation of parameters for the generalized inverse exponential distribution under timed censoring samples. Stat Decis. 2023; 12: 302.
14. Petropoulos C, Patra LK, Kumar S. Improved estimators of the entropy in scale mixture of exponential distributions. Braz J Probab Stat. 2020; 34: 580–593.
15. Shannon CE. A mathematical theory of communication. Bell Syst Tech J. 1948; 27: 379–423.
16. Flores-Gallegos N, Flores-Gómez L. An approach to chemical hardness through Shannon’s entropy. J Math Chem. 2023; 61: 1726–1738.
17. Joshi R. Shannon entropy for endohedrally confined hydrogen atom embedded in Debye plasma. Eur Phys J Plus. 2023; 138: 760.
18. Flores-Gallegos N. On the information obtained using Shannon’s entropy through spin density. J Math Chem. 2023; 61: 1532–1544.
19. Rényi A. On measures of information and entropy. Berkeley Symp on Math Statist and Prob. 1960; 547–561.
20. Chennaf S, Ben Amor J. Rényi entropy of uncertain random variables and its application to portfolio selection. Soft Comput. 2023; 27: 11569–11585.
21. Tian J, Xu XG. Negative Rényi entropy and brane intersection. J High Energy Phys. 2023; 2023: 142.
22. Kayid M, Shrahili M. Rényi entropy for past lifetime distributions with application in inactive coherent systems. Symmetry. 2023; 15: 1310.
23. Abo-Kasem OE, El Saeed AR, El Sayed AI. Optimal sampling and statistical inferences for Kumaraswamy distribution under progressive Type-II censoring schemes. Sci Rep-UK. 2023; 13: 12063. pmid:37495654
24. Kanwal T, Abbas K. Bootstrap confidence intervals of process capability indices Spmk, Spmkc and Cs for Fréchet distribution. Qual Reliab Eng Int. 2023; 39: 2244–2257.
25. Li HL, Qian LX, Yang JH, Dang SZ, Hong M. Parameter estimation for univariate hydrological distribution using improved bootstrap with small samples. Water Resour Manag. 2023; 37: 1055–1082.
26. Hwang IT, Park HJ, Lee JH. Probabilistic analysis of rainfall-induced shallow landslide susceptibility using a physically based model and the bootstrap method. Landslides. 2023; 20: 829–844.
27. Su WH, Liu XT, Xu S, Wang JB, Wang ZQ. Uncertainty for fatigue life of low carbon alloy steel based on improved bootstrap method. Fatigue Fract Eng M. 2023; 46: 3858–3871.
28. Dudorova AV, Podgrudkov DA, Kruchenkova EP. Bootstrap method application in behavioral data analysis and the data visualization method of choice from three objects. Zool ZH. 2023; 102: 349–353.
29. Maiti K, Kayal S, Kundu D. Statistical inference on the Shannon and Rényi entropy measures of generalized exponential distribution under the progressive censoring. SN Comput Sci. 2022; 3: 317.
30. Yousef MM, Fayomi A, Almetwally EM. Simulation techniques for strength component partially accelerated to analyze stress-strength model. Symmetry-Basel. 2023; 15: 1183.
31. Habeeb SB, Abdullah FK, Shalan RN, Hassan AS, Almetwally EM, Alghamdi FM, et al. Comparison of some Bayesian estimation methods for type-I generalized extreme value distribution with simulation. Alex Eng J. 2024; 98: 356–363.
32. Wei D, Zhan PD. Bayesian estimation for the random moderation model: effect size, coverage, power of test, and type-I error. Front Psychol. 2023; 14: 1048842. pmid:37465494
33. Thach NN. Applying Monte Carlo simulations to a small data analysis of a case of economic growth in COVID-19 times. Sage Open. 2023; 13: 21582440231181540. pmid:37362768
34. Iqbal A, Shad MY, Yassen MF. Empirical E-Bayesian estimation of hierarchical Poisson and gamma model using scaled squared error loss function. Alex Eng J. 2023; 69: 289–301.
35. Patra LK, Kayal S, Kumar S. Measuring uncertainty under prior information. IEEE T Inform Theory. 2020; 66: 2570–2580.
36. DeGroot MH. Optimal statistical decisions. New York: McGraw-Hill; 2005.
37. Al-Bossly A. E-Bayesian and Bayesian estimation for the Lomax distribution under weighted composite linex loss function. Comput Intell Neurosci. 2021; 2021: 2101972. pmid:34931123
38. Han M. A note on the posterior risk of the entropy loss function. Appl Math Model. 2023; 117: 705–713.
39. Kazmi SMA, Aslam M. Bayesian estimation for 3-component mixture of generalized exponential distribution. Iran J Sci Technol Trans A. 2019; 43: 1761–1788.
40. Wang XJ, Gui WH. Bayesian estimation of entropy for Burr Type XII distribution under progressive Type-II censored data. Mathematics. 2021; 9: 313.
41. Bakoban RA, Abubaker MI. On the estimation of the generalized inverted Rayleigh distribution with real data applications. Int J Electron Commun Comput Eng. 2015; 6: 502–508.
Citation: Gong Q, Yin B (2024) Statistical inference of entropy functions of generalized inverse exponential model under progressive type-II censoring test. PLoS ONE 19(9): e0311129. https://doi.org/10.1371/journal.pone.0311129
About the Authors:
Qin Gong
Roles: Conceptualization, Data curation, Formal analysis, Methodology, Software, Visualization, Writing – original draft, Writing – review & editing
Affiliation: College of Science, Jiangxi University of Science and Technology, Ganzhou, China
Bin Yin
Roles: Funding acquisition, Investigation, Project administration, Resources, Supervision, Validation, Writing – review & editing
E-mail: [email protected]
Affiliation: Teaching Department of Basic Subjects, Jiangxi University of Science and Technology, Nanchang, China
ORCID: https://orcid.org/0000-0003-4928-8634
© 2024 Gong, Yin. This is an open access article distributed under the terms of the Creative Commons Attribution License: http://creativecommons.org/licenses/by/4.0/ (the “License”), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Abstract
This article explores the estimation of Shannon entropy and Rényi entropy for the generalized inverse exponential distribution (GIED) under progressive Type-II censored samples. First, we analyze the maximum likelihood estimation and interval estimation of Shannon entropy and Rényi entropy for the GIED, using the bootstrap method to construct confidence intervals for both entropy measures. Next, we select the gamma distribution as the prior and apply the Lindley approximation algorithm to compute Bayesian estimates of Shannon entropy and Rényi entropy under different loss functions, namely the Linex, entropy, and DeGroot loss functions. We then use simulation to obtain the estimates and the corresponding mean squared errors of Shannon entropy and Rényi entropy in the GIED model. The results show that the estimation accuracy of Shannon entropy and Rényi entropy for the GIED is relatively high under the DeGroot loss function, and that Bayesian estimation generally performs better than maximum likelihood estimation. Finally, we demonstrate the effectiveness of the proposed estimation methods in practical applications using a set of real data.
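To make the bootstrap step concrete, a parametric percentile bootstrap for an entropy estimate can be sketched as follows. This is a simplified stand-in, not the paper’s procedure: it uses complete (uncensored) samples from the α = 1 special case of the GIED (the inverted exponential distribution), for which the MLE of λ and the Shannon entropy 1 + 2γ + ln λ are available in closed form; all function names here are our own.

```python
import numpy as np

EULER_GAMMA = 0.5772156649015329

def rinvexp(n, lam, rng):
    """Draw from the inverted exponential (GIED with alpha = 1): X = 1/Y with Y ~ Exp(lam)."""
    return 1.0 / rng.exponential(scale=1.0 / lam, size=n)

def mle_lambda(x):
    """MLE of lam for the inverted exponential: lam_hat = n / sum(1/x_i)."""
    return len(x) / np.sum(1.0 / x)

def entropy(lam):
    """Closed-form Shannon entropy of the inverted exponential: 1 + 2*gamma + ln(lam)."""
    return 1.0 + 2.0 * EULER_GAMMA + np.log(lam)

def percentile_bootstrap_ci(x, level=0.95, B=2000, seed=0):
    """Plug-in entropy estimate with a parametric percentile-bootstrap confidence interval."""
    rng = np.random.default_rng(seed)
    lam_hat = mle_lambda(x)
    # Resample B datasets from the fitted model and recompute the plug-in estimate
    boot = np.array([entropy(mle_lambda(rinvexp(len(x), lam_hat, rng)))
                     for _ in range(B)])
    lo, hi = np.percentile(boot, [100 * (1 - level) / 2, 100 * (1 + level) / 2])
    return entropy(lam_hat), (lo, hi)

# Example: simulate data with lam = 2 (true entropy approx. 2.8476) and interval-estimate it
data = rinvexp(200, 2.0, np.random.default_rng(42))
est, (lo, hi) = percentile_bootstrap_ci(data)
```

Under progressive Type-II censoring the likelihood, and hence the MLE step inside the bootstrap loop, would be replaced by the censored-sample likelihood used in the paper; the percentile mechanics are unchanged.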