Abstract: Software reliability and cybersecurity are critical to system integrity. Security violations in defense systems are an especially grave threat to national security and the focus of significant resources. Major defense acquisition programs (MDAP) are those that meet or exceed Acquisition Category One (ACAT I), which is determined by a cost estimate in excess of hundreds of millions of dollars. Inadequate cybersecurity has contributed to at least one MDAP declaring a Nunn-McCurdy Breach, which requires (i) Congress be notified when the cost per unit increases more than 25% beyond what was originally estimated and (ii) program termination for cost growth greater than 50%. Achieving cybersecurity cost effectively is therefore critical to the national defense and economic well-being of the United States. This paper presents an open source tool to support the quantitative assessment of software reliability and cybersecurity as well as the underlying mathematical theory and algorithmic details. The tool enables assessment of a system's security from penetration testing data and can be used to estimate the number of vulnerabilities remaining within the software as well as the additional penetration testing required to reduce the rate of vulnerability exploitation to a desired level with a specified level of confidence. This top down approach can be applied to systems such as vehicles as well as information systems, including those that must safeguard defense facilities, their contractors, and other government buildings. This approach will enable organizations that acquire software to establish quantitative requirements that can be included in contracts, providing clear thresholds for software and system developers to achieve. 
The tool will enable contractors to regularly assess the security of their software with respect to requirements, thereby facilitating the identification and reporting of programs that may fail to achieve contractually specified security objectives. This regular assessment and reporting will enable closer collaboration between government agencies and contractors to ensure that systems achieve a desired level of security to reduce the risk of cost and schedule overruns that would otherwise threaten deployment of secure systems.
Keywords: cyber security, software reliability, quantitative assessment, open source, defense acquisition
1. Introduction
With the ever increasing dependency on technology and proliferation of software systems and services, security continues to gain attention as a top concern because computer systems and networks must rely on the security of the software running on them. Network security is critical, as information leaks and access to other confidential resources could compromise national security and economic stability. Therefore, it is important to assess the reliability and security of software before release in order to predict the number of faults and vulnerabilities that may be encountered during field operations. Such a systematic approach is necessary to minimize the number and severity of loopholes that would allow an attacker to interfere with normal system function or gain access to data or even control of the system.
Several studies have analyzed internet security trends and provided recommendations on how to improve security (Howard, 1997) through an investigation of security related incidents reported in databases such as the CERT (Computer Emergency Readiness Team) (CERT, 2016). A similar study specific to the security of defense organizations was conducted by the GAO (Government Accountability Office) (1996). Several studies to model vulnerability have also been performed. For example, vulnerability exploitation models (VEMs) (Browne et al., 2001) were first introduced approximately 15 years ago, in which the number of computer security exploits reported in CERT are modeled to analyze the security exploits. Vulnerability discovery models were proposed to consider the vulnerability discovery process as a function of time, including their ability to predict future rates of exploitation (Alhazmi et al., 2005). The majority of these vulnerability discovery models assume that the number of vulnerabilities discovered follows a non-homogeneous Poisson process (NHPP), which has also been used for decades to model the discovery of faults during software testing. Vulnerability models that have been proposed include the Anderson Thermodynamic model (Anderson, 2002), the Alhazmi-Malaiya logistic model (Alhazmi et al., 2005), two trend models examined by Rescorla (2005), namely the Rescorla Exponential and Rescorla Linear models, and an effort-based vulnerability discovery model (Alhazmi et al., 2005). Alhazmi et al. (2005) also compared the logarithmic Poisson model (Musa and Okumoto, 1984), alternatively known as the Musa-Okumoto model, with other vulnerability models. More recently, Okamura et al. (2009) extended the vulnerability life cycle model proposed by Cavusoglu et al. (2006, 2008) by applying a non-homogeneous vulnerability discovery process to determine the optimal security patch release time based on cost analysis.
Despite this growing body of research on modeling and vulnerability analysis, a major challenge remains: making these models easy enough for users to apply in practice to the systems in their day-to-day work, even though users often lack sufficient knowledge of the models and their underlying mathematics.
This paper describes the Software Failure and Reliability Assessment Tool (SFRAT) (Nagaraju et al., 2016), an open source tool, and discusses its applicability to vulnerability assessment. Data collected from the National Vulnerability Database (NVD, 2016) are used to illustrate the concepts. We apply the SFRAT to analyze vulnerability data. Trend tests to examine the data for reliability growth and the remainder of the tool workflow are then presented in the context of the inflection S-shaped software reliability growth model (SRGM). This study demonstrates the applicability of the software reliability tool to assess vulnerability data, suggesting that reliability and security researchers share overlapping goals and can benefit from a common platform to perform system level assessments of reliability and security. In doing so, insights from software testing and penetration testing data will enable inferences about software systems that are quantitative and that encompass both reliability and security in an integrated manner.
The remainder of the paper is organized as follows: Section 2 provides an overview of the Software Failure and Reliability Assessment Tool. Section 3 discusses vulnerability modeling and Section 4 presents an overview of the software reliability growth models as well as parameter estimation methods including an algorithmic approach to identify initial parameter estimates. Section 5 applies the tool to vulnerability data. Section 6 offers conclusions and directions for future research.
2. Software failure and reliability assessment tool (SFRAT)
SFRAT (Nagaraju et al., 2016) is an open source tool implemented in the R (CRAN, 2016) statistical programming language and is accessible at http://sasdlc.org/lab/projects/srt.html. SFRAT has been designed to enable researchers and practitioners to use the tool and to contribute additional models following a defined process. The Shiny (Shiny, 2016) user interface provides an intuitive workflow that allows users with little knowledge of software reliability engineering to apply software reliability and security growth modeling in their work. SFRAT can be used to model and predict the reliability and security of software during test and evaluation. It allows users to answer practical questions during testing such as:
* Is the software ready to release (has it achieved specified reliability and security goals)?
* How much more time or testing effort would be required to achieve specified goals?
* What would the consequences to the system's operational reliability and security be if insufficient testing resources are allocated?
SFRAT can be used on computers running Windows, Mac OS X, or Linux. The work flow is divided across four tabs to analyze software failure or penetration testing data, apply models, query model results, and assess model rankings based on goodness-of-fit measures. All four tabs provide plots or tables to display results in an intuitive format. Options to save plots and data for easy inclusion in reports are also provided.
Unlike software failure data used to assess reliability, vulnerability data collected from penetration testing or during operation enables the following inferences about software security:
* Prediction of the number of additional vulnerabilities that would be detected if the software were allowed to operate for a specific period of time.
* Testing time required to achieve a desired level of software security by exposing and patching vulnerabilities to prevent them from being exploited.
* Probability of exploit free operation for a specified period of time in a specified environment.
As noted in Section 3, the process of vulnerability discovery is similar to fault exposure in software reliability. Hence, the SFRAT tool is also suited to assess software security.
2.1 Tab 1: Select, analyze, and filter data
Tab 1 allows the user to upload their software failure or penetration testing data in either .xlsx or .csv format. SFRAT accepts either time, count, or inter-event time data. The tool automatically converts data provided in any one of these three formats to the other two formats so that all models can be applied to the data. This tab provides graphical and tabular views of the data in different forms such as cumulative failures/vulnerabilities, times between failures/vulnerabilities, and failure/vulnerability discovery intensity to help users to understand the data. This tab also performs trend tests including the Laplace trend test (Ascher et al., 1984) and running arithmetic average test to determine if the data exhibits reliability/security growth with a user specified level of confidence.
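The Laplace trend test reduces the data to a single statistic that can be compared against standard normal critical values. As an illustrative sketch (in Python, not SFRAT's actual R implementation), one common form of the test for grouped (count) data compares the weighted interval index of the discoveries against its expectation under a homogeneous process:

```python
import math

def laplace_trend(counts):
    """Laplace trend statistic for grouped (count) data.

    counts[i] = number of vulnerabilities discovered in interval i+1.
    Negative values suggest reliability/security growth; values below
    about -1.645 indicate growth at 95% one-sided confidence.
    """
    k = len(counts)
    n = sum(counts)
    # Weighted sum of 0-based interval indices, as in the grouped-data test.
    s = sum(i * c for i, c in enumerate(counts))
    num = s - (k - 1) / 2 * n
    den = math.sqrt((k * k - 1) / 12 * n)
    return num / den

# Deteriorating data: counts grow each interval, so the statistic is positive.
print(laplace_trend([1, 2, 4, 8, 16]) > 0)   # True
# Improving data: counts shrink each interval, so the statistic is negative.
print(laplace_trend([16, 8, 4, 2, 1]) < 0)   # True
```

A perfectly homogeneous series (equal counts in every interval) yields a statistic of zero, which separates the growth and deterioration regimes.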
2.2 Tab 2: Set up and apply models
Tab 2 applies software reliability growth models, including both failure rate and failure count models, to the data analyzed in Tab 1. The user may specify the number of future failures or vulnerabilities that they would like the models to predict. Failure rate models include the Jelinski-Moranda (Lyu, 1996) and Geometric (Goel et al., 1979) models, while failure count models include the Goel-Okumoto (Goel, 1985), Weibull (Yamada et al., 1986), and delayed S-shaped (Yamada et al., 1983) models. All or a subset of these models can be applied. Upon completion, model results can be viewed as cumulative failures, times between failures, and failure intensity. A reliability growth curve for the probability of failure free operation for a user specified period of time is also provided. In the context of security, a reliability growth curve can be interpreted as the probability of exploit free operation for a specified period of time in a specified environment. Tab 2 also allows prediction by extending the curves beyond the last data point observed.
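Under an NHPP, the probability of exploit free operation of length x after testing time t follows directly from the mean value function: R(x | t) = exp(-[m(t + x) - m(t)]). A minimal Python sketch (using a hypothetical exponential Goel-Okumoto MVF with made-up parameters, not values fitted by the tool):

```python
import math

def exploit_free_prob(mvf, t, x):
    """Probability of exploit-free operation for a mission of length x
    after testing time t, under an NHPP with mean value function mvf:
    R(x | t) = exp(-(m(t + x) - m(t))).
    """
    return math.exp(-(mvf(t + x) - mvf(t)))

# Hypothetical exponential (Goel-Okumoto) MVF m(t) = a(1 - e^{-bt}).
a, b = 100.0, 0.05
mvf = lambda t: a * (1.0 - math.exp(-b * t))

# Reliability improves with additional testing time t.
print(exploit_free_prob(mvf, 10.0, 1.0) < exploit_free_prob(mvf, 50.0, 1.0))  # True
```

This is the quantity plotted by the tool's reliability/security growth curve for a user specified mission duration x.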
2.3 Tab 3: Query model results
Tab 3 uses the model results obtained on Tab 2 to make detailed predictions about future failures. Tab 3 estimates the time required to observe a given number of additional failures or the number of failures that would be detected if the software were tested for a specified amount of additional time. This tab also computes the testing time required to achieve a target reliability if it has not already been achieved.
2.4 Tab 4: Evaluate models
Tab 4 computes goodness-of-fit measures such as the Akaike Information Criterion (AIC) (Akaike, 1974) and predictive sum of squares error (PSSE) (Fiondella et al., 2011) to rank the models so that the user can identify an appropriate model according to its ability to characterize the data well and predict the future accurately. This tab has options to perform goodness of fit assessment on all or a subset of the models applied in Tab 2 along with an option to specify the subset of the data that should be considered to compute PSSE.
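The two measures can be sketched briefly (assuming the standard definitions: AIC = 2k - 2 ln L, and PSSE as the sum of squared errors between the fitted MVF and the held-out portion of the data; this Python sketch is illustrative, not SFRAT's internal code):

```python
def aic(log_likelihood, num_params):
    """Akaike Information Criterion: 2k - 2 ln L. Lower is better."""
    return 2 * num_params - 2 * log_likelihood

def psse(predicted, observed):
    """Predictive sum of squares error: the model is fitted on the first
    part of the data set and the fitted MVF is then compared against the
    remaining (held-out) observed points."""
    return sum((p - o) ** 2 for p, o in zip(predicted, observed))

print(aic(-120.0, 2))                     # 244.0
print(psse([10.0, 12.0], [11.0, 12.5]))   # 1.25
```

Ranking by AIC rewards goodness of fit while penalizing extra parameters, whereas PSSE directly measures predictive accuracy on data not used for fitting.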
3. Vulnerability modeling
In the context of software, a vulnerability is a flaw that could be exploited by an attacker to compromise the system in a variety of ways, such as denial of service, degraded performance, or information theft. Vulnerabilities are often introduced by incomplete software requirements as well as software bugs, and hence it is very difficult to detect and fix all of them during software testing (Okamura et al., 2012). Fielding software without fixing all the bugs may create loopholes through which attackers can attack the system. When a vulnerability is detected in a system, it typically goes through seven states described in the vulnerability life cycle model (Arbaugh et al., 2000), namely birth, discovery, disclosure, correction, publicity, scripting, and death. Most quantitative modeling is based on event time data corresponding to discovery or disclosure, although some models have also considered severity according to the CVSS score. NHPP SRGMs characterize vulnerability data according to the following assumptions.
* Software contains a finite number of vulnerabilities to be discovered.
* The time to discover a vulnerability is random, and the discovery times of the individual vulnerabilities are mutually independent random variables.
Therefore, the number of vulnerabilities discovered by time t, denoted D(t), can be modeled as

\[ \Pr\{D(t) = n \mid m\} = \binom{m}{n} F(t)^{n} \bigl(1 - F(t)\bigr)^{m-n} \]

where m is the total number of undiscovered vulnerabilities at time t = 0 and F(t) is the cumulative distribution function of the vulnerability discovery process. Moreover, when the total number of undiscovered vulnerabilities follows a Poisson distribution with mean a, the probability mass function of D(t) is

\[ \Pr\{D(t) = n\} = \frac{\bigl(a F(t)\bigr)^{n}}{n!} e^{-a F(t)} \quad (1) \]
Equation (1) is the probability mass function of a non-homogeneous Poisson process with mean value function (MVF) m(t) = aF(t) and is therefore mathematically similar to NHPP-based software reliability growth models (Goel et al., 1979). Thus, by substituting different statistical distributions for F(t), we can obtain alternative models to characterize the vulnerability discovery process. Given the large number of existing NHPP-based SRGM, a sensible approach is to use software reliability growth models to quantitatively assess software security.
4. Software reliability growth modeling
This section provides an overview of non-homogeneous Poisson process software reliability growth models and describes parameter estimation techniques, namely maximum likelihood estimation (MLE) and the Newton-Raphson algorithm. Model interpretation and inferences enabled in the context of cybersecurity are also discussed.
4.1 NHPP software reliability growth models
The nonhomogeneous Poisson process is a stochastic process (Ross, 2003) that counts the number of events observed as a function of time. In the context of software reliability, the NHPP counts the number of faults detected by time t. In the context of cybersecurity, the NHPP counts the number of vulnerabilities detected by time t. This counting process is characterized by a mean value function m(t), which can assume a variety of functional forms. Several SRGM can be written in the form m(t) = a x F(t), where a denotes the vulnerabilities that would be detected with indefinite testing and F(t) is the cumulative distribution function of a continuous probability distribution.
For example, the MVF of the inflection S-shaped model (Ohba, 1984) is

\[ m(t) = \frac{a \left(1 - e^{-bt}\right)}{1 + c e^{-bt}} \quad (2) \]

where b is a constant vulnerability detection rate, a is the number of vulnerabilities, and c is the inflection parameter. The inflection parameter is defined in terms of the inflection rate r as

\[ c = \frac{1 - r}{r} \quad (3) \]
The inflection rate r is the ratio of the number of detectable vulnerabilities to the total number of vulnerabilities in the system, the remainder being undetectable due to masking or other causes. As r approaches 1.0, the inflection S-shaped model reduces to the exponential model (Goel, 1985), which has already been implemented in the SFRAT.
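This reduction can be checked numerically. The Python sketch below (illustrative only; SFRAT itself is implemented in R) evaluates Equations (2)-(3) and shows that with r = 1, i.e. c = 0, the ISS MVF coincides with the exponential model:

```python
import math

def iss_mvf(t, a, b, r):
    """Inflection S-shaped mean value function (Ohba, 1984):
    m(t) = a (1 - e^{-bt}) / (1 + c e^{-bt}), with c = (1 - r) / r.
    """
    c = (1.0 - r) / r
    return a * (1.0 - math.exp(-b * t)) / (1.0 + c * math.exp(-b * t))

def exponential_mvf(t, a, b):
    """Exponential (Goel-Okumoto) model: m(t) = a (1 - e^{-bt})."""
    return a * (1.0 - math.exp(-b * t))

# With r = 1 the inflection parameter c vanishes and the ISS model
# collapses to the exponential model.
print(abs(iss_mvf(5.0, 100.0, 0.3, 1.0) - exponential_mvf(5.0, 100.0, 0.3)) < 1e-12)  # True
```

Smaller values of r (larger c) delay the growth of m(t), producing the characteristic S-shape.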
4.2 Maximum likelihood estimation
Maximum likelihood estimation is a procedure to identify the numerical values of the parameters of a model such that the plot of the MVF closely matches the plot of the empirical data. Maximum likelihood estimation maximizes the likelihood function, the joint distribution of the vulnerability discovery data. Commonly, the log-likelihood function is maximized instead because the logarithm is monotonic, so the parameter values that maximize the log-likelihood also maximize the likelihood. The log-likelihood expression for a data set of discovery times t_1, t_2, ..., t_n is

\[ \ln L(\Theta) = \sum_{i=1}^{n} \ln \lambda(t_i) - m(t_n) \quad (4) \]

where \Theta is the vector of model parameters and \lambda(t_i) is the instantaneous discovery rate at time t_i, defined as

\[ \lambda(t) = \frac{d\, m(t)}{dt} \quad (5) \]

The log-likelihood to maximize is obtained by substituting a mean value function such as Equation (2) into Equation (4), where \lambda(t_i) is determined by Equation (5). After simplification with algebraic identities, a system of equations is obtained by computing the partial derivative with respect to each model parameter and equating these partial derivatives to zero. The general form of this system of equations is

\[ \frac{\partial \ln L(\Theta)}{\partial \theta_j} = 0, \qquad j = 1, 2, \ldots \quad (6) \]

which must then be solved numerically to identify the MLEs that best fit the data set.
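As a concrete sketch of solving Equation (6) numerically, consider the exponential (Goel-Okumoto) model already implemented in SFRAT, for which the likelihood equations simplify: the MLE of a has the closed form a = n / (1 - e^{-b t_n}), and b solves a single nonlinear equation. The Python code below is illustrative only (bisection stands in for Newton-Raphson, and the discovery times are made up):

```python
import math

def go_loglik_grad_b(b, times):
    """Partial derivative of the Goel-Okumoto log-likelihood with respect
    to b after substituting the closed-form MLE a = n / (1 - e^{-b t_n})."""
    n = len(times)
    tn = times[-1]
    e = math.exp(-b * tn)
    return n / b - n * tn * e / (1.0 - e) - sum(times)

def go_mle(times, lo=1e-6, hi=10.0, iters=200):
    """Solve the likelihood equation for b by bisection (a simple stand-in
    for Newton-Raphson), then recover a from its closed form."""
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if go_loglik_grad_b(lo, times) * go_loglik_grad_b(mid, times) <= 0:
            hi = mid
        else:
            lo = mid
    b = (lo + hi) / 2.0
    a = len(times) / (1.0 - math.exp(-b * times[-1]))
    return a, b

# Hypothetical discovery times exhibiting reliability growth.
a, b = go_mle([5.0, 12.0, 20.0, 30.0, 45.0, 65.0, 90.0])
print(a > 7.0, 0.0 < b < 1.0)  # True True
```

Note that a exceeds the number of observed events, consistent with its interpretation as the expected number of vulnerabilities detectable with indefinite testing.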
5. Illustrations
This section presents a discussion of the vulnerability data considered, analyzes the data using the SFRAT tool described in Section 2, and applies the Inflection S-shaped Software Reliability Growth Model according to the functionality provided by the SFRAT.
5.1 Input data
The vulnerability data set used in this study was compiled from vulnerability disclosure data published in the National Vulnerability Database (NVD, 2016), the US Government's repository of standards-based vulnerability management data maintained by the National Institute of Standards and Technology (NIST) and sponsored by the Department of Homeland Security (DHS). Figure 1 shows the data extracted from the NVD into a comma separated value (CSV) file. The events occurred between 1986 and 2012, and the data set contains a total of 50,109 entries.
Column 1 of Figure 1 shows all disclosures that have a published CVE (Common Vulnerabilities and Exposures) identifier in the catalog of known security threats. CVE identifiers follow the "CVE-Year-Unique four digit number" syntax. Column 2 is the publication date, while column 3 is determined by examining the reference URLs in the NVD to find the earliest mention of the vulnerability. Column 4 indicates the severity of the vulnerability according to the standardized platform-independent Common Vulnerability Scoring System (CVSS) (CVSS, 2015) score, whose range of 0.1-10 spans vulnerabilities of low to critical severity. Access complexity in column 5 is a metric that describes how difficult the vulnerability is to exploit, whereas column 6 lists the affected product such as a web server, operating system, or file sharing system. Column 7 lists the number of times the vulnerability was exploited.
For the illustrations, the vulnerability data was grouped into count data based on year, reducing to n = 27 data points. Figure 2 shows a portion of the data formatted for compatibility with the SFRAT.
In Figure 2, T represents the year, FC is the count of the number of vulnerabilities discovered, and CFC the cumulative number of vulnerabilities discovered. As noted in the previous section, SFRAT automatically converts the failure count data to discovery times and interdiscovery times data formats with the built in data conversion routines implemented in the tool.
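The conversion between formats can be sketched as follows. This is a simplified Python illustration, not SFRAT's own R routines; in particular, it makes the simplifying assumption that each interval's discoveries are placed at the end of that interval:

```python
def counts_to_formats(counts):
    """Convert failure-count (FC) data to the other two formats:
    cumulative counts (CFC) and (inter-)discovery times.

    Assumption: all discoveries in interval i are assigned time i+1,
    i.e. the end of the interval. SFRAT's exact placement may differ.
    """
    cfc, total = [], 0
    for c in counts:
        total += c
        cfc.append(total)
    # One (approximate) discovery time per event.
    times = [i + 1 for i, c in enumerate(counts) for _ in range(c)]
    # Inter-discovery times are successive differences of discovery times.
    inter = [t - p for t, p in zip(times, [0] + times[:-1])]
    return cfc, times, inter

cfc, times, inter = counts_to_formats([2, 0, 3])
print(cfc)    # [2, 2, 5]
print(times)  # [1, 1, 3, 3, 3]
print(inter)  # [1, 0, 2, 0, 0]
```

Having all three representations available is what allows every model in the tool, whether formulated on counts or on event times, to be applied to the same uploaded data set.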
5.2 Data view
Upon successful upload of the data file shown in Figure 2, the data can be viewed on Tab 1 in three different forms, including cumulative discoveries, times between discoveries, and discovery intensity.
Figure 3a shows the cumulative vulnerabilities discovered, while Figure 3b shows an alternative view of the data, reporting the times between vulnerability discovery.
Figure 3(a) indicates that there were relatively few vulnerabilities in the first 11 years (1986-1996), but in the subsequent years there was a rapid increase in the number of vulnerabilities discovered, with a slight decrease in recent years. One possible explanation for the small number of vulnerabilities recorded in earlier years is limited internet access, which could have limited attackers' opportunities to attempt to exploit systems. In subsequent years, however, software became more easily accessible through the internet. A larger number of software products and insufficient testing prior to release could also have contributed to systems being more prone to attack. For example, as more organizations migrated their business services and financial data to the web, the incentive to steal this data for monetary gain increased. In recent years, the number of vulnerabilities discovered began to decrease, suggesting that attempts to patch vulnerable software may have been successful in slowing the rate of vulnerability exploitation. However, the last year only recorded five of 12 months. Thus, it may be premature to declare that efforts to ensure security are indeed bearing fruit. The trends exhibited in the vulnerability detection data closely resemble an S-shaped curve, which can be characterized by an S-shaped software reliability growth model known as the inflection S-shaped model (Ohba, 1984).
Figure 3(b) shows the times between vulnerability discoveries, indicating large times between discoveries in the first five years when fewer vulnerabilities were detected. As more vulnerabilities were discovered, the time between discoveries decreased. A small but nearly imperceptible increase in the time between discoveries occurs toward the right end of the curve, suggesting that the software may have become slightly more secure.
Figure 4 shows the data in a third format, vulnerability discovery intensity.
Figure 4 agrees with the discussion associated with Figures 3a and 3b, indicating a gradually increasing number of vulnerabilities detected in the first 10 years, followed by a rapid increase and then a decrease in the last five years. This latter decrease suggests that security may be improving.
5.3 Trend tests to verify security growth
SFRAT implements the Laplace trend test and running arithmetic average to quantitatively assess if the data exhibits security growth. Security growth indicates the system is improving, while reliability deterioration is present when failures occur more frequently than previously observed. In the inflection S-shaped model, this deterioration occurs when the number of events per unit time begins to increase, while security growth can be observed toward the end when the rate of occurrence of events begins to decrease. Figures 5a and 5b show the Laplace trend test and running arithmetic average of the vulnerability data set respectively.
Figure 5a shows that the vulnerability data does not exhibit much security growth or deterioration in the first two years because very few vulnerabilities were discovered. However, in year three, fewer vulnerabilities were discovered and thus security improved slightly, which is represented by a decrease in the Laplace statistic. After this, the cumulative number of vulnerabilities discovered began to rise, leading to deterioration in security. The numbers on the y-axis are the critical values of a standard normal distribution. Therefore, to achieve 95% confidence in security growth, the curve would need to decrease below -1.645. This suggests that substantial improvement is needed to achieve security growth.
Figure 5b shows the running arithmetic average as well as several generalized averages. The lowermost plot corresponds to the running arithmetic average, whereas the three curves above it are fractional exponent transformations of the weight coefficients that more clearly illustrate the security growth trend in year 27, where the running average is increasing because the time between vulnerability discoveries is increasing gradually.
5.4 Set up and apply models
Among the five models implemented in SFRAT, no model fit the data well. Since visual inspection suggests that an S-shaped curve would be appropriate, we performed the calculations corresponding to the functionality present in SFRAT by applying the Inflection S-shaped model to the vulnerability data using Mathematica.
Figure 6 shows the observed vulnerability discovery data (solid step function) as well as the number of cumulative failures estimated by the ISS model (dotted line).
Figure 6 shows how the ISS model estimates match the observed data well, unlike the five models in the SFRAT which failed to capture the trend present in the data accurately. This suggests that it may be beneficial to integrate the ISS model into the tool to enable analysis of reliability and security data that are well characterized by an inflection S-shaped curve.
Figures 7a and 7b show the times between vulnerability discovery and vulnerability discovery intensity for the observed data as well as the fitted ISS model.
Figure 8a shows the security growth curve of the vulnerability discovery data as a function of the testing time and a mission time of one day (goal duration to go without a single vulnerability being discovered).
Figure 8a exhibits a decreasing trend in security due to the increase in the number of vulnerabilities discovered. There is a small increase in security toward the right end of the curve; however, it is imperceptible without magnification. Figure 8b shows a magnified version of the security curve in the last few years. This trend suggests that the probability of vulnerability free operation is increasing slowly, indicating that serious effort will be required to achieve an acceptable level of security.
5.5 Query model results
Some predictions described in Section 2.3 include:
* Time required to observe the next 10,000 vulnerabilities = 5.39 years.
* Number of vulnerabilities that will be observed over the next 10 years = 4350.19 vulnerabilities.
* Testing time required to achieve a target security of 90% for a time interval of one day = 7.79 years.
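Predictions of this kind amount to inverting the fitted mean value function, e.g. solving m(t_n + d) = m(t_n) + k for the additional testing time d needed to observe k more vulnerabilities. A hedged Python sketch using hypothetical ISS parameters (chosen for illustration; these are not the values fitted in this paper):

```python
import math

def iss_mvf(t, a, b, c):
    """Inflection S-shaped MVF: m(t) = a (1 - e^{-bt}) / (1 + c e^{-bt})."""
    return a * (1.0 - math.exp(-b * t)) / (1.0 + c * math.exp(-b * t))

def time_for_k_more(mvf, t_now, k, hi=1000.0, iters=200):
    """Bisection solve m(t_now + d) = m(t_now) + k for the extra time d.
    Returns None if even prolonged testing cannot expose k more events,
    since m(t) saturates at the finite total a.
    """
    target = mvf(t_now) + k
    if mvf(t_now + hi) < target:
        return None
    lo = 0.0
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if mvf(t_now + mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Hypothetical fitted parameters, for illustration only.
mvf = lambda t: iss_mvf(t, 70000.0, 0.2, 40.0)
d = time_for_k_more(mvf, 27.0, 10000.0)
print(d is not None and d > 0)  # True
```

The `None` branch reflects a property of finite-failure NHPP models: since m(t) approaches a as t grows, targets beyond the remaining expected vulnerabilities are unreachable regardless of additional testing.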
6. Conclusion
This paper illustrates the applicability of methods from software reliability growth modeling to assess the security of software from vulnerability discovery data according to non-homogeneous Poisson process software reliability growth models. The SFRAT tool was applied to vulnerability discovery data collected in the National Vulnerability Database between 1986 and 2012 to analyze the data and observe the trends. Since the vulnerability discovery trend exhibited in the data was closely characterized by the inflection S-shaped software reliability growth model, which is not yet implemented in the tool, we performed the calculations and analysis in the tool workflow with Mathematica to illustrate the inferences possible.
Future research will decompose the data further to search for additional trends in specific products and systems identified in the NVD. The ISS SRGM will also be incorporated into the tool to enable software reliability and security assessment with this model.
Acknowledgements
This work was supported by (i) the Naval Air Warfare Center (NAVAIR) under Award Number N00421-16-P-0521 and (ii) the National Science Foundation under Grant Number 1526128.
References
Akaike, H. (1974) "A new look at the statistical model identification." IEEE transactions on automatic control, Vol. 19, No. 6, pp. 716-723.
Alhazmi, O. and Malaiya, Y. (2005) "Modeling the vulnerability discovery process." IEEE International Symposium on Software Reliability Engineering, p. 10.
Alhazmi, O. and Malaiya, Y. (2005) "Quantitative vulnerability assessment of systems software," Proceedings of Annual Reliability and Maintainability Symposium, pp. 615-620.
Anderson, R. (2002) "Security in Opens versus Closed Systems-The Dance of Boltzmann, Coase and Moore," Open Source Software: Economics, Law and Policy, Toulouse, France, June.
Arbaugh, W., Fithen, W., and McHugh, J. (2000) "Windows of vulnerability: A case study analysis." Computer, Vol. 33, No. 12, pp. 52-59.
Ascher, H. and Feingold, H. (1984) "Repairable Systems Reliability: Modeling, Inference, Misconceptions and Their Causes", Marcel Dekker, Inc.
Browne, H., Arbaugh, W., McHugh, J., and Fithen, W. (2001) "A trend analysis of exploitations." Proceedings of IEEE Symposium on Security and Privacy, pp. 214-229.
Cavusoglu, H. and Zhang, J. (2006) "Economics of security patch management," Workshop on the Economics of Information Security.
Cavusoglu, H. and Zhang, J. (2008) "Security patch management: Share the burden or share the damage?" Management Science, Vol. 54, No. 4, pp. 657-670.
CERT, Vulnerability analysis, "http://www.cert.org/vulnerability-analysis/", Carnegie Mellon University, Last accessed 2016.
Common Vulnerability Scoring System (CVSS) V3 Development Update, "https://www.first.org/cvss", Last accessed 2016.
Fiondella, L. and Gokhale, S. (2011) "Software reliability model with bathtub shaped fault detection rate," Proceedings of Annual Reliability and Maintainability Symposium, pp. 1-6.
GAO Technical Report (1996) "Information security: Computer attacks at department of defense pose increasing risks," Technical Report GAO/AIMD-96-84, U.S. Government Accounting Office.
Goel, A. and Okumoto, K. (1979) "Time-dependent error-detection rate model for software reliability and other performance measures," IEEE Transactions on Reliability, Vol. R-28, No. 3, pp. 206-211.
Goel, A. (1985) "Software reliability models: Assumptions, limitations, and applicability," IEEE Transactions on Software Engineering, Vol. 11, No. 12, pp. 1411-1423.
Howard, J. (1997) "An Analysis Of Security Incidents On The Internet: 1989-1995", PhD thesis, Carnegie-Mellon University, April.
Lyu, M. (Ed.) (1996) Handbook of Software Reliability Engineering, New York, NY: McGraw-Hill.
Musa, J. and Okumoto, K. (1984) "A logarithmic Poisson execution time model for software reliability measurement," Proceedings of International Conference on Software Engineering, pp. 230-238.
Nagaraju, V., Katipally, K., Muri, R., Wandji, T., and Fiondella, L. (2016) "An Open Source Software Reliability Tool: A Guide for Users". Proceedings of International Conference on Reliability and Quality in Design, pp. 132-137, CA.
National Vulnerability Database, "https://nvd.nist.gov/download.cfm". Last accessed 2016.
Ohba, M. (1984) "Inflection S-shaped software reliability growth model," Stochastic Models in Reliability Theory, Springer Berlin Heidelberg, pp. 144-162, Accessible at "http://link.springer.com/chapter/10.1007%2F978-3-642-45587-2_10".
Okamura, H., Tokuzane, M., and Dohi, T. (2009) "Optimal Security Patch Release Timing Under Non-Homogeneous Vulnerability-Discovery Processes." Proceedings of International Symposium on Software Reliability Engineering, pp. 120-128.
Okamura, H., Tokuzane, M., and Dohi, T. (2012) "Security Evaluation for Software System with Vulnerability Life Cycle and User Profiles." Workshop on Dependable Transportation Systems/Recent Advances in Software Dependability, pp. 39-44.
Rescorla, E. (2005) "Is finding security holes a good idea?" IEEE Security & Privacy, Jan-Feb, pp. 14-19.
Ross, S. (2003) Introduction to Probability Models, Academic Press: New York, NY, 8th edition.
Shiny by RStudio, A web application framework for R, "http://shiny.rstudio.com/", Last accessed 2016.
The Comprehensive R Archive Network (CRAN), https://cran.r-project.org/, Last accessed 2016.
Yamada, S. and Osaki, S. (1983) "Reliability growth models for hardware and software systems based on nonhomogeneous Poisson process: A survey," Microelectronics and Reliability, Vol. 23, No. 1, pp. 91-112.
Yamada, S., Ohtera, H., and Narihisa, H. (1986) "Software reliability growth models with testing-effort," IEEE Transactions on Reliability, Vol. R-35, No. 1, Apr, pp. 19-23.
Vidhyashree Nagaraju1, Lance Fiondella1 and Thierry Wandji2
1Department of Electrical and Computer Engineering, University of Massachusetts Dartmouth, USA
2Naval Air Systems Command, Patuxent River, USA
vnagaraju@umassd.edu
lfiondella@umassd.edu
ketchiozo.wandji@navy.mil
Lance Fiondella (PhD) is an assistant professor in the Department of Electrical and Computer Engineering at the University of Massachusetts Dartmouth. He received his PhD (2012) in Computer Science and Engineering from the University of Connecticut. He is an elected member of the Administrative Committee of the IEEE Reliability Society (2015-2017).
Copyright Academic Conferences International Limited 2017