Summary: A Bayesian reliability growth model is presented that includes features designed to capture particular properties of the growth in reliability of an item of computer software (a program). The model treats the situation where the program is sufficiently complete to work for continuous periods of time between failures, and gives a repair rule for the action of the programmer at such failures. Analysis is based entirely upon the lengths of the periods of working between repairs and failures, and does not attempt to take account of the internal structure of the program. Methods of inference about the parameters of the model are discussed.
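The setting described above, in which inference uses only the lengths of working periods between failures, can be illustrated with a hedged sketch. This is not the paper's model: the multiplicative improvement factor, the constant per-period failure rate, and all numbers below are illustrative assumptions only.

```python
import random
import statistics

random.seed(42)

# Hypothetical sketch (not the paper's model): after each repair the
# failure rate drops by a fixed factor, so working periods tend to grow.
rate = 1.0          # assumed initial failure rate (failures per unit time)
improvement = 0.8   # assumed multiplicative improvement per repair
periods = []
for _ in range(20):
    periods.append(random.expovariate(rate))  # length of next working period
    rate *= improvement                       # repair improves the program

# As in the abstract, inference uses only the observed period lengths:
# e.g. the mean of the most recent periods estimates the current MTTF.
current_mttf = statistics.mean(periods[-5:])
print(f"observed periods: {len(periods)}, recent MTTF estimate: {current_mttf:.2f}")
```

The point of the sketch is only that the data available to the analyst is the sequence of inter-failure working times, nothing about the program's internals.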
This is the unspecified version of the paper; it may differ from the final published version.

Abstract: Design diversity has been used for many years now as a means of achieving a degree of fault tolerance in software-based systems. Whilst there is clear evidence that the approach can be expected to deliver some increase in reliability compared with a single version, there is no agreement about the extent of this. More importantly, it remains difficult to evaluate exactly how reliable a particular diverse fault-tolerant system is. This difficulty arises because assumptions of independence of failures between different versions have been shown not to be tenable: assessment of the actual level of dependence present is therefore needed, and this is hard. In this tutorial we survey the modelling issues here, with an emphasis upon the impact these have upon the problem of assessing the reliability of fault-tolerant systems. The intended audience is one of designers, assessors and project managers with only a basic knowledge of probability, as well as reliability experts without detailed knowledge of software, who seek an introduction to the probabilistic issues in decisions about design diversity.
Abstract: Modern society depends on computers for a number of critical tasks in which failure can have very high costs. As a consequence, high levels of dependability (reliability, safety, etc.) are required from such computers, including their software. Whenever a quantitative approach to risk is adopted, these requirements must be stated in quantitative terms, and a rigorous demonstration of their being attained is necessary. For software used in the most critical roles, such demonstrations are not usually supplied. The fact is that the dependability requirements often lie near the limit of the current state of the art, or beyond, in terms not only of the ability to satisfy them, but also, and more often, of the ability to demonstrate that they are satisfied in the individual operational products (validation). We discuss reasons why such demonstrations cannot usually be provided with the means available: reliability growth models, testing with stable reliability, structural dependability modelling, as well as more informal arguments based on good engineering practice. We state some rigorous arguments about the limits of what can be validated with each of such means. Combining evidence from these different sources would seem to raise the levels that can be validated; yet this improvement is not such as to solve the problem. It appears that engineering practice must take into account the fact that no solution exists, at present, for the validation of ultra-high dependability in systems relying on complex software.
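One of the rigorous limits alluded to above is a standard result about testing with stable reliability: under the simplifying assumption of independent demands, demonstrating a probability of failure on demand (pfd) below a bound p at confidence 1 − α requires roughly −ln(α)/p failure-free demands. The sketch below computes this; the numbers shown are illustrative, not from the paper.

```python
import math

def demands_needed(pfd_bound: float, confidence: float) -> int:
    """Failure-free demands needed so that, were the true pfd at the bound,
    observing zero failures would have probability at most 1 - confidence.
    Solves (1 - pfd_bound)**n <= 1 - confidence for n."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - pfd_bound))

for bound in (1e-3, 1e-4, 1e-9):
    n = demands_needed(bound, 0.99)
    print(f"pfd < {bound:g} at 99% confidence: {n:,} failure-free demands")
```

The ultra-high case (pfd below 10^-9) requires on the order of billions of failure-free demands, which is why testing alone cannot validate such requirements.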
Ideally, a measure of the security of a system should capture quantitatively the intuitive notion of 'the ability of the system to resist attack'. That is, it should be operational, reflecting the degree to which the system can be expected to remain free of security breaches under particular conditions of operation (including attack). Instead, current security levels at best merely reflect the extensiveness of safeguards introduced during the design and development of a system. Whilst we might expect a system developed to a higher level than another to exhibit 'more secure behaviour' in operation, this cannot be guaranteed; more particularly, we cannot infer what the actual security behaviour will be from knowledge of such a level. In the paper we discuss similarities between reliability and security with the intention of working towards measures of 'operational security' similar to those that we have for reliability of systems. Very informally, these measures could involve expressions such as the rate of occurrence of security breaches (cf. the rate of occurrence of failures in reliability), or the probability that a specified 'mission' can be accomplished without a security breach (cf. the reliability function). This new approach is based on the analogy between system failure and security breach. A number of other analogies to support this view are introduced. We examine this duality critically, and have identified a number of important open questions that need to be answered before this quantitative approach can be taken further. The work described here is therefore somewhat tentative, and one of our major intentions is to invite discussion about the plausibility and feasibility of this new approach.
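The proposed analogy with the reliability function can be made concrete in a hedged sketch. Assuming, purely for illustration, a constant rate of occurrence of breaches in a fixed attack environment, the breach-free "mission survival" probability takes the same exponential form as the classical reliability function; the rate value below is hypothetical.

```python
import math

def mission_survival(breach_rate: float, mission_time: float) -> float:
    """Probability of completing a mission of the given length with no
    security breach, assuming (as a simplification) a constant rate of
    occurrence of breaches -- the analogue of the reliability function
    R(t) = exp(-lambda * t)."""
    return math.exp(-breach_rate * mission_time)

# Hypothetical figure: 0.01 breaches/hour under a given attack environment.
p = mission_survival(0.01, 24.0)
print(f"P(no breach over 24h) = {p:.3f}")
```

Whether a constant breach rate is ever a tenable assumption is precisely one of the open questions the abstract raises; the sketch only shows what an operational measure of this kind would look like.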
Reasoning about the Reliability of Diverse Two-Channel Systems in which One Channel is "Possibly Perfect"

Bev Littlewood, Centre for Software Reliability, City University, London EC1V 0HB, UK
John Rushby, Computer Science Laboratory, SRI International, Menlo Park CA 94025, USA
April 26, 2011

Abstract: This paper refines and extends an earlier one by the first author [17]. It considers the problem of reasoning about the reliability of fault-tolerant systems with two "channels" (i.e., components) of which one, A, because it is conventionally engineered and presumed to contain faults, supports only a claim of reliability, while the other, B, by virtue of extreme simplicity and extensive analysis, supports a plausible claim of "perfection." We begin with the case where either channel can bring the system to a safe state. The reasoning about system probability of failure on demand (pfd) is divided into two steps. The first concerns aleatory uncertainty about (i) whether channel A will fail on a randomly selected demand and (ii) whether channel B is imperfect. It is shown that, conditional upon knowing p_A (the probability that A fails on a randomly selected demand) and p_B (the probability that channel B is imperfect), a conservative bound on the probability that the system fails on a randomly selected demand is simply p_A × p_B. That is, there is conditional independence between the events "A fails" and "B is imperfect."
The second step of the reasoning involves epistemic uncertainty, represented by assessors' beliefs about the distribution of (p_A, p_B), and it is here that dependence may arise. However, we show that under quite plausible assumptions, a conservative bound on the system pfd can be constructed from point estimates for just three parameters. We discuss the feasibility of establishing credible estimates for these parameters. We extend our analysis from faults of omission to those of commission, and then combine these to yield an analysis for monitored architectures of a kind proposed for aircraft.

Background: This paper is about the assessment of dependability for fault-tolerant software-based systems that employ two design-diverse channels. This type of system architecture is mainly used to provide protection against design faults in safety-critical applications. The paper clarifies and greatly extends a note by one of the authors from a few years ago on a specific instance of the topic [17]. The use of intellectual diversity to improve the dependability of processes is ubiquitous in human activ...
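The two-step structure of the argument can be sketched in code. The aleatory step gives the conditional bound p_A × p_B; the epistemic step then averages that bound over the assessors' joint beliefs about (p_A, p_B), which may be dependent. The joint distribution below is entirely hypothetical, chosen only to show that dependence makes E[p_A × p_B] differ from E[p_A] × E[p_B]; it is not the paper's three-parameter construction.

```python
import random

random.seed(1)

# Step 1 (aleatory): conditional on (pA, pB), a conservative bound on the
# system pfd is simply pA * pB -- "A fails" and "B is imperfect" are
# conditionally independent.
def system_pfd_bound(pA: float, pB: float) -> float:
    return pA * pB

# Step 2 (epistemic): assessors' beliefs about (pA, pB) may be dependent.
# Hypothetical joint distribution, for illustration only: a shared
# "quality" factor makes pessimism about A correlate with pessimism about B.
def sample_beliefs():
    quality = random.random()                  # shared epistemic factor
    pA = random.uniform(0, 1e-3) * (2 - quality)
    pB = random.uniform(0, 1e-1) * (2 - quality)
    return pA, pB

samples = [system_pfd_bound(*sample_beliefs()) for _ in range(100_000)]
bound = sum(samples) / len(samples)            # E[pA * pB], not E[pA] * E[pB]
print(f"conservative system pfd bound: {bound:.2e}")
```

Because the shared factor induces positive dependence, the Monte Carlo estimate of E[p_A × p_B] exceeds the product of the marginal means, which is exactly why epistemic dependence must be assessed rather than assumed away.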
In recent work we have argued for a formal treatment of confidence about the claims made in dependability cases for software-based systems. The key idea underlying this work is 'the inevitability of uncertainty': it is rarely possible to assert that a claim about safety or reliability is true with certainty. Much of this uncertainty is epistemic in nature, so it seems inevitable that expert judgment will continue to play an important role in dependability cases. Here we consider a simple case where an expert makes a claim about the probability of failure on demand (pfd) of a sub-system of a wider system, and is able to express his confidence about that claim probabilistically. An important, but difficult, problem then is how such sub-system (claim, confidence) pairs can be propagated through a dependability case for a wider system, of which the sub-systems are components. An informal way forward is to justify, at high confidence, a strong claim, and then conservatively only claim something much weaker: "I'm 99% confident that the pfd is less than 10^-5, so it's reasonable to be 100% confident that it is less than 10^-3." These conservative pfds of sub-systems can then be propagated simply through the dependability case of the wider system. In this paper we provide formal support for such reasoning.
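A crude worst-case reading of a (claim, confidence) pair can be sketched as follows. This is not the paper's formal result, only the naive bound one gets by assuming the pfd is 1 whenever the claim fails; the contrast with the informal 10^-3 figure shows why stronger formal support is needed.

```python
def conservative_claim(pfd_bound: float, confidence: float) -> float:
    """Worst-case expected pfd implied by a (claim, confidence) pair:
    with probability `confidence` the pfd is below `pfd_bound`, and in
    the worst case it is 1 otherwise.  A hedged sketch of the informal
    rule, not the paper's formal treatment."""
    return confidence * pfd_bound + (1 - confidence) * 1.0

# "I'm 99% confident that the pfd is less than 1e-5":
weaker = conservative_claim(1e-5, 0.99)
print(f"crude certain claim implied by the pair: pfd <= {weaker:.2e}")
# The crude bound is about 1e-2, far weaker than the informal 1e-3 claim,
# so the informal rule needs the kind of formal support the paper supplies.
```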