University of Minnesota
Software Engineering Center

Critical Systems Research Group

The Critical Systems Research Group’s (CriSys) research interests lie in the general area of software engineering, in particular software development for critical applications: applications where incorrect operation of the software could lead to loss of life, substantial material or environmental damage, or large monetary losses. The long-term goal of our research activities is a comprehensive framework for developing software for critical systems. Our work has focused on some of the most difficult and least understood aspects of software development: requirements specification and validation/verification.

Recent Publications

A Hierarchical Requirements Reference Model

Requirements reference models provide a conceptual framework for discussing and reasoning about system development artifacts such as requirements, assumptions, and designs. In practice, complex systems are naturally constructed in hierarchies in which design choices at one level of abstraction influence the requirements that flow down to the subsequent lower levels. Well-known reference models such as Parnas’ Four-Variable Model, Jackson’s World-Machine model, and the WRSPM model of Gunter et al. can be seen as capturing one view of this decomposition: the one in which the solution (software …

Improving the Accuracy of Oracle Verdicts Through Automated Model Steering

The oracle—a judge of the correctness of the system under test (SUT)—is a major component of the testing process. Specifying test oracles is challenging for some domains, such as real-time embedded systems, where small changes in timing or sensory input may cause large behavioral differences. Models of such systems, often built for analysis and simulation, are appealing for reuse as oracles. These models, however, typically represent an idealized system, abstracting away certain issues such as non-deterministic timing behavior and sensor noise.
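The tension the abstract describes can be illustrated with a minimal sketch: an oracle derived from an idealized model must tolerate small deviations, such as sensor noise, or it will wrongly reject correct SUT behavior. The function name, trace representation, and tolerance scheme below are illustrative assumptions, not the paper's method.

```python
# Illustrative sketch of a model-based test oracle. The names and the
# simple per-sample tolerance are assumptions for illustration only.

def oracle_verdict(model_trace, sut_trace, tolerance=0.05):
    """Compare a trace from an idealized model against the SUT's trace.

    A strict equality check would fail correct SUT runs that differ
    from the model only by sensor noise or timing jitter, so each
    sample is compared within a numeric tolerance instead.
    """
    if len(model_trace) != len(sut_trace):
        return "fail"  # the traces must cover the same test steps
    for expected, observed in zip(model_trace, sut_trace):
        if abs(expected - observed) > tolerance:
            return "fail"
    return "pass"
```

For example, `oracle_verdict([1.0, 2.0], [1.01, 2.03])` returns `"pass"` despite the small deviations, while `oracle_verdict([1.0, 2.0], [1.0, 2.5])` returns `"fail"`. Picking the tolerance is exactly the hard part: too tight and the idealized model produces spurious failures; too loose and real faults slip through.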

Reasoning about Confidence and Uncertainty in Assurance Cases: A Survey

Assurance cases are structured logical arguments, supported by evidence, that explain how systems, including software systems, satisfy desirable properties such as safety, security, or reliability. Confidence in both the logical reasoning and the underlying evidence must be considered carefully when evaluating an assurance case: the developers must have confidence in their case before the system is delivered, and the assurance case reviewer, such as a regulatory body, must have adequate confidence in the case before approving the system for use.