University of Minnesota
Software Engineering Center
Critical Systems Research Group

The Critical Systems Research Group's (CriSys) research interests are in the general area of software engineering, in particular software development for critical applications: applications where incorrect operation of the software could lead to loss of life, substantial material or environmental damage, or large monetary losses. The long-term goal of our research is a comprehensive framework for developing software for critical systems. Our work has focused on some of the most difficult and least understood aspects of software development: requirements specification and validation/verification.

Recent Publications

Representation of Confidence in Assurance Cases using the Beta Distribution

Assurance cases are used to document an argument that a system, such as a critical software system, satisfies some desirable property (e.g., safety, security, or reliability). Demonstrating high confidence that the claims made in an assurance case can be trusted is crucial to the success of the case. Researchers have proposed quantifying confidence as a Baconian probability: the ratio of eliminated concerns about the assurance case to the total number of identified concerns.
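The idea above can be sketched numerically. The mapping below (a Beta distribution with eliminated concerns feeding the alpha parameter, residual concerns the beta parameter, on top of a uniform Beta(1, 1) prior) is an illustrative assumption, not necessarily the paper's exact construction:

```python
# Sketch: representing Baconian confidence (eliminated / identified concerns)
# as a Beta distribution. The uniform-prior construction here is an assumption
# for illustration, not the paper's definitive method.

def beta_confidence(eliminated: int, identified: int):
    """Return (alpha, beta, mean) for a Beta model of argument confidence."""
    if eliminated < 0 or identified < eliminated:
        raise ValueError("need 0 <= eliminated <= identified")
    residual = identified - eliminated
    alpha = eliminated + 1          # eliminated concerns + prior pseudo-count
    beta = residual + 1             # residual concerns + prior pseudo-count
    mean = alpha / (alpha + beta)   # expected confidence in the claim
    return alpha, beta, mean

# Example: 8 of 10 identified concerns have been eliminated.
a, b, m = beta_confidence(8, 10)
# Baconian ratio 8/10; Beta(9, 3) with mean 0.75
```

Unlike the bare Baconian ratio, the Beta form also captures the weight of evidence: eliminating 80 of 100 concerns yields the same mean as 8 of 10, but a much narrower distribution.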

Executing Model-based Tests on Platform-specific Implementations

Model-based testing of embedded real-time systems is challenging because platform-specific details are often abstracted away to make the models amenable to various analyses. Testing an implementation to expose non-conformance to such a model requires reconciling differences arising from these abstractions. Due to stateful behavior, naive comparisons of model and system behaviors often fail, causing numerous false positives.

Automated Oracle Data Selection Support

The choice of test oracle (the artifact that determines whether an application under test executes correctly) can significantly impact the effectiveness of the testing process. However, despite the prevalence of tools that support test input selection, little work exists on supporting oracle creation. We propose a method of supporting test oracle creation that automatically selects the oracle data, the set of variables monitored during testing, for expected value test oracles.