University of Minnesota
Software Engineering Center


Mats Heimdahl

Computer Science and Engineering Department Head
Office Location: Kenneth H. Keller Hall, Room 6-201

M.S., Computer Science and Engineering, Royal Institute of Technology, Sweden, 1988.

Ph.D., Computer Science, University of California, Irvine, 1994.


Professor Mats Heimdahl specializes in software engineering and safety-critical systems. He is the director of the University of Minnesota Software Engineering Center (UMSEC).

Heimdahl is the recipient of the National Science Foundation's CAREER award, a McKnight Land-Grant Professorship, the University of Minnesota's McKnight Presidential Fellow award, and the University of Minnesota Award for Outstanding Contributions to Post-Baccalaureate, Graduate, and Professional Education.


Software is increasingly woven into our lives; it controls physical systems ranging from microwave ovens and watches to nuclear power plants, aircraft, and cars. In many of these applications, computer-related failures can have catastrophic effects.

My research group, the Critical Systems Research Group (CriSys), conducts research in software engineering, investigating methods and tools that help developers build software with predictable behavior that is free from defects.

Research in this area spans all aspects of system development ranging from concept formation and requirements specification, through design and implementation, to testing and maintenance. In particular, we are currently investigating model-based software development for critical systems.

Specifically, we are focusing on how to use various static verification techniques to assure that software requirements models possess desirable properties, how to correctly generate production code from software requirements models, how to validate models, and how to effectively use the models in the testing process.


Software engineering and safety-critical systems.

Recent Publications

Representation of Confidence in Assurance Cases using the Beta Distribution

Assurance cases are used to document an argument that a system, such as a critical software system, satisfies some desirable property (e.g., safety, security, or reliability). Demonstrating high confidence that the claims made in an assurance case can be trusted is crucial to the case's success. Researchers have proposed quantifying confidence as a Baconian probability: the ratio of eliminated concerns about the assurance case to the total number of identified concerns.
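The Baconian ratio described in the abstract can be sketched in a few lines. The sketch below is illustrative only (the function names and the specific Beta parameterization are assumptions, not the paper's formulation); it computes the ratio of eliminated to identified concerns and, as one common way to attach a distribution to such a proportion, the mean of Beta(n + 1, N − n + 1).

```python
# Hypothetical sketch of confidence quantification for an assurance case.
# Names and the Beta(n + 1, N - n + 1) choice are illustrative assumptions,
# not the paper's actual model.

def baconian_ratio(eliminated: int, identified: int) -> float:
    """Fraction of identified concerns that have been eliminated (n/N)."""
    if identified == 0:
        raise ValueError("no concerns identified")
    return eliminated / identified

def beta_mean(eliminated: int, identified: int) -> float:
    """Mean of Beta(eliminated + 1, identified - eliminated + 1)."""
    a = eliminated + 1
    b = identified - eliminated + 1
    return a / (a + b)

print(baconian_ratio(18, 20))  # 0.9
print(beta_mean(18, 20))       # 19/22 ≈ 0.864
```

The Beta-distribution view adds a mild smoothing effect: with few identified concerns, the mean stays further from 0 and 1, reflecting the remaining uncertainty.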

Executing Model-based Tests on Platform-specific Implementations

Model-based testing of embedded real-time systems is challenging because platform-specific details are often abstracted away to make the models amenable to various analyses. Testing an implementation to expose non-conformance to such a model requires reconciling the differences that arise from these abstractions. Because the systems are stateful, naive comparisons of model and system behaviors often fail, causing numerous false positives.
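The false-positive cascade mentioned above can be illustrated with a toy trace comparison. This is a sketch under assumed details (the traces and the "extra start-up step" are invented, not from the paper): an implementation emits one platform-specific step the model abstracts away, so a naive step-by-step comparison flags every subsequent step.

```python
# Illustrative only: naive comparison of a model trace against an
# implementation trace. The implementation's extra "init" step (a
# platform-specific detail the model abstracts away) shifts the whole
# trace, so a stateful step-by-step check mismatches everywhere after it.

model_trace = ["idle", "heating", "done"]
impl_trace = ["init", "idle", "heating", "done"]  # extra platform step

naive_mismatches = sum(m != i for m, i in zip(model_trace, impl_trace))
print(naive_mismatches)  # 3: every step after the offset "fails"

# Reconciling the abstraction (here, crudely filtering the platform step)
# removes the false positives.
abstracted = [s for s in impl_trace if s != "init"]
print(abstracted == model_trace)  # True
```

Real reconciliation is of course harder than filtering one known step, since timing and state abstractions interact; the point is only that conformance must be judged modulo the model's abstractions.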

Efficient Observability-based Test Generation by Dynamic Symbolic Execution

Structural coverage metrics have been widely used both to measure test suite adequacy and to generate test cases. In previous investigations, we found that the fault-finding effectiveness of tests satisfying structural coverage criteria is highly dependent on program syntax: even if the faulty code is exercised, its effect may not be observable at the output. To address these problems, observability-based coverage metrics have been defined.
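The masking problem the abstract describes can be shown with a tiny invented example (not from the paper): a test exercises the faulty line, satisfying structural coverage, yet a later operation masks the error so nothing wrong is observable at the output.

```python
# Illustrative example of fault masking: the faulty code is covered,
# but its effect never propagates to an observable output.

def faulty_step(x: int) -> int:
    return x + 2  # fault: should have been x + 1

def system(x: int) -> int:
    v = faulty_step(x)  # the faulty line is executed...
    return min(v, 10)   # ...but min() clips the value, masking the error

# With x = 50, the faulty line runs, yet the output is 10 with or
# without the fault, so the test cannot detect it. Observability-based
# metrics additionally require the corrupted value to reach an output.
print(system(50))  # 10
```

A test with a small input (e.g., x = 3) would both exercise the fault and observe its effect, which is exactly the distinction observability-based criteria are designed to enforce.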