University of Minnesota
Software Engineering Center


Sanjai Rayadurgam

Staff Member
Office Location: 6-202 Keller Hall

Sanjai Rayadurgam is a Research Project Specialist at the University of Minnesota Software Engineering Center. His research interests are in software testing, formal analysis, and requirements modeling, with a particular focus on safety-critical systems development, where he has significant industrial experience. He earned a B.Sc. in Mathematics from the University of Madras, Chennai, and, in Computer Science & Engineering, an M.E. from the Indian Institute of Science, Bangalore, and a Ph.D. from the University of Minnesota, Twin Cities. He is a member of IEEE and ACM.

Recent Publications

Domain Modeling for Development Process Simulation

Simulating agile processes prior to adoption can reduce the risk of enacting an ill-fitting process. Agent-based simulation is well suited to capture the individual decision-making valued in agile. Yet agile's lightweight nature makes simulation difficult, as agents must fill in gaps within the specified process. Deliberative agents can do this given a suitable planning domain model. However, no such model, nor guidance for creating one, currently exists.

Representation of Confidence in Assurance Cases using the Beta Distribution

Assurance cases are used to document an argument that a system (such as a critical software system) satisfies some desirable property, e.g., safety, security, or reliability. Demonstrating high confidence that the claims made in an assurance case can be trusted is crucial to the success of the case. Researchers have proposed quantifying confidence as a Baconian probability ratio of the concerns about the assurance case that have been eliminated to the total number of identified concerns.
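The idea of turning a Baconian eliminated-to-total ratio into a Beta distribution can be sketched as follows. This is a minimal illustration, not the paper's actual construction: the concern counts and the specific Beta parameterization (eliminated + 1, remaining + 1) are assumptions chosen for the example.

```python
# Hypothetical sketch: representing confidence in an assurance case as a
# Beta distribution over eliminated vs. remaining identified concerns.
# Parameterization Beta(eliminated + 1, remaining + 1) is an assumption
# made for illustration, not taken from the publication.

def beta_confidence(eliminated: int, remaining: int) -> float:
    """Mean of Beta(eliminated + 1, remaining + 1), used as a point
    estimate of confidence that the case's concerns are resolved."""
    a = eliminated + 1
    b = remaining + 1
    return a / (a + b)

# Baconian-style ratio: 8 of 10 identified concerns eliminated.
print(beta_confidence(8, 2))  # mean of Beta(9, 3) = 0.75
```

Unlike the bare ratio 8/10, the Beta form also carries a notion of weight of evidence: eliminating 80 of 100 concerns yields the same ratio but a much narrower distribution, and hence stronger grounds for confidence.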

Executing Model-based Tests on Platform-specific Implementations

Model-based testing of embedded real-time systems is challenging because platform-specific details are often abstracted away to make the models amenable to various analyses. Testing an implementation to expose non-conformance to such a model requires reconciling the differences arising from these abstractions. Due to stateful behavior, naive comparisons of model and system behaviors often fail, causing numerous false positives.