University of Minnesota
Software Engineering Center

Tim Menzies, Ph.D.

Biography: 
Tim Menzies (IEEE Fellow; Ph.D., UNSW, 1995) is a full Professor in computer science at North Carolina State University, where he teaches software engineering, automated software engineering, and foundations of software science. He is the director of the RAISE lab (real-world AI for SE), which explores SE, data mining, AI, search-based SE, and open access science. He is the author of over 250 refereed publications and editor of three recent books that summarize the state of the art in software analytics. In his career, he has been a lead researcher on projects for NSF, NIJ, DoD, NASA, and USDA, as well as on joint research work with private companies. From 2002 to 2004, he was the software engineering research chair at NASA's Independent Verification and Validation Facility. Prof. Menzies is the co-founder of the PROMISE conference series, devoted to reproducible experiments in software engineering (http://tiny.cc/seacraft). He is an associate editor of IEEE Transactions on Software Engineering, ACM Transactions on Software Engineering and Methodology, Empirical Software Engineering, the Automated Software Engineering Journal, the Big Data Journal, Information and Software Technology, IEEE Software, and the Software Quality Journal. In 2015, he served as co-chair of the ICSE'15 NIER track. He has also served as co-general chair of ICSME'16 and co-PC-chair of SSBSE'17 and ASE'12.

For more, see his vita, his list of publications, or his home page.

Recent Publications

How to Build Repeatable Experiments

The mantra of the PROMISE series is "repeatable, improvable, maybe refutable" software engineering experiments. This community has successfully created a library of reusable software engineering data sets. The next challenge for the PROMISE community will be not only to share data but also to share experiments. Our experience with existing data mining environments is that these tools are not suitable for publishing or sharing repeatable experiments.
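
As a concrete illustration, consider what a fully repeatable experiment script might look like. The sketch below is a toy example, not from the paper: it assumes a PROMISE-style defect data set in CSV form (the file name jm1.csv and the label column defects are hypothetical) and uses pandas and scikit-learn, pinning every source of randomness so a rerun reproduces the result exactly.

    # Minimal sketch of a repeatable experiment: fixed seeds, a declared
    # data source, and one script that runs end to end. Assumes Python
    # with pandas and scikit-learn installed; the data set is hypothetical.
    import random

    import numpy as np
    import pandas as pd
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    SEED = 1  # fix every source of randomness so reruns give identical numbers
    random.seed(SEED)
    np.random.seed(SEED)

    data = pd.read_csv("jm1.csv")          # hypothetical PROMISE-style data set
    X = data.drop(columns=["defects"])     # "defects" is a hypothetical label column
    y = data["defects"]

    learner = DecisionTreeClassifier(random_state=SEED)
    scores = cross_val_score(learner, X, y, cv=10)
    print(f"10-fold accuracy: mean={scores.mean():.3f}, sd={scores.std():.3f}")

The point is that everything needed to rerun, improve, or refute the result lives in one script plus one versioned data file.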

On the Advantages of Approximate vs. Complete Verification: Bigger Models, Faster, Less Memory, Usually Accurate.

As software grows increasingly complex, verification becomes more and more challenging. Automatic verification by model checking has been effective in many domains, including computer hardware design, networking, security and telecommunications protocols, and automated control systems. Many real-world software models, however, are too large for the available tools. The difficulty of verifying large systems is fundamentally a search issue: the global state space representing all possible behaviors of a complex software system is exponential in size.
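
To see why the state space explodes, and why approximate random search can still be useful, consider the toy sketch below. It is an illustration written for this page, not the tool or model from the paper: a hypothetical system of N two-state components, checked first by exhaustive enumeration and then by memory-cheap random walks.

    # Toy illustration (not the paper's tool or model) of complete vs.
    # approximate verification. A system of N two-state components has
    # 2**N global states, so exhaustive search grows exponentially.
    import random
    from itertools import product

    N = 16                       # 2**16 = 65,536 global states
    BAD = tuple([1] * N)         # hypothetical error state we want to find

    def complete_check():
        """Enumerate every global state: exact, but exponential work."""
        found = False
        for state in product([0, 1], repeat=N):
            if state == BAD:
                found = True
        return found

    def approximate_check(runs=10_000, steps=200):
        """Random walks through the state space: little memory, scales to
        models too big to enumerate, but can miss errors ("usually
        accurate", never guaranteed)."""
        for _ in range(runs):
            state = [0] * N
            for _ in range(steps):
                state[random.randrange(N)] ^= 1   # flip one component
                if tuple(state) == BAD:
                    return True
        return False

    print("complete search found error:", complete_check())      # always True
    print("random search found error:  ", approximate_check())   # maybe True

For N = 16 the complete check is still feasible; double N and exhaustive enumeration must cover over four billion states, while the cost of each random walk is unchanged.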
