University of Minnesota
Software Engineering Center

Tim Menzies, Ph.D.

Biography: 
Tim Menzies (IEEE Fellow; Ph.D., UNSW, 1995) is a full Professor in Computer Science at North Carolina State University, where he teaches software engineering, automated software engineering, and foundations of software science. He directs the RAISE lab (real-world AI for SE), which explores SE, data mining, AI, search-based SE, and open-access science. He is the author of over 250 refereed publications and the editor of three recent books that summarize the state of the art in software analytics. In his career, he has been a lead researcher on projects for NSF, NIJ, DoD, NASA, and USDA, as well as joint research work with private companies. From 2002 to 2004, he was the software engineering research chair at NASA's Software Independent Verification and Validation Facility. Prof. Menzies is the co-founder of the PROMISE conference series devoted to reproducible experiments in software engineering (http://tiny.cc/seacraft). He is an associate editor of IEEE Transactions on Software Engineering, ACM Transactions on Software Engineering and Methodology, Empirical Software Engineering, the Automated Software Engineering Journal, the Big Data Journal, Information and Software Technology, IEEE Software, and the Software Quality Journal. In 2015, he served as co-chair of the ICSE'15 NIER track. He has also served as co-general chair of ICSME'16 and as co-PC-chair of SSBSE'17 and ASE'12.

For more, see his vita, his list of publications, or his home page.

Recent Publications

Sharing Experiments Using Open Source Software

When researchers want to repeat, improve or refute prior conclusions, it is useful to have a complete and operational description of prior experiments. If those descriptions are overly long or complex, then sharing their details may not be informative. OURMINE is a scripting environment for the development and deployment of data mining experiments. Using OURMINE, data mining novices can specify and execute intricate experiments, while researchers can publish their complete experimental rig alongside their conclusions. This is achievable because of OURMINE's succinctness.

Automatically Finding the Control Variables for Complex System Behavior

Testing large-scale systems is expensive in terms of both time and money. Running simulations early in the process is a proven method of finding the design faults likely to lead to critical system failures, but determining the exact cause of those errors is still time-consuming and requires access to a limited number of domain experts. It is desirable to find an automated method that explores the large number of combinations and is able to isolate likely fault points.

When to Use Data from Other Projects for Effort Estimation

Collecting the data required for quality prediction within a development team is time-consuming and expensive. An alternative is to make predictions using data imported from other projects, or even other companies. We show that imported data performs as well as local data when relevancy filtering is applied, and worse than local data without it. Therefore, we recommend the use of relevancy filtering whenever generating estimates using data from another project.
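The relevancy filtering described above can be sketched as a nearest-neighbor selection: keep only the imported rows that resemble the local project's rows. This is a minimal illustrative sketch, not the paper's actual rig; the function names, distance measure, and tiny dataset below are all assumptions made for demonstration.

```python
import math

def euclidean(a, b):
    # Straight-line distance between two equal-length feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def relevancy_filter(local_rows, imported_rows, k=2):
    """For each local row, keep its k nearest imported rows (by features).

    Returns the subset of imported_rows deemed 'relevant' to the local data.
    """
    kept = set()
    for lrow in local_rows:
        ranked = sorted(range(len(imported_rows)),
                        key=lambda i: euclidean(lrow, imported_rows[i]))
        kept.update(ranked[:k])
    return [imported_rows[i] for i in sorted(kept)]

# Tiny illustrative feature vectors (e.g., size and complexity metrics).
local = [[1.0, 2.0], [1.5, 2.5]]
imported = [[1.1, 2.1], [9.0, 9.0], [1.4, 2.4], [8.0, 7.0]]
print(relevancy_filter(local, imported, k=1))  # keeps only the two nearby rows
```

An estimator trained on the filtered subset then uses only cross-project rows that look like the local project, which is the intuition behind the recommendation above.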


Recent Presentations

SE for AI for SE

Much has been said about the value of AI for software engineering, but what about the other way around? What can software engineering offer AI? This talk argues that AI software is, after all, software: it must be built, validated, used by people, maintained, refactored, etc. As software engineers, we need to design AI software that offers at least the following services. The bad news is that our current AI software tools ignore many of the above considerations. The good news is that it is a relatively easy matter to refactor our AI software tools such that