University of Minnesota
Software Engineering Center

Gregory Gay
Student/Research Assistant
Office Location: 
6-248 Keller Hall
Education: 
Ph.D. Computer Science, University of Minnesota, 2015.
Advisor: Dr. Mats Heimdahl.
Thesis title: Steering Model-Based Oracles to Admit Real Program Behaviors.

M.S. Computer Science, West Virginia University, 2010.
Advisor: Dr. Tim Menzies.
Thesis title: Robust Optimization of Non-Linear Requirements Models.

B.S. Computer Science, West Virginia University, 2008.
Biography: 

Greg is an assistant professor of Computer Science & Engineering at the University of South Carolina. He was previously a Ph.D. student and research assistant at the University of Minnesota, supported by an NSF Graduate Research Fellowship and working with the Critical Systems research group. He received his B.S. and M.S. in Computer Science from West Virginia University.

Greg has also interned at NASA's Ames Research Center and Independent Verification & Validation Center, and spent time as a visiting academic at the Chinese Academy of Sciences in Beijing.

Research: 

Greg's research is primarily in the areas of search-based software engineering and automated software testing and analysis, with an emphasis on the test oracle problem. His current research focuses on the construction of effective test oracles for real-time and safety-critical systems, including methods for selecting oracle data and making comparisons.
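To illustrate what a test oracle looks like in this setting, the following is a minimal sketch, not taken from Greg's published work: it treats a chosen subset of program variables as the oracle data and compares observed values against expected values from a reference model, issuing a pass/fail verdict. The variable names, tolerance, and example values are hypothetical.

```python
# Minimal sketch of an expected-value test oracle (illustrative only).
# Assumptions: the system under test and a reference model expose the same
# named output variables; "oracle data" is the subset chosen for comparison;
# a small tolerance absorbs floating-point noise.

ORACLE_DATA = ["altitude", "airspeed", "warning_flag"]  # hypothetical variables
TOLERANCE = 1e-6

def oracle_verdict(observed: dict, expected: dict) -> bool:
    """Return True (pass) if every selected variable matches its expectation."""
    for var in ORACLE_DATA:
        obs, exp = observed[var], expected[var]
        if isinstance(exp, float):
            if abs(obs - exp) > TOLERANCE:
                return False
        elif obs != exp:
            return False
    return True

# Example usage with hypothetical values:
observed = {"altitude": 10000.0, "airspeed": 251.2, "warning_flag": False}
expected = {"altitude": 10000.0, "airspeed": 251.2, "warning_flag": False}
print("PASS" if oracle_verdict(observed, expected) else "FAIL")
```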

His approach to research problems is data-centric, sitting at the intersection of search, optimization, data mining, and artificial intelligence. He strives to harness the information content of software development artifacts to improve the efficiency and quality of the testing process, and to automate tasks in order to lessen the burden on human testers.

His past research has largely focused on the application of search, optimization, and information retrieval techniques to various software engineering tasks, including model optimization, requirements engineering, effort estimation, defect detection, and the traceability between source code and defect reports.

Recent Publications

Implications of Ceiling Effects in Defect Predictors

Context: There are many methods that take static code features as input and output a predictor for faulty code modules. These data mining methods have hit a "performance ceiling," i.e., some inherent upper bound on the amount of information offered by, say, static code features when identifying modules that contain faults. Objective: We seek an explanation for this ceiling effect. Perhaps static code features have "limited information content," i.e., their information can be quickly and completely discovered by even simple learners.
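To make the setting concrete, the sketch below shows the kind of pipeline the abstract describes: a simple learner (here, Gaussian Naive Bayes from scikit-learn) trained on static code features to predict whether a module is fault-prone. The feature set and the tiny synthetic dataset are illustrative assumptions; the paper's studies use real defect datasets and a range of learners.

```python
# Sketch of a defect predictor built from static code features with a
# simple learner. The tiny synthetic dataset is illustrative only; studies
# in this line of work use datasets from the PROMISE repository.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

# Hypothetical static code features per module:
# [lines of code, cyclomatic complexity, Halstead volume]
X = np.array([
    [120, 10,  800.0],
    [ 30,  2,  150.0],
    [450, 35, 3200.0],
    [ 60,  4,  300.0],
    [200, 18, 1500.0],
    [ 15,  1,   80.0],
    [310, 25, 2400.0],
    [ 80,  6,  450.0],
])
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])  # 1 = module contained a fault

learner = GaussianNB()
scores = cross_val_score(learner, X, y, cv=4)  # 4-fold cross-validation
print("mean accuracy:", scores.mean())
```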

How to Build Repeatable Experiments

The mantra of the PROMISE series is "repeatable, improvable, maybe refutable" software engineering experiments. This community has successfully created a library of reusable software engineering data sets. The next challenge for the PROMISE community will be not only to share data, but also to share experiments. Our experience with existing data mining environments is that these tools are not suitable for publishing or sharing repeatable experiments.
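One way to read "share experiments, not just data" is that an entire study, from data loading to the final statistics, should be a single re-runnable script rather than a sequence of manual steps in a GUI tool. The sketch below illustrates that idea under stated assumptions: the synthetic data stands in for a shared PROMISE-style defect dataset, and the learner and fold count are arbitrary choices for illustration.

```python
# Illustrative sketch of a repeatable, script-driven experiment: fixed seeds,
# an explicit data-loading step, and one command that re-runs the whole study.
# The synthetic data below stands in for a shared PROMISE-style dataset.
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.tree import DecisionTreeClassifier

SEED = 1  # fixed seed: repeated runs produce identical splits and results

# Stand-in for loading a shared defect dataset (e.g., a PROMISE CSV).
X, y = make_classification(n_samples=200, n_features=10, random_state=SEED)

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=SEED)
learner = DecisionTreeClassifier(random_state=SEED)
scores = cross_val_score(learner, X, y, cv=cv)
print(f"10-fold mean accuracy: {scores.mean():.3f}")
```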
